Bots: How to use the robots.txt file
The robots.txt file is a way for you to control which search bots (like Google, Yahoo, and Bing) have access to your site. Please note that not all bots (email harvesters, malware bots, etc.) will respect the rules placed in this file.
To use this, create a file called "robots.txt" in the root web path of your domain (normally public_html).
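If you are scripting your deployment rather than using a file manager, the file can be written programmatically. A minimal sketch in Python, assuming your document root is a local directory named public_html (substitute your actual web root):

```python
from pathlib import Path

# Assumption: the web root is a directory called "public_html"
# relative to where this script runs. Adjust as needed.
docroot = Path("public_html")
docroot.mkdir(exist_ok=True)

# Block all bots from the /private/ folder (example 2 below).
rules = "User-agent: *\nDisallow: /private/\n"
(docroot / "robots.txt").write_text(rules)
```

The file must live at the top level of the web root; bots only request /robots.txt, never robots.txt in a subfolder.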
Here are a few basic examples of what you can do with this file.
1) This will deny all bots from crawling your website:

User-agent: *
Disallow: /
2) This will deny all bots from accessing the web folder called private:

User-agent: *
Disallow: /private/
3) This will deny all bots from accessing the web folders private and images:

User-agent: *
Disallow: /images/
Disallow: /private/
4) This will deny all bots from accessing the page nobots.html in your web root:

User-agent: *
Disallow: /nobots.html
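You can sanity-check rules like these before uploading them. A quick sketch using Python's standard urllib.robotparser, which parses a robots.txt body and answers whether a given bot may fetch a path (the rules shown mirror example 2 above):

```python
from urllib import robotparser

# The same rules as example 2: block every bot from /private/.
rules = """
User-agent: *
Disallow: /private/
""".splitlines()

rp = robotparser.RobotFileParser()
rp.parse(rules)

print(rp.can_fetch("*", "/private/secret.html"))  # False
print(rp.can_fetch("*", "/index.html"))           # True
```

This only tells you how a well-behaved bot would interpret the file; as noted above, misbehaving bots ignore robots.txt entirely.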
For more examples of what you can do with the robots.txt file, please see this Wiki Page.