What is robots.txt and how do you use it?
The simplest robots.txt file uses two rules:
User-Agent: the robot the following rule applies to
Disallow: the pages you want to block
These two lines together are considered a single entry in the file. You can include as many entries as you want, and a single entry can contain multiple Disallow lines and multiple User-agent lines.
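For example, an entry with more than one User-agent line and several Disallow lines might look like this (the paths here are placeholders, not directories from the original article):

```
User-agent: *
User-agent: Googlebot
Disallow: /cgi-bin/
Disallow: /tmp/
```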
What should be listed on the User-Agent line?
A user-agent is a specific search engine robot. The Web Robots Database lists many common bots. You can set an entry to apply to a specific bot (by listing its name) or to all bots (by listing an asterisk). An entry that applies to all bots begins like this:

User-agent: *
Google uses several different bots (user-agents). Its bot for web search is Googlebot. Other Google bots such as Googlebot-Mobile and Googlebot-Image follow the rules you set up for Googlebot, but you can also set up additional rules for those specific bots.
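For instance, an entry that keeps only Googlebot-Image out of a directory might look like this (the directory name is a placeholder):

```
User-agent: Googlebot-Image
Disallow: /images/
```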
What should be listed on the Disallow line?
The Disallow line lists the pages you want to block. You can list a specific URL or a pattern. The value should begin with a forward slash (/).
To block the entire site, use a forward slash.
To block a directory and everything in it, follow the directory name with a forward slash.
To block a page, list the page.
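The three cases above might look like this in a robots.txt file (the directory name is a placeholder; the page name is the one used in the case-sensitivity example below):

```
# Block the entire site:
Disallow: /

# Block a directory and everything in it:
Disallow: /junk-directory/

# Block a single page:
Disallow: /private_file.html
```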
URLs are case-sensitive. For instance, Disallow: /private_file.html would block http://www.example.com/private_file.html, but would allow http://www.example.com/Private_File.html.
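You can check this case-sensitive behavior yourself with Python's standard-library robots.txt parser; the sketch below feeds it rules equivalent to that example:

```python
from urllib.robotparser import RobotFileParser

# Build a parser from rules equivalent to the example above.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private_file.html",
])

# The exact path in the Disallow line is blocked...
print(rp.can_fetch("Googlebot", "http://www.example.com/private_file.html"))  # False
# ...but a differently-cased path is still allowed.
print(rp.can_fetch("Googlebot", "http://www.example.com/Private_File.html"))  # True
```

The same parser is what you would use in a polite crawler: call can_fetch() before requesting each URL.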
Example of a robots.txt file (lines beginning with # are comments; a file containing only comments, like this one, blocks nothing):
# robots.txt file for http://www.templatesetc.com/
# 3/5/2007 12:23
# end of file
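Putting the pieces together, a more complete file might look like this (the blocked paths and the site URL in the comment are placeholders):

```
# robots.txt file for http://www.example.com/

User-agent: *
Disallow: /cgi-bin/
Disallow: /private_file.html

User-agent: Googlebot-Image
Disallow: /images/
```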