Robots.txt is a simple text file that sits in the root directory of your site. It tells “robots” (such as search engine spiders) which pages to crawl on your site and which pages to ignore.
While not essential, the robots.txt file gives you a lot of control over how Google and other search engines see your site. Better SEO is also key to getting organic traffic from search engines, and one easy step towards better SEO is adding a robots.txt file to your blog.
What Is a Robots.txt File?
In practice, it means you can restrict any web page on your blog from web crawlers so that it doesn’t get indexed in search engines, such as your blog’s labels page, your demo page, or any other pages that are not important enough to be indexed.
Always remember that search crawlers scan the robots.txt file before crawling any web page. It tells search engine crawlers which pages to crawl and which not to. In Blogger (Blogspot), we have the option to customize the robots.txt file according to our needs.
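To make this concrete, a typical customized robots.txt for a Blogspot blog might look like the sketch below. The blog URL and the demo page path are placeholders, not values from any real blog:

```
# Rules for all crawlers
User-agent: *
Disallow: /p/demo-page.html
Allow: /

# Help crawlers find your pages
Sitemap: https://yourblog.blogspot.com/sitemap.xml
```

Each `User-agent` group applies its `Disallow`/`Allow` rules to the named crawler, and `*` matches every bot that honors robots.txt.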
Uses of Robots.txt
Robots.txt is not an essential document for a website. Your site can rank and grow perfectly well without this file. However, using a robots.txt file does offer some benefits:
Discourage bots from crawling private folders
Although not foolproof, disallowing bots from crawling private folders will make their contents much harder to index, at least by legitimate bots (such as search engine spiders).
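You can check how a well-behaved crawler would interpret such a rule using Python’s standard-library `urllib.robotparser`. The `/private/` folder here is a hypothetical example:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules blocking a private folder for all bots
rules = """
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A compliant crawler must skip anything under /private/
print(parser.can_fetch("*", "https://example.com/private/notes.html"))  # False
# Everything else remains crawlable
print(parser.can_fetch("*", "https://example.com/blog/post.html"))      # True
```

Note that this only restrains bots that choose to obey robots.txt; it is not an access control mechanism, so truly private content should be protected server-side.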
Control resource usage
Every time a bot crawls your site, it drains your bandwidth and server resources – resources that would be better spent on real visitors. For sites with a lot of content, this can escalate costs and give real visitors a poor experience. You can use Robots.txt to block access to scripts, unimportant images, etc. to conserve resources.
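For instance, you could block crawlers from resource-heavy folders with rules like these (the folder names are hypothetical examples, not paths from any particular site):

```
User-agent: *
Disallow: /scripts/
Disallow: /temp-images/
```

Be careful not to block scripts or stylesheets that a page needs to render, since Google uses them to understand the page.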
Prioritize important pages
You want search engine spiders to crawl the important pages on your site (like content pages), not waste resources digging through useless pages (such as results from search queries). By blocking off such useless pages, you can prioritize which pages bots focus on.
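Blogger itself applies this idea: its default robots.txt typically blocks the `/search` result pages (which also cover label pages) while allowing everything else:

```
User-agent: *
Disallow: /search
Allow: /
```

This keeps bots focused on your actual posts and pages instead of endlessly re-crawling auto-generated search and label listings.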