Shawn Hogan of DigitalPoint wrote a blog entry titled Google Not Interpreting robots.txt Consistently. He describes how he noticed that some of his pages were being crawled by GoogleBot, even though his robots.txt file specifically blocked it. So he emailed Google, and they actually replied with the following message:
While we normally don't review individual sites, we did examine your robots.txt file. Please be advised that it appears your Googlebot entry in your robots.txt file is overriding your generic User-Agent listing. We suggest you alter your robots.txt file by duplicating the forbidden paths under your Googlebot entry:
User-agent: *
Disallow: /tools/suggestion/?
Disallow: /search.php
Disallow: /go.php
Disallow: /~shawn/scripts/
Disallow: /ads/
User-agent: Googlebot
Disallow: /~shawn/ebay_
Disallow: /tools/suggestion/?
Disallow: /search.php
Disallow: /go.php
Disallow: /~shawn/scripts/
Disallow: /ads/
Once you've altered your robots.txt file, Google will find it automatically after we next crawl your site.
Fine, so Shawn can easily do that. It is not a major deal, just a bug Google knows about in how it interprets robots.txt files. But what Shawn points out is that the Google Sitemaps robots.txt validator showed that his previous robots.txt file:
User-agent: *
Disallow: /tools/suggestion/?
Disallow: /search.php
Disallow: /go.php
Disallow: /~shawn/scripts/
Disallow: /ads/
User-agent: Googlebot
Disallow: /~shawn/ebay_
was validated as blocking Googlebot from the /ads/ directory. The crawler and the validator are not consistent, and obviously they should be.
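For illustration, here is a minimal sketch using Python's standard-library urllib.robotparser, which applies the same "most specific group wins" rule Google describes: a matching Googlebot group replaces, rather than extends, the generic * group. The paths are from Shawn's original file; the "OtherBot" name is just a hypothetical crawler with no group of its own.
# Sketch: how a parser that applies only the most specific matching
# group reads Shawn's original robots.txt.
from urllib.robotparser import RobotFileParser
original = """User-agent: *
Disallow: /tools/suggestion/?
Disallow: /search.php
Disallow: /go.php
Disallow: /~shawn/scripts/
Disallow: /ads/
User-agent: Googlebot
Disallow: /~shawn/ebay_
"""
parser = RobotFileParser()
parser.parse(original.splitlines())
# Googlebot matches its own group, which never mentions /ads/,
# so /ads/ is reported as crawlable for it.
print(parser.can_fetch("Googlebot", "/ads/"))  # True -> allowed
# A crawler without its own group falls back to the "*" group,
# where /ads/ is disallowed.
print(parser.can_fetch("OtherBot", "/ads/"))   # False -> blocked
That matches what Googlebot actually did on Shawn's site, but not what the Sitemaps validator reported.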
Forum discussion at DigitalPoint Forums.