Why is noindex more effective than robots.txt?
-
The post http://www.seomoz.org/blog/restricting-robot-access-for-improved-seo mentions that the noindex meta tag is more effective than robots.txt for keeping URLs out of the index. Why is this?
-
A Disallow rule in robots.txt prevents bots from crawling the page, but it does not prevent the URL from appearing on SERPs. If a disallowed page has a lot of links pointing to it, Google can still list it, and since it cannot crawl the page, it derives the title from external signals such as anchor text, which often produces a strange-looking result. I've seen this on a few of my own pages.
If you add a meta noindex tag instead, Google will actively remove the page from its search results the next time it re-crawls it. Note that this only works if the page is not also blocked in robots.txt, because the bot has to crawl the page to see the tag.
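To make the difference concrete, here is a minimal sketch of each approach (the /private/ path is just an example):

```
# robots.txt -- tells compliant bots not to crawl the path,
# but the URL itself can still show up on SERPs if it is linked to
User-agent: *
Disallow: /private/
```

```html
<!-- meta noindex, placed in the page's <head> -- the page stays
     crawlable, and is dropped from the index on the next re-crawl.
     Do not combine this with a robots.txt block: if the bot is
     blocked, it never sees this tag. -->
<meta name="robots" content="noindex">
```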
Here is one Webmaster Central thread I found about it.
-
Good answer. We have also seen that bots sometimes follow a link directly to your site without first fetching the robots.txt file, and will therefore index the first page they land on.
Matt Cutts has said before that the only 100% fail-safe way of preventing search engines from indexing something is to password-protect it.
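For reference, password protection can be sketched with HTTP Basic Auth; this example assumes an Apache server, and the paths are placeholders:

```
# .htaccess -- HTTP Basic Auth (Apache example; paths are assumptions).
# Bots can't fetch the content at all, so there is nothing to index.
AuthType Basic
AuthName "Restricted area"
AuthUserFile /path/to/.htpasswd
Require valid-user
```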
