Google is indexing my website's search results pages. Should I block this?
-
After running the SEOmoz crawl test, I have a spreadsheet of 11,000 URLs, of which 6,381 are search results pages from our website that have been indexed.
I know I've read that /search should be blocked from the engines, but I can't seem to find that information now. Does anyone have facts behind why these pages should, or shouldn't, be blocked?
-
The simple answer is that Google allocates a crawl budget to each site based on multiple factors.
With your current setup, the crawlers are wandering off after these search pages, which add no value to the web, and you're losing a lot of your budget on them. I would definitely direct the crawlers toward your actual content instead, so it gets recrawled whenever you add or update a page.
-
Jenny,
Take a look at this post in the forums on indexing issues with site search - http://www.seomoz.org/q/block-search-engines-from-urls-created-by-internal-search-engine.
Allowing site search to be indexed can result in a ton of duplicate content on your site. I recommend taking the meta noindex approach.
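For the noindex approach, the tag goes in the `<head>` of the search results template. A minimal sketch (the exact template varies by platform):

```html
<!-- In the <head> of every internal search results page -->
<meta name="robots" content="noindex, follow">
```

Using "noindex, follow" tells engines to drop the page from the index while still following its links. One caveat: if you also Disallow these URLs in robots.txt, the crawler can never fetch the page to see the noindex tag, so pick one approach.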
-
Since you've already released these out into the wild, I would analyze which search results pages are bringing in traffic and use that analysis to create new category pages on your site. I would certainly block the search parameter in Webmaster Tools and in robots.txt. Most internal search results pages have little content value, and the engines now evaluate your site as a whole: if a certain percentage of it is low quality, the whole site can be penalized.
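For reference, blocking a search path and a query parameter in robots.txt looks something like this (assuming search lives at /search with a `q` parameter; adjust to your actual URL pattern):

```
User-agent: *
Disallow: /search
Disallow: /*?q=
```

Note that the `*` wildcard isn't part of the original robots.txt standard, but Google and the other major engines honor it.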