Should search pages be disallowed in robots.txt?
-
The SEOmoz crawler picks up "search" pages on a site as having duplicate page titles, which of course they do. Does that mean I should add a "Disallow: /search" rule to my robots.txt? When I put the URLs into Google, they aren't coming up in any SERPs, so I assume everything is OK. I try to abide by the SEOmoz crawl errors as much as possible, which is why I'm asking. Any thoughts would be helpful.
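For what it's worth, if you do add a rule like that, one quick way to sanity-check it is Python's stdlib `urllib.robotparser`. This is just a sketch, assuming your search pages live under a `/search` path (the example URLs are hypothetical):

```python
from urllib.robotparser import RobotFileParser

# The robots.txt rules under discussion, as they would appear in the file
rules = [
    "User-agent: *",
    "Disallow: /search",
]

rp = RobotFileParser()
rp.parse(rules)

# Search result pages would be blocked for all crawlers...
print(rp.can_fetch("*", "https://example.com/search?q=widgets"))   # False

# ...while normal content pages remain crawlable
print(rp.can_fetch("*", "https://example.com/products/widgets"))   # True
```

Note that `Disallow` only blocks crawling, not indexing, so a page can still appear in results if it's linked externally; a `noindex` meta tag is the usual way to keep a page out of the index entirely.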
Thanks!
-
What search engine are you running, and what is the exact URL you see when you perform a search?
-
I'm using Google and entering the exact URL in the search box. No results are found. I know this means the page isn't indexed, but why does the SEOmoz crawler still pick it up?