Will the duplicate content from a website's search results affect my SEO?
-
So I've been told that search engines these days are "smart enough" to realize that a URL like "..../catalogsearch/result/?q=whatever" isn't the "main" page for its content, and I wanted to get a better understanding of this.
Looking at my SEOmoz campaigns, I can see duplicate content and duplicate page title errors, and the culprits are these search result pages. With people telling me it's okay because the search engines realize it's a search result and it won't affect the website's optimization, and SEOmoz telling me I have errors because of it, I'm left wondering which is right.
If someone could shed a little light on duplicate content created by a website's search results, I would greatly appreciate it.
Thanks!
-
Some search engines may be "smart enough" but I wouldn't bet on it.
I would disallow internal search result pages within the robots.txt.
In your example you would just add:
Disallow: /catalogsearch/result/
That will make sure those pages stay out of the SERPs and there are no duplicate content issues.
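For context, a Disallow rule only takes effect inside a User-agent group, so it would sit under a line like `User-agent: *` in the file. Here is a quick way to sanity-check the rule before deploying it, using Python's standard-library robots.txt parser (the example.com domain is just a placeholder):

```python
# Sketch: verify that the suggested Disallow rule actually blocks
# internal search-result URLs, using Python's stdlib robotparser.
from urllib.robotparser import RobotFileParser

# A minimal robots.txt with the rule from the answer above,
# inside a User-agent group applying to all crawlers.
robots_txt = """\
User-agent: *
Disallow: /catalogsearch/result/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Search-result pages are blocked for all crawlers...
print(rp.can_fetch("*", "https://example.com/catalogsearch/result/?q=whatever"))  # False
# ...while normal catalog pages remain crawlable.
print(rp.can_fetch("*", "https://example.com/some-product.html"))  # True
```

Well-behaved crawlers that respect robots.txt will then skip the search-result URLs entirely.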
-
That makes sense. I appreciate the response and direction.
-
That's absolutely the best policy. Never rely on flawed "might be able to/sort of" evaluations by search engines. And always keep your site's internal search results out of the index.
-
Great, thanks for the response.
-
I have added the appropriate lines to the robots.txt file. If there is anything else I should be aware of regarding disallowing directories via robots.txt, please let me know!
Thanks guys!
-
I would not use robots.txt to disallow those pages, as every link pointing to your search pages will then leak its link juice.
Duplicate content is not a problem unless you have so much of it that your site looks like it is mostly duplicate content. Having one page that contains small parts of other pages is not a problem.
The thing with duplicate content is that only one page will get the credit; it is not a penalty unless the vast majority of your pages are duplicates.
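If the link-juice leak is the concern, a common alternative (my suggestion, not something stated above) is a robots meta tag on the search-result template rather than a robots.txt block: crawlers can still crawl the page and follow its links, but won't index it.

```html
<!-- Hypothetical sketch: placed in the <head> of the
     /catalogsearch/result/ template. "noindex" keeps the page
     out of the index; "follow" lets crawlers pass link equity
     through the links on the page. -->
<meta name="robots" content="noindex, follow">
```

Note that for the tag to be seen at all, the page must not also be blocked in robots.txt, since a blocked page is never fetched.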