Mitigating duplicate page content on dynamic sites such as social networks and blogs.
-
Hello,
I recently did an SEOMoz crawl for a client site. As is typical, the most common errors were duplicate page titles and duplicate content. The client site is a custom social network for researchers. Most of the pages showing as duplicates are simple variations of each user's profile, such as comment sections, friends pages, and events.
So my question is: how can we limit duplicate content errors for a complex site like this? I already know about the rel=canonical tag and the rel=next tag, but I'm not sure either of these will do the job. Also, I don't want to lose potential links/link juice for good pages.
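For reference, rel=canonical is a per-page hint placed in the page's head that points engines at the preferred version of a URL. The domain and path below are only placeholders for illustration:

```html
<!-- On a variant like /profile/jsmith?tab=comments, point search engines
     at the main profile URL. Domain and paths here are hypothetical. -->
<link rel="canonical" href="https://example.com/profile/jsmith">
```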
Are there ways of applying the "noindex" tag in batches? For instance: noindex all URLs containing a certain character or pattern? Or do most CMSs allow this to be done systematically?
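There's no standard batch "noindex" directive, but if these sub-pages are all rendered from one template, the decision can be made programmatically from the URL path. A minimal sketch in Python, assuming hypothetical URL patterns like /profile/&lt;user&gt;/friends (the path names are placeholders, not your actual site structure):

```python
import re

# Hypothetical path patterns for the duplicate-prone profile sub-pages.
# Adjust these to match the site's real URL structure.
NOINDEX_PATTERNS = [
    re.compile(r"^/profile/[^/]+/friends"),
    re.compile(r"^/profile/[^/]+/comments"),
    re.compile(r"^/profile/[^/]+/events"),
]

def robots_meta_for(path: str) -> str:
    """Return the robots meta tag the page template should emit for this path."""
    if any(p.match(path) for p in NOINDEX_PATTERNS):
        # noindex keeps the page out of the index; follow still lets
        # crawlers pass link equity through the page's links.
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'
```

So `robots_meta_for("/profile/jsmith/friends")` yields the noindex tag, while the main profile page `/profile/jsmith` stays indexable.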
Anyone with experience doing SEO for a custom Social Network or Forum, please advise.
Thanks!!!
-
You can disallow these sections in your robots.txt to cut out crawling of all of them. However, they can still show as URL-only listings in search results. To completely remove them, you need to add noindex tags to the header of each page. I'm assuming these pages are created dynamically from a template, so you should be able to add the noindex there. One caveat: don't combine the two for the same URLs, because if a path is blocked in robots.txt, crawlers never fetch the page and never see its noindex tag. And be careful that you only add it to the pages you want!
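For the robots.txt route, wildcard patterns (supported by Google and Bing) can cover whole sections at once. The paths below are placeholders; replace them with the site's real structure:

```
User-agent: *
Disallow: /profile/*/friends
Disallow: /profile/*/comments
Disallow: /profile/*/events
```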