How can I deal with tag page duplicate issues
-
The Moz crawler reported some duplicate content issues, many of which have to do with tags.
Each tag has its own URL, and since some articles are filed under several tags, those articles come up as duplicate content. I read Dr. Pete's piece on canonicalization, but it's not clear to me whether any of those techniques are the solution. Perhaps the solution lies somewhere else? Maybe I need to block robots from these URLs (but that seems counter-productive for SEO).
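For reference, a canonical tag tells search engines which URL is the preferred version of duplicated content. A minimal sketch, assuming the original article lives at a hypothetical URL like yoururl.com/article/:

```html
<!-- Placed in the <head> of each duplicate page (e.g. a tag archive).
     The href is a hypothetical example URL; point it at the real
     preferred version of the content. -->
<link rel="canonical" href="http://yoururl.com/article/" />
```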
Thanks
Kovacs -
You are correct about blocking them in robots.txt. This is the best way, and while it seems counterproductive for SEO, it really isn't: you want to prevent too much duplicate content from appearing. Say you have a page:
If the content on that is added to two tags:
yoururl.com/tag1/ (content appears as an archive item under this)
yoururl.com/tag2/ (same content, same type of archive item)
This is not good for your SEO - it's better to block these tag pages (or not use tags at all; they're only valuable for on-page user experience if your users actually click them).
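A robots.txt sketch for this, assuming your tag archives share a common path prefix such as /tag/ (the WordPress default); sites with paths like /tag1/ and /tag2/ would instead need one Disallow line per tag:

```
# Hypothetical robots.txt - adjust the path to match your
# site's actual tag URL structure before using.
User-agent: *
Disallow: /tag/
```

Note that Disallow prevents crawling, not indexing, so already-indexed tag pages may linger in results for a while after you add this.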
-
Agree totally with Matt-Antonino.
This Moz post, by Dan Shure, was the one I found most helpful when trying to figure out how to index (or not index) pages and posts. http://moz.com/blog/setup-wordpress-for-seo-success
The post speaks specifically to WordPress, but the concepts transfer to any implementation.