"Issue: Duplicate Page Content" in Crawl Diagnostics - but these pages are noindex
-
I saw a question about this back in 2011 and I'm experiencing the same issue: http://moz.com/community/q/issue-duplicate-page-content-in-crawl-diagnostics-but-these-pages-are-noindex
We have pages whose meta robots tag is set to "no-everything" (noindex, nofollow) for bots, but they are still being reported as duplicate content. Any suggestions on how to exclude them from the Moz bot?
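For reference, this is a sketch of what a "no-everything" meta robots directive typically looks like (the exact tag on the pages in question isn't shown in the thread):

```html
<!-- Placed in the <head> of the page; tells compliant crawlers
     not to index the page and not to follow its links -->
<meta name="robots" content="noindex, nofollow">
```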
-
Don't forget that Rogerbot (Moz's crawler) is a robot, not an index like Google. Google uses robots to gather the data, but the results we see are an index. Rogerbot will crawl the pages regardless of noindex or nofollow.
Here is more info on Rogerbot: http://moz.com/help/pro/rogerbot-crawler
-
Thanks for the information on Rogerbot. I understand the difference between Google's bots and Moz's.
Some of the errors reported in Moz aren't real issues, though. For example, we use a responsive slider on the home page that generates its slides from dedicated pages. Those pages are tagged "no-everything" so they're invisible to bots, yet they still generate errors in the reports.
Is there any way to exclude specific pages from the reports?
-
Technically that could be done in your robots.txt file, but I wouldn't recommend it if you want Google to crawl those pages too. I'm not sure whether Rogerbot supports anything more granular. Sorry I couldn't be more helpful.
If one of the staffers doesn't answer here in the next few days, I would send them a ticket for clarification.
If you decide to go with robots.txt, here is a resource from Google on implementing and testing it: https://support.google.com/webmasters/answer/156449?hl=en
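One way around the "Google too" concern is a user-agent-specific group, since robots.txt rules can be scoped to a single crawler and Rogerbot respects robots.txt. This is only a sketch; the `/slides/` path is a hypothetical placeholder for wherever your slide pages live:

```text
# Block only Moz's crawler from the slide pages;
# Googlebot and other crawlers are unaffected by this group
User-agent: rogerbot
Disallow: /slides/
```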