Rogerbot's crawl behaviour vs. Google's spiders and other crawlers - disparate results have me confused.
-
I'm curious as to how accurately rogerbot replicates Google's search bot.
I currently have a site that is reporting over 200 pages of duplicate content/titles in the Moz tools. The pages in question all carry session IDs and were blocked in robots.txt about three weeks ago, yet the errors are still appearing.
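The block is along these lines (the sid parameter name is a placeholder; substitute whatever the site actually appends):

    User-agent: *
    Disallow: /*?sid=
    Disallow: /*&sid=

Worth noting: the * wildcard in paths is an extension supported by Googlebot and most major crawlers rather than part of the original robots.txt standard, so different parsers can treat these rules differently.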
I've also crawled the site with the Screaming Frog SEO Spider. According to Screaming Frog, the offending pages are blocked and are not being crawled. Google Webmaster Tools is also reporting no crawl errors.
Is there something I'm missing here? Why would I get such different results, and which ones should I trust? Does rogerbot ignore robots.txt? Any suggestions would be appreciated.
-
I've seen similar concerns from others; it seems rogerbot does ignore certain directives that other bots respect.
Don't worry about it. If it's not being flagged in WMT, it shouldn't be an issue.
Take Roger as a guide rather than an iron-fist bot like Googlebot.
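One thing worth checking, though: rogerbot listens for its own user-agent token, and like most compliant crawlers it only obeys the most specific group that matches it. So if your robots.txt has a dedicated rogerbot group anywhere, Roger ignores the wildcard group entirely and the session-ID block has to be repeated there (again, the parameter name is a placeholder):

    User-agent: rogerbot
    Disallow: /*?sid=

Moz's campaign crawls also run on a set schedule rather than continuously, so fixed errors can linger in the reports for a cycle or two after a robots.txt change.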
-
Thanks for your response. I was beginning to think this question had been left to rot.
I'm not getting any errors in WMT. What's concerning is that Roger is returning almost 300 duplicate-content errors, which is obviously a problem. Screaming Frog is no longer finding the pages (they've been blocked in robots.txt). I guess what I'm trying to ask is: how can I be sure the duplicate content has been effectively blocked from Google's spider?
Is there any way to check?
Thanks for your help.
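-
Two ways to check. Webmaster Tools has a robots.txt testing tool that shows how Googlebot itself interprets your file for any URL you give it; that's the authoritative answer. For a quick local sanity check, Python's standard-library urllib.robotparser is handy too. A minimal sketch, assuming placeholder rules and URLs rather than your actual site (note that urllib.robotparser only does plain prefix matching and doesn't understand the * wildcard extension, so the rule below is written as a simple prefix):

    # Sanity-check robots.txt rules locally against different user-agents.
    # The rules and URLs below are placeholders, not the real site.
    import urllib.robotparser

    RULES = """\
    User-agent: *
    Disallow: /index.php?sid=
    """

    parser = urllib.robotparser.RobotFileParser()
    parser.parse(RULES.splitlines())  # parse the rules without fetching anything

    blocked = "http://example.com/index.php?sid=abc123"
    allowed = "http://example.com/about.html"

    # can_fetch() answers: may this user-agent crawl this URL?
    for agent in ("Googlebot", "rogerbot"):
        print(agent, "->", parser.can_fetch(agent, blocked), parser.can_fetch(agent, allowed))

Expected output is False for the session-ID URL and True for the normal page, for both user-agents, since both fall under the wildcard group here.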