Crawl Diagnostics Summary Problem
-
We added a robots.txt file to our website, and it blocks some pages. But the Crawl Diagnostics Summary page shows no pages blocked by robots.txt. Why?
-
I am guessing here, but the Moz crawler may not respect your robots.txt file. Instead, if you want pages kept out of the index, try using a meta robots noindex tag for a change and see what happens.
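For example, a noindex directive goes in the head of each page you want excluded (the page itself is just an illustration, not from your site):

```
<!-- Place inside the <head> of the page you want kept out of the index -->
<meta name="robots" content="noindex">
```

Note that for a crawler to see this tag, the page must not be blocked by robots.txt, otherwise the crawler never fetches the HTML that contains it.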
-
Thanks Federico,
Can we use meta robots noindex and robots.txt together?
-
Hey there,
Thanks for the question. The way your robots.txt is set up is actually preventing all bots from crawling those pages, not just the search engines.
If you had a directive allowing RogerBot access to those pages, it would be able to crawl them and report that they are blocked from the search engines in robots.txt.
Since our crawler strictly adheres to the robots.txt file you won't have anything populated there.
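As a sketch, a robots.txt along these lines (the /private/ path is purely illustrative) would let RogerBot in while still blocking other bots, so the blocked pages could show up in your report:

```
# Allow Moz's crawler so it can see and report the blocked pages
User-agent: rogerbot
Allow: /

# Block all other bots (including the search engines) from /private/
User-agent: *
Disallow: /private/
```

More specific user-agent groups take precedence over the wildcard group, so RogerBot follows its own rules rather than the `*` block.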
I hope that makes sense. Feel free to reach out if you need more information.

Cheers,
Joel.