Robots.txt: crawler visiting URLs we don't want it to
-
Hello
We run a number of websites, and underneath them we have testing websites (subdomains) with robots.txt files disallowing everything. When I logged into Moz this morning, I could see the Moz spider had crawled our test sites even though we have told it not to.
Does anyone have any ideas how we can stop this from happening?
-
Hi there!
Thanks for reaching out to us! I am sorry if Roger is somehow not following your robots.txt directives. To ensure that Roger doesn't crawl your site, you can put the following directive above your general directives in your robots.txt:
User-agent: rogerbot
Disallow: /

Once this is in place, you should find our crawler to be a lot more obedient towards your site.
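If you want to confirm the rules parse the way you expect before deploying them, one option is to check them locally with Python's standard `urllib.robotparser`. This is just a sketch: the domain and the `/admin/` path below are placeholders, and the user-agent strings are illustrative.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: a rogerbot-specific block placed
# above the general directives, as suggested in the reply.
robots_txt = """\
User-agent: rogerbot
Disallow: /

User-agent: *
Disallow: /admin/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# rogerbot is blocked from everything; other crawlers only from /admin/
print(parser.can_fetch("rogerbot", "https://test.example.com/page"))    # False
print(parser.can_fetch("Googlebot", "https://test.example.com/page"))   # True
print(parser.can_fetch("Googlebot", "https://test.example.com/admin/")) # False
```

Note that crawlers match the most specific `User-agent` group that applies to them, so the rogerbot block takes effect for Roger while other bots fall through to the `*` group.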
Hope this helps! Please let us know if you have any more questions about our crawler.
Best,
Peter
Moz Help Team.