When rogerbot tries to crawl my site it gets a 404. Why?
-
When rogerbot tries to crawl my site it requests http://website.com. My website then tries to redirect to http://www.website.com, throws a 404, and ends up not getting crawled. It also throws a 404 when rogerbot tries to read my robots.txt file, for some reason. We allow the rogerbot user agent, so I'm unsure what's happening here. Is there something weird going on when accessing my site without the 'www' that is causing the 404? Any insight is helpful here.
Thanks,
-
The robots.txt 404 could be a temporary outage, but it's a bit hard to tell without being able to see the actual site and robots.txt. Try checking that the site is up and that you can access the robots.txt, then request a new Moz crawl...
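Those checks can be scripted so you see exactly what a crawler sees. A minimal sketch in Python, assuming rogerbot's User-Agent token is simply "rogerbot" (check Moz's docs for the exact string) and using website.com as a placeholder for the real domain; it makes a single request without following redirects, so a bad hop in the bare-domain-to-www chain shows up directly:

```python
# Probe a URL the way a crawler would: one GET, no redirect following,
# custom User-Agent. "rogerbot" and "website.com" are placeholders.
import http.client


def probe(host, path="/robots.txt", user_agent="rogerbot", port=80):
    """Issue a single GET without following redirects.

    Returns (status_code, Location header or None) so you can inspect
    each hop of a redirect chain by hand.
    """
    conn = http.client.HTTPConnection(host, port, timeout=10)
    try:
        conn.request("GET", path, headers={"User-Agent": user_agent})
        resp = conn.getresponse()
        return resp.status, resp.getheader("Location")
    finally:
        conn.close()


if __name__ == "__main__":
    # Compare what a bot sees on the bare domain vs. the www host.
    for host in ("website.com", "www.website.com"):
        try:
            status, location = probe(host)
            print(host, status, location)
        except OSError as exc:
            print(host, "request failed:", exc)
```

If the bare domain returns a 301/302 with a Location header pointing at the www host, and the www host then returns 200, the chain is healthy; a 404 at either hop (or a redirect with a malformed Location) is what the crawler is tripping over.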
I do have one client who insists on blocking everything and then allowing specific crawlers, and allowing rogerbot seems to have worked fine to date.
-
Hey Dan,
So that's the problem. Our site is up and I can manually navigate to anything, including the robots.txt file. I've done this multiple times throughout the day, and on different days as well, and have manually triggered different Moz crawls at different times, so I've ruled out an outage.