Help - we're blocking SEOmoz crawlers
-
We have a fairly stringent blacklist, and by the looks of our crawl reports we've begun unintentionally blocking the SEOmoz crawler.
Can you guys let me know the user-agent string and anything else I need so I can make sure your crawlers are whitelisted?
Cheers!
-
I have it as "rogerbot"
<code>User-agent: rogerbot
Disallow: /</code>
Access log: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
-
Thanks Gerd, though it looks like that robots.txt entry is a disallow rule, when I'm looking to let the crawler through.
I'll give this one a try: Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
-
Still way too early for me ;-). I block specific robots rather than excluding all but a few.
I have not tried the following (but think/hope it will work) - this should block all robots, but allow SeoMoz and Google:
User-agent: *
Disallow: /

User-agent: rogerbot
Disallow:

User-agent: Googlebot
Disallow:

You would already have something like this in your robots.txt (unless your block occurs on a network/firewall level).
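Rules like these can be sanity-checked with Python's standard-library robots.txt parser before deploying them. A minimal sketch, assuming the block-all-except-rogerbot-and-Googlebot layout described above (the file contents and test path are illustrative):

```python
# Sketch: verify a block-all / allow-specific-bots robots.txt with the stdlib parser.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /

User-agent: rogerbot
Disallow:

User-agent: Googlebot
Disallow:
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Empty Disallow for rogerbot means "allow everything" for that agent.
print(rp.can_fetch("rogerbot", "/some/page"))      # True
# Any other agent falls through to the catch-all Disallow: / rule.
print(rp.can_fetch("SomeOtherBot", "/some/page"))  # False
```

Note that the parser matches the `User-agent:` token as a case-insensitive substring of the requesting agent's name, which is why the bare token `rogerbot` matches the full `rogerBot/1.0` string.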
-
We maintain a crawler (and others) blacklist to control server loads, so I'm just looking for the user-agent string I can add to the whitelist. This one should do the trick:
Mozilla/5.0 (compatible; rogerBot/1.0; UrlCrawler; http://www.seomoz.org/dp/rogerbot)
-
Hi! Did this work for you, or would you like our help team to lend a hand?
-
Hi Keri,
Still testing, though I see no reason why this shouldn't work, so I'll close the QA ticket.
cheers!