Why is blocking the SEOmoz crawler considered a red "error?"
-
Where's the attached image? It's only an error because then they can't crawl and build data, but that's just a guess.
-
Hi, I can't see the attached image. Upload it to ImageShack or something similar, share the URL here, and I will try to help you.
If the SEOmoz bot finds errors while crawling, it means your site has problems in its programming; it fails "search engine friendly" optimization.
Send me the image and I will try to help you.
-
Sorry about that. I uploaded it 3 times and finally noticed the "Update" button after uploading on the 3rd attempt.

-
It seems to me that it should be a "Notice," not an "Error." I am intentionally blocking bots from a defunct directory. Keeping SEOmoz out of an old directory should not (does not?) affect SEO, you know?
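For example, the rules in question look something like this (the directory name is just an illustration):

    # Keep all bots out of a defunct directory. rogerbot (SEOmoz's
    # crawler) obeys these rules too, so the blocked URLs show up in
    # the crawl report even though the block is intentional.
    User-agent: *
    Disallow: /old-directory/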
-
So,
about 4xx errors, read this article: http://webdesign.about.com/cs/http/p/http4xx.htm
As for "SEOmoz crawler blocked by robots.txt": in that file you have added two entries, and they are blocking the search engine robots from crawling/indexing those pages in their databases.
About this error issue, please read here: http://www.google.com/support/webmasters/bin/answer.py?answer=156449
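If you want to confirm exactly which URLs a robots.txt blocks for a given bot, you can test it yourself. Here is a minimal sketch using Python's standard urllib.robotparser module (the rules and URL are hypothetical examples):

    # Minimal sketch: check whether given bots may fetch a URL under
    # the rules of a robots.txt. Standard library only.
    from urllib import robotparser

    # Hypothetical robots.txt contents.
    rules = """
    User-agent: *
    Disallow: /old-directory/
    """.splitlines()

    rp = robotparser.RobotFileParser()
    rp.parse(rules)

    # rogerbot is SEOmoz's crawler; the "*" rules apply to it too.
    for agent in ("rogerbot", "Googlebot"):
        print(agent, "allowed:",
              rp.can_fetch(agent, "http://example.com/old-directory/page.html"))

Both come back False here, which is the same thing the SEOmoz report is telling you: the block works, it is just surfaced as an "error."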
Hope this helps,
thanks
-
I think because that section is labeled "crawl errors," an area blocked from crawling would be considered an error. I can see where you're coming from, but think of it as an error encountered while attempting to crawl, not necessarily an error found in the site itself.