Does SEOmoz recognize duplicate URLs blocked in robots.txt?
-
Hi there:
Just a newbie question...
I found some duplicate URLs in the "SEOmoz Crawl Diagnostics" reports that should not be there.
They are supposed to be blocked by the site's robots.txt file.
Here is an example URL (Joomla + VirtueMart structure):
http://www.domain.com/component/users/?view=registration
and here is the corresponding rule in the robots.txt file:
User-agent: *
Disallow: /components/
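(As an aside, a quick way to verify whether a Disallow rule really covers a given URL is Python's urllib.robotparser; below is a minimal sketch using only the placeholder domain and paths from this thread. Note that Disallow matching is a simple path-prefix test.)

from urllib.robotparser import RobotFileParser

# The rule from the robots.txt excerpt above.
rules = [
    "User-agent: *",
    "Disallow: /components/",
]

parser = RobotFileParser()
parser.parse(rules)

# The reported duplicate URL.
url = "http://www.domain.com/component/users/?view=registration"

# "/component/..." (singular) is not a prefix match for
# "Disallow: /components/" (plural), so this prints True,
# meaning the crawler is allowed to fetch the URL.
print(parser.can_fetch("*", url))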
My questions are:
Will this kind of duplicate URL error be removed from the error list automatically in a future crawl?
Do I have to keep track of which errors should not really be in the error list?
What is the best way to handle these errors?
Thanks and best regards
Franky
-
Don't be too worried about SEOmoz's errors. Just be aware of them; if you have set up your robots.txt correctly for search engine robots, they should take notice and there shouldn't be any issues. Always be sure to check Google Webmaster Tools (GWT) for errors; those are the ones you should fix ASAP.
-
Hello Franky,
Yes, our crawler obeys robots.txt files. If you recently made that change to your robots.txt, it should be reflected in your next crawl. If this error doesn't go away, feel free to let us know at help@seomoz.org. Thanks for letting us know!
-Abe