Warnings on Pages excluded from Search Engines
-
I am new to this, so my question may seem a little rookie type...
When looking at my crawl diagnostic errors, there are 1,604 warnings for "302 redirects". Of those 1,604 warnings, 1,500 are for the same page with different product IDs, such as:
www.soccerstop.com/EMailproduct.aspx?productid=999
www.soccerstop.com/EMailproduct.aspx?productid=998
In our robots.txt file we have: Disallow: /emailproduct.aspx
Wouldn't that take care of this problem? If so, why am I still getting these warnings? Does the report take our robots.txt file into account when it is generated?
Thanks for any help you can provide.
James -
Try this to block pages with parameters:
Disallow: /emailproduct.aspx?*
You can also add a robots noindex, nofollow meta tag to the EMailproduct.aspx page if you are trying to block it.
You can also do parameter handling in Google Webmaster Tools > Site Configuration > URL Parameters to tell Google how to deal with the parameters.
Also, for permanent moves you should always use 301 redirects, not 302s. Good luck.
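One thing worth double-checking: robots.txt path matching is case-sensitive, and a plain Disallow rule is already a prefix match, so it covers any query string. You can sanity-check a rule locally with Python's urllib.robotparser (the domain and rules below just mirror the ones from the question):

```python
from urllib.robotparser import RobotFileParser

# Parse the same rule used in the question's robots.txt.
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /emailproduct.aspx",
])

# Prefix matching means the query string doesn't matter:
# a lowercase URL is blocked regardless of productid.
print(rp.can_fetch("*", "http://www.soccerstop.com/emailproduct.aspx?productid=999"))
# -> False (blocked)

# But matching is case-sensitive, so the mixed-case URLs from the
# crawl report are NOT covered by the lowercase rule.
print(rp.can_fetch("*", "http://www.soccerstop.com/EMailproduct.aspx?productid=999"))
# -> True (allowed)
```

If the crawler in question honors the original robots.txt conventions the same way, the mixed-case EMailproduct.aspx URLs would slip past the lowercase rule; adding a second Disallow line with the exact casing the site's links use would cover both. (Note that urllib.robotparser does not support * wildcards, which are a nonstandard extension honored by Googlebot and some other major crawlers.)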
-
Hi Nigel,
As I understand it, the warnings in the crawl report are simply intended to alert you to things on the site that you might not otherwise be aware of. Basically, it is telling you that Roger encountered a 302 (temporary) redirect when trying to access those URLs.
If your robots.txt is functioning the way you intended, then those URLs should show up in the "blocked by robots.txt" section of the Crawl Notices. If they don't appear there, that would suggest your robots.txt is not configured correctly.
If you want to know exactly how Rogerbot interacts with the files on your site when crawling, you could email the help team at help [at] seomoz.org and ask.
There is a very good post from Lindsay on Robot access in the SEOmoz Blog which is definitely worth a read.
Hope that helps,