Robots.txt
-
I have a client whose designer added a robots.txt file, and since then the number of URLs blocked by robots.txt has grown continually. The blocked count (approx. 1,700 URLs) has now surpassed the indexed count (1,000). Surely that would mean all current URLs are blocked (plus some extra mysterious ones)? However, pages are still listed in Google and the site is still generating organic search traffic, so that doesn't appear to be the case, apart from the rather alarming Webmaster Tools report.
Any ideas what's going on here?
cheers
dan
-
The URLs may be blocked from indexing, but if they were indexed at one time, there is typically a lag before they are removed from the organic results. Is this what you were asking?
-
Thanks for taking the time to comment, Kevin.
No. What I'm worried about is this: if more URLs are blocked than indexed, doesn't that mean all the site's pages are blocked (when at least 50 of them shouldn't be, since we want them crawled/indexed)? And how can more URLs be blocked than actually exist/are indexed?
cheers
Dan
-
Do you use any type of dynamic URL parameters?
-
Not as far as I'm aware.
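Worth double-checking, because parameterised variants of the same page (sort orders, session IDs, search results, etc.) are the usual way the "blocked" count ends up larger than the number of pages that actually exist. One way to verify which URLs your robots.txt actually blocks is Python's standard-library `urllib.robotparser`. A minimal sketch, using a made-up domain and hypothetical Disallow rules rather than your client's real file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical rules; against a live site you would instead do
# parser.set_url("https://example.com/robots.txt") and parser.read()
rules = """User-agent: *
Disallow: /search
Disallow: /category
""".splitlines()

parser = RobotFileParser()
parser.parse(rules)

# One real page can spawn many blocked parameterised variants,
# which is how the blocked count can exceed the indexed count.
urls = [
    "https://example.com/products",              # allowed
    "https://example.com/search?q=widgets",      # blocked by /search
    "https://example.com/category?sort=price",   # blocked by /category
    "https://example.com/category?sort=name",    # another blocked variant
]
for url in urls:
    print(url, "->", "allowed" if parser.can_fetch("Googlebot", url) else "blocked")
```

Note that the stdlib parser uses simple prefix matching and doesn't support Google's `*` wildcards, so keep the test rules to plain path prefixes; but running the site's indexable URLs through a check like this would quickly show whether any of the 50 pages you care about are caught by the rules.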