Robots.txt
-
www.mywebsite.com/details/home-to-mome-4596
www.mywebsite.com/details/home-moving-4599
www.mywebsite.com/details/1-bedroom-apartment-4601
www.mywebsite.com/details/4-bedroom-apartment-4612

We have many pages like this, and we do not want Google to crawl them.
So we added the following rule to robots.txt:
User-agent: Googlebot
Disallow: /details/
Is this rule correct?
-
Looks good, but do you only want to block Google from crawling those pages? If you want to block other search engines as well, you could use:
User-agent: *
instead of
User-agent: Googlebot
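If you want to sanity-check the rule before deploying it, a quick sketch using Python's standard-library robots.txt parser (the site URL is just the example from the question):

```python
import urllib.robotparser

# Parse the proposed rules directly, without fetching a live robots.txt
rules = [
    "User-agent: *",
    "Disallow: /details/",
]
rp = urllib.robotparser.RobotFileParser()
rp.parse(rules)

# /details/ pages are blocked for any crawler, the rest of the site is not
print(rp.can_fetch("Googlebot", "https://www.mywebsite.com/details/1-bedroom-apartment-4601"))  # False
print(rp.can_fetch("Googlebot", "https://www.mywebsite.com/"))  # True
```

Swapping `User-agent: *` for `User-agent: Googlebot` and re-running the same checks shows whether the rule behaves the way you expect for a specific bot.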
-
Looks correct. Great recommendation by Chris. The * (wildcard) means the rule applies to all crawlers.
-
Also note that this disallows everything under the /details/ folder, so if there were exceptions to the rule (pages or subfolders in that folder that you do want crawled), you would need to add Allow directives, or write more specific Disallow rules instead.
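For example, if you wanted to keep a hypothetical /details/featured/ subfolder crawlable while blocking the rest, it could look like this (the subfolder name is just an illustration, not from the question):

User-agent: *
Disallow: /details/
Allow: /details/featured/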