Needs clarification: How does "Disallow: /" work?
-
Hi all,
I need clarification on this. I noticed we have a "Disallow: /" in one of our sub-directories, in addition to the one at the homepage. How is it going to work now? Will this "Disallow: /" at the sub-directory level disallow only that directory, or the entire website?
If it applies to the entire website, we already have another robots.txt at the homepage level blocking a few folders. How will crawlers handle two "Disallow: /" directives?
Thanks
-
The directive that is literally "Disallow: /" will prevent crawling of all pages on your site, since technically, all page paths begin with a slash. Robots.txt files are only honored at the root of a site, not in subdirectories, so if you want to disallow a single folder, you'll need to specify that in the root robots.txt with a directive like "Disallow: /folder-name/".
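To illustrate the difference, here is a sketch of a root-level robots.txt ("/folder-name/" is a placeholder for whatever directory you actually want to block):

```
# Applies to all crawlers.
User-agent: *

# Blocks only URLs under /folder-name/, e.g. /folder-name/page.html.
Disallow: /folder-name/

# By contrast, "Disallow: /" (a bare slash) would block the entire site,
# because every URL path starts with "/".
```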
-
If you have concerns, I strongly recommend using Google Search Console to test URL use cases against your existing robots.txt file before you make any edits.
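You can also test URL use cases locally with Python's standard-library robots.txt parser. This is a minimal sketch; the example.com domain, the rules, and the paths are placeholders, so substitute your own site's robots.txt content:

```python
from urllib.robotparser import RobotFileParser

# Placeholder robots.txt content; replace with your site's actual rules.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A URL under the disallowed folder is blocked for all crawlers ("*").
print(parser.can_fetch("*", "https://example.com/private/page.html"))  # False

# Any other path remains crawlable.
print(parser.can_fetch("*", "https://example.com/about.html"))  # True
```

Swapping the rule to a bare `Disallow: /` would make `can_fetch` return False for every path, which matches the behavior described above.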
-
Hi vtmoz,
You've received some great responses! Did any of them help answer your question? If so, please mark one or more as a "good answer." And if not, please let us know how we can help. Thanks!
Christy