Robots.txt question
-
What is this robots.txt telling the search engines?
User-agent: *
Disallow: /stats/
-
Assuming that this is the entire contents of this file: It says that no robot (search engine spider, other crawler, etc.) should visit or index anything in the /stats/ directory or any directories inside of it.
More info available here: http://www.robotstxt.org/robotstxt.html
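If you want to double-check this yourself, here's a minimal sketch using Python's standard-library robots.txt parser to confirm what the rule blocks. The `example.com` domain and the paths are placeholders, not the actual site from this thread:

```python
# Sketch: verify the effect of "User-agent: * / Disallow: /stats/"
# using Python's built-in robots.txt parser.
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Parse the rules directly instead of fetching a live robots.txt.
rp.parse([
    "User-agent: *",
    "Disallow: /stats/",
])

# Anything under /stats/ is blocked for every crawler;
# everything else on the site is still allowed.
print(rp.can_fetch("*", "https://example.com/stats/report.html"))  # False
print(rp.can_fetch("*", "https://example.com/index.html"))         # True
```

Well-behaved crawlers (Googlebot, Bingbot, etc.) apply the same logic, so only the /stats/ directory and its subdirectories drop out of crawling.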
-
Thanks- wanted to make sure all was copacetic there. I'm assuming that it's good practice to disallow access to stats and won't impact the site negatively?
-
Hey Mark,
It's good practice to disallow access to any folder or content you don't want indexed, as well as anything security-sensitive (logins, databases, etc.).
It also keeps the domain's most important pages in front of the search spiders' eyes while keeping poor content out of the index. At a site-authority level, this helps the domain provide valuable content and information to users.
Lower-ranking pages can drag the whole domain down in the search engines (Google and Bing have both attested to this), since they want businesses to focus on high-value content, which leads to a better user experience.
Cheers!
-
Oh - and it won't affect the domain negatively when you clean up your site directories via robots.txt. It's actually better, as I explained below.
