Can duplicate content issues be solved with a noindex robots meta tag?
-
Hi all
I have a number of duplicate content issues arising from a recent crawl diagnostics report.
Would using a robots meta tag on the pages I don't mind being left out of the index be an effective way to solve the problem?
Thanks for any / all replies -
Using a noindex meta tag is one way to resolve duplicate content issues. If you take this approach, you most likely want to use only the "noindex" directive and not "nofollow": you don't want to prevent Google from following the links on the page, only to stop the content from being treated as a duplicate in the index.
If you wish to include "follow" explicitly, you can, but it is unnecessary since it is the default setting.
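For reference, a tag of the kind described above goes in the page's `<head>`. This is a minimal sketch; the surrounding markup is just an illustration:

```html
<head>
  <!-- Keep this page out of the index, but still let crawlers
       follow its links. "follow" is the default, so
       content="noindex" alone would behave the same way. -->
  <meta name="robots" content="noindex, follow">
  <title>Near-duplicate page we don't need indexed</title>
</head>
```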
-
Yes it would, but I would rather use the canonical tag. All pages have PageRank, and even weak pages help your site rank better. Google published its original PageRank formula once and has changed it many times since, but from testing we know that the main idea still holds true: pages not in the index cannot add to your site's PageRank.
Take a look at this page, which explains it very well: http://www.webworkshop.net/pagerank.html. Use the calculator; it is very intuitive. -
This is an old question... and the answer is yes. In fact, a page blocked in robots.txt can still end up indexed if that same page is linked from an external site. Check this old Webmaster Help thread: http://www.google.com/support/forum/p/Webmasters/thread?tid=3747447eb512f886&hl=en That is why it is always better to use the meta robots noindex when you really want to be sure a page is not indexed.
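To make that distinction concrete, here is a hypothetical robots.txt rule (the `/print/` path is made up for illustration). It only blocks crawling, not indexing:

```
# robots.txt: blocks crawling of /print/, but Google can still index
# a URL like /print/page.html if external sites link to it
# (typically shown as a URL-only result with no snippet).
User-agent: *
Disallow: /print/
```

The safer pattern is therefore to leave the page crawlable and put `<meta name="robots" content="noindex">` on the page itself; if the URL is also blocked in robots.txt, Google never fetches the page and so never sees the noindex tag.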
-
Thanx!