Duplicate Content with ADN, DNS and F5 URLs
-
In my duplicate content report, there are URLs showing as duplicate content.
All of the pages work and do not redirect; they are used for IT debugging, QA of the site, or as part of a legacy system using split DNS, etc...
They aren't linked (or at least, shouldn't be) on any pages, and I'm not seeing them in search results, but Moz is picking them up. Should I be worried about duplicate content here, and how should I handle them? They are replicas of the current live site, but on different subdomains.
We are doing cleanup before migrating to a new CMS, so I'm not sure it's worth fixing at this point, or whether it's even an issue at all. But should I make sure they are in robots.txt, or take any other action to address these?
Thanks!
-
Hi! I have a couple of ideas, and sent you a quick email to the account on your Moz profile.
You may also find it helpful to do a google search for:
site:ourdomain.com -inurl:www
This will show you all the non-www subdomains that Google has indexed, in case any others have slipped in that you don't want indexed.
-
Thanks Keri, I received your note!
-
A couple more thoughts here, based on your revised question.
You'll want to figure out how those links to the rogue subdomains have been generated, so you don't simply carry them over to the new CMS (for example, if they're in body text that gets copied wholesale without being examined).
If those old subdomains are not needed at all anymore, I'd get them removed entirely if you can, or at the very least blocked in robots.txt. You can verify each subdomain as its own site in Google Webmaster Tools, then request removal of those subdomains if the content is gone or if it's excluded in robots.txt.
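For reference, blocking an entire dev or legacy subdomain takes just a two-line robots.txt served at that subdomain's root (this assumes the subdomain serves its own robots.txt rather than sharing the live site's file):

```
User-agent: *
Disallow: /
```

One caveat: robots.txt blocks crawling, not indexing, so URLs that are already indexed can linger until you request removal in Webmaster Tools, which is why the removal step above still matters.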
You might suggest to the dev team that they password-protect sites like this, block them in robots.txt, etc., so they don't get accidentally crawled in the future.
If you have known dev subdomains that are needed, and you as the SEO know about them and make sure they have a robots.txt on them, you might want to use a code monitoring service like https://www.polepositionweb.com/roi/codemonitor/ to watch the contents of each robots.txt file. It will let you know if the file has been changed or removed (a good idea for the main site, too).

I've seen dev sites copied over to live sites along with their robots.txt, so that everything on the new live site ended up blocked. I've also seen dev sites get a data refresh from the live site, so the live site's robots.txt ended up on the dev site and the dev site got indexed.
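If you'd rather not rely on a third-party service, the monitoring idea above is simple to sketch yourself: fetch each robots.txt on a schedule and compare a hash of the body against the one you stored last time. Here's a minimal sketch (the URL list and function names are mine, not from any real service):

```python
import hashlib
import urllib.request

# Hypothetical list -- substitute the subdomains you actually watch.
WATCHED = ["https://www.example.com/robots.txt"]


def fingerprint(content):
    """Return a stable hash of a robots.txt body (bytes)."""
    return hashlib.sha256(content).hexdigest()


def detect_change(previous, content):
    """Compare the stored fingerprint to the current body.

    Returns (changed, new_fingerprint). A previous value of None
    means this is the first check, which is not reported as a change.
    """
    current = fingerprint(content)
    if previous is None:
        return False, current
    return previous != current, current


def check(url, previous):
    """Fetch robots.txt and report whether it changed since last run."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read()
    return detect_change(previous, body)
```

Run `check()` from a daily cron job, persist the fingerprints somewhere, and alert (email, Slack, whatever) when `changed` comes back true. A fetch that errors out is also worth alerting on, since a deleted robots.txt is exactly the failure mode described above.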