Does anyone have experience with SEO and .NET using 301 redirects?
-
A while ago I altered some of the URLs on my website. Google now thinks that I have duplicate pages (duplicate content), so I have asked my third-party web developers (who use .NET and a custom-built CMS) to simply 301 redirect the old URL to the new one.
However, my web developers say the following:
"Solving the problems by 301 permanent re directs are out of the question as this would create infinite loops. Likely to bring down our server."
They also won't add a canonical tag, as they say there is only one page (but two URLs).
Firstly, has anyone heard of this before, and do you think it is true?
Also, does anyone have an alternative method of getting rid of the old URL?
Any thoughts would be much appreciated.
-
Hi Thomas
At the risk of being blunt, they're having you on.
Having worked with .NET and a lot of CMSs, I know the infinite loop problem does exist, but it is not a "stock" problem with .NET or any CMS I've encountered. If the 301s are returning a loop, it's likely the dev team's implementation of the CMS, not the .NET framework, that's to blame. From this point of view (which isn't the whole story, of course), it's their job to solve it.
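For what it's worth, a redirect only loops if the rule also matches the destination URL, which is straightforward to avoid. Here's a sketch of a loop-safe rule, assuming the site runs on IIS with the URL Rewrite module (the paths /old-page and /new-page are placeholders for your actual URLs):

```xml
<!-- web.config fragment. Hypothetical example assuming IIS with the
     URL Rewrite module; /old-page and /new-page are placeholder paths. -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Redirect old URL" stopProcessing="true">
        <!-- Match ONLY the old path, so the rule never fires on
             /new-page and therefore cannot loop. -->
        <match url="^old-page$" />
        <action type="Redirect" url="/new-page" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

Because the match pattern can't ever equal the redirect target, the server issues exactly one 301 and stops.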
In addition, adding a canonical tag would help. If only one page exists, a canonical tag on that page is a strong directive to Google to index only that version of the URL. Eventually, Google would stop indexing the other URL that's being flagged for duplicate content, so this would solve the issue.
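In practice, the tag goes in the page's head section and points at whichever URL you want indexed (the domain here is a placeholder):

```html
<!-- Placeholder URL; point this at the version you want Google to index. -->
<link rel="canonical" href="http://www.domain.com/example" />
```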
That's presuming that the other URL creating the duplicate content is a variant of the page, like a query string (www.domain.com/example and www.domain.com/example?query). If it is a completely different URL (www.domain.com/anotherexample) showing the same content without redirecting, then there would be two pages in existence. Again, either a redirect or a canonical will help, although you could also get the devs to add a noindex, nofollow robots meta tag to the page.
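That robots meta tag looks like this in the head of the page you want dropped from the index:

```html
<!-- Tells crawlers not to index this page and not to follow its links. -->
<meta name="robots" content="noindex, nofollow" />
```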
If all else fails, you could use your robots.txt file to block any crawler from reaching the URL.
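A minimal robots.txt entry for that, assuming the old URL lives at /old-page (a placeholder path):

```text
# Placeholder path; blocks all crawlers from the old URL.
User-agent: *
Disallow: /old-page
```

One caveat: robots.txt blocks crawling rather than indexing, so an already-indexed URL can linger in search results for a while even after you block it.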
Here's a handy guide that helps explain all these options.
Hope this helps.