Duplicate Content Issues
-
We have a "?src=" parameter on some URLs, and these are flagged as duplicate content in the crawl diagnostics errors.
For example, xyz.com?src=abc and xyz.com?src=def are treated as duplicate content URLs. My objective is to make my campaign free of these crawl errors.
First of all, I would like to know why these URLs are considered duplicate content, and what's the best way to get rid of the errors?
-
Do the two URLs contain the same content? Google (and other crawlers) will often treat URLs with different query parameters (anything after the question mark) as separate URLs. If that's the case, make sure there is a canonical tag in the header to let Google know what the proper URL for the content is.
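For example, assuming xyz.com is the preferred version of the page (substitute your real canonical URL), the tag in the <head> of every parameterized variant would look like this:

```html
<!-- Placed in the <head> of xyz.com?src=abc, xyz.com?src=def, etc. -->
<!-- Tells crawlers that all these variants are the same page -->
<link rel="canonical" href="https://xyz.com/" />
```

With this in place, crawlers consolidate the ?src= variants under the canonical URL instead of reporting them as duplicates.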
-
I'm guessing you are using the query string as a tracking parameter or for site-wide search, so the original page should have the rel=canonical link added. This will tell Google which page has the original content and will help your website avoid a duplicate-content penalty.