Omniture SiteCatalyst campaign tracking URLs and SEO
-
Hi All, I need some help on this. I have a new client who uses Omniture SiteCatalyst, and they want to start tracking every campaign they run, from Facebook posts to blog posts.
The client wants to track everything they do in order to work out the value of each channel, which is great, but I need help understanding the effect this could have on my SEO campaign.
The example tracking-code links are as follows:
http://www.coolflower.com?id=blogcomment (from a blog post)
http://www.coolflower.com?id2=facebook (from a Facebook post)
The issue is: will this affect my SEO rankings, since the site would be replicating content across these URLs? I was thinking of using a canonical link for these, but I'm not 100% sure. I'm also not 100% sure how SiteCatalyst works, so I'm open to as much advice as possible. I found this article
http://blogs.adobe.com/digitalmarketing/analytics/campaign-tracking-inside-omniture-sitecatalyst/
which explains the process in more detail.
-
Hi Nicholas
I'm facing the exact same problem and decided to resolve it with rel=canonical. When tracking codes in URLs get picked up by Googlebot, they lead to duplicate content.
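As a sketch, the canonical tag would go in the `<head>` of each page, so that every tracked variant points back at the clean URL (using the coolflower.com example from the question):

```html
<!-- Served on http://www.coolflower.com?id=blogcomment,
     http://www.coolflower.com?id2=facebook, and the clean URL alike:
     all variants declare the parameter-free URL as canonical -->
<link rel="canonical" href="http://www.coolflower.com/" />
```

Google then consolidates the ranking signals from the tagged URLs onto the canonical one, while your analytics still sees the tracking parameters on the actual request.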
Alternatively, you could disallow these tracking parameters in robots.txt (e.g. Disallow: /*trackingID=), but rel=canonical is the better solution IMO.
Also, if you do choose to block via robots.txt, use a more distinctive tracking parameter than just "id=" to avoid involuntarily blocking URLs that happen to contain those characters outside a tracking ID. Hope this helps!
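For example, a robots.txt rule with a hypothetical, more distinctive parameter name (omtrack= here is just an illustration, not a SiteCatalyst default):

```
User-agent: *
# Block crawling of any URL containing the distinctive tracking parameter.
# A generic "Disallow: /*id=" would also catch unrelated URLs.
Disallow: /*omtrack=
```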
/Phil -
Thanks so much, Phil, I agree with you. I did a bit of digging and got some really good feedback from these sources.
The exact issue from Moz:
http://moz.com/community/q/omniture-tracking-code-urls-creating-duplicate-content
http://www.dotcult.com/omniture-tracking-codes-and-seo
I have decided to follow these three routes:
1. Use canonicals on unique URLs that are being tagged.
2. Tell Google Webmaster Tools to exclude /article?sessionidfacebook so it won't register the pages as different.
3. Follow this up with a robots.txt exclude, just to be safe.
I don't know if this is overkill; also, if I use these steps, will this have an effect on Analytics? -
Cool, Nicholas, thanks for the resources!
Yeah, I think your three routes might be a bit of overkill, though I don't think they'd hurt you. However, I would not block them in robots.txt, since that only prevents crawling, not indexing, of these URLs.
So my advice is to go with step 1 & 2.