Best posts made by john4math
-
RE: Google Remarketing Tag
It's an account-level tag that works across any webpage on any domain it's included on. I've used remarketing pixels on microsites we run on separate domains, and within e-mails we send, to target those users with ads from our main site.
-
RE: Why googlebot indexing one page, not the other?
People may have been tweeting about only some of the pages, or linking to only some of them. Or maybe the bot just randomly decided to index some of them and not others?
-
RE: 301 Redirect With A Message And Delay
I believe that is commonly done with the meta refresh tag. From http://www.seomoz.org/learn-seo/redirection:
Meta Refresh
Meta refreshes are a type of redirect that is executed on the page level rather than the server level (they are usually slower and not a recommended SEO technique). They are most commonly associated with a five-second countdown and the text "If you are not redirected in 5 seconds, click here". Meta refreshes do pass some link juice, but are not recommended as an SEO tactic due to usability issues and the loss of link juice passed.
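As a sketch, a meta refresh looks like this (the destination URL and the 5-second delay are placeholders); the tag goes in the page's <head>, with a fallback link in the body:

```html
<!-- Redirect to the new URL after a 5-second delay -->
<meta http-equiv="refresh" content="5; url=http://www.example.com/new-page">

<!-- Visible fallback so users can click through if the redirect doesn't fire -->
<p>If you are not redirected in 5 seconds,
   <a href="http://www.example.com/new-page">click here</a>.</p>
```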
-
RE: .us VS .com
I agree with SEOConsult all the way. It's all correlation. In my experience, .us domains tend to be worse (bad content, more ads, less well put together, bad user experience, etc) than .com domains, so I would expect those sites to do worse in the SERPs. It's not the .us domain that's making the sites worse in general. If you put together a terrific .us site, it would do fine.
The main issue I have with .us vs .com and .org.uk vs .co.uk is that people have to remember to enter those extensions. If you have the yourbrand.us domain, a lot of people are going to put yourbrand.com in when trying to get to your site. They may give up there.
Also, I am more likely to trust a .com domain over a .us domain, and I'm more likely to click a .com in the SERPs over a .us domain. Do you have many .us domains you ever visit? I can't think of any off the top of my head. Even del.icio.us migrated to delicious.com.
-
RE: How do you assess PPC ROI?
To be fair, in your example the Adwords click did (on average) have some effect on the ultimate sale. If the customer had never come through the door looking for red widgets, they might never have purchased the blue widgets from you. There are a few different reports in Analytics that can help you see this:
- Conversions > Multi-channel funnels > Attribution Modeling Tool (I think this is available to everyone now). If last-click interaction is by far the most important for you, you can select this in the tool and select the primary dimension of Source/Medium. Drill into Google / cpc and you should see the data you're looking for there.
- Conversions > Multi-channel funnels > Top conversion paths. Select the primary dimension Source/Medium Path. This will show you the different parts that Adwords played in each conversion. You can also pick a secondary dimension of Adwords Campaign Path to see more info.
-
RE: Display: none
Using display:none is fine. There's JavaScript on the page that will make those items display when the user takes some action on the menu. Google has some way of reading the JavaScript to see what may be displayed, and at this point, I'm sure they're very good at detecting these types of menus. I don't think you can make a menu like that without using it, and tons of sites use them.
Obviously, you shouldn't use display:none to cloak text on the page, when no user action can get that text to be displayed.
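A minimal sketch of the kind of menu described, with illustrative IDs and URLs (a real menu would also handle closing, hover states, etc.):

```html
<ul>
  <li>
    <!-- User action reveals the submenu via JavaScript -->
    <a href="#" onclick="document.getElementById('submenu').style.display = 'block'; return false;">
      Products
    </a>
    <!-- Hidden until the user acts, but the links are still in the HTML for crawlers to read -->
    <ul id="submenu" style="display:none;">
      <li><a href="/widgets">Widgets</a></li>
      <li><a href="/gadgets">Gadgets</a></li>
    </ul>
  </li>
</ul>
```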
-
RE: Branching out from .com... good idea?
Of course you can. Any domain suffix can rank (unless excluded from Google altogether). Overall, .com's tend to rank the best. Take a look at http://www.seomoz.org/article/search-ranking-factors#metrics.
If you can get an exact keyword match for a suffix like .ly, it might be worthwhile over getting a .com domain that doesn't match as well.
-
RE: Status of Ajax and SEO? Changing navigation from plain HTML to AJAX.
On one site I work on, all of the content is loaded with AJAX. I've seen evidence that Google is indexing this text, because if I search for this AJAX content, Google returns the pages on my site that it appears on. There aren't many backlinks to these pages, and none that I've found with those phrases as anchor text. So if Google is getting this text from somewhere else, I can't figure out where.
I'd still rather have our content loaded normally, but I wasn't given a choice, so I'm glad to see it's working. I'd think long and hard before setting up HTML snapshots as given here: https://developers.google.com/webmasters/ajax-crawling/, as doing so would be a considerable amount of work, especially considering how far Google has already come with indexing AJAX content.
-
RE: Best method to measure conversions (Adwords)
If you're tracking by using different phone numbers, you could set a cookie on these landing pages with which phone number to use, and then have the rest of your pages display that phone number instead of your regular phone number. That way users will only ever be exposed to one number, and you'll have tracking for those people who came to your site from a landing page.
All of this wouldn't be too difficult to implement. You could do it with jQuery and a jQuery cookie plugin.
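A sketch of the cookie logic in plain JavaScript (the cookie name, number format, and expiry are assumptions, not a definitive implementation). The helpers just build and parse the cookie string; on a real page you'd assign the result to document.cookie on the landing page, and on every other page read document.cookie and swap the number into the DOM:

```javascript
// Build the cookie string that records which tracking number this visitor saw.
function buildTrackingCookie(phoneNumber, maxAgeDays) {
  const maxAge = maxAgeDays * 24 * 60 * 60; // lifetime in seconds
  return 'trackingPhone=' + encodeURIComponent(phoneNumber) +
         '; max-age=' + maxAge + '; path=/';
}

// Read the tracking number back out of a cookie string.
// Returns null if no tracking cookie is set (i.e. show the regular number).
function getTrackingPhone(cookieString) {
  const match = cookieString.match(/(?:^|;\s*)trackingPhone=([^;]*)/);
  return match ? decodeURIComponent(match[1]) : null;
}
```

On a landing page: `document.cookie = buildTrackingCookie('555-0123', 30);` and elsewhere: `getTrackingPhone(document.cookie)` to decide which number to display.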
-
RE: Google indexing directory folder listing page
If you just want the directory page removed from SERPs, but not the images, you can put a meta noindex tag in the <head> of the page (see here). It'll look like <meta name="robots" content="noindex">. Provided you link to the images from other pages, you should be in business.
I don't think it would hurt your rankings at all. I would exclude it, as you don't want people wandering into your site on a directory page.
-
RE: Does a mobile site count as duplicate content?
Matt Cutts talks about it here. And here's some info from the Google Webmaster Blog. What he's recommending is to serve your mobile pages to Googlebot Mobile, and your regular pages to Googlebot.
And here's a relevant Q&A question with some good answers.
-
RE: Duplicated Content with joomla multi language website
The proper way to handle this is with rel=alternate hreflang tags. This will tell Google the content is the same, but in different languages. See http://support.google.com/webmasters/bin/answer.py?hl=en&answer=189077 for more info. You can place meta tags on each page, or do it in your sitemap.
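As a sketch, the English and Thai versions of a page would each carry link tags like these in their <head> (the URLs and paths are placeholders):

```html
<!-- On both the English and Thai versions of the page -->
<link rel="alternate" hreflang="en" href="http://www.example.com/en/page.html" />
<link rel="alternate" hreflang="th" href="http://www.example.com/th/page.html" />
```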
Other things you can do to help search engines get it right are to set up a profile in Google Webmaster Tools for each of the directories (or at least for the Thai one) and set the geotargeting. For Bing, they prefer you set the country and language on each page (see here).
If you block the pages with robots.txt or use canonical tags, you're telling Google not to include those pages in SERPs. It sounds like you want the Thai pages to appear in Thai results, and the English pages in English SERPs, so I wouldn't do that.
-
RE: Adspert and bid management tools.
It does give a lot of power to Google and their algorithms, but at the same time, if it works, it works! It's hard to argue with more conversion volume at a lower CPA. Historically, it's worked very well for me.
-
RE: Block a sub-domain from being indexed
Each subdomain may have its own robots.txt file. So in the robots.txt for that subdomain, you can put:
User-agent: *
Disallow: /
and that should do it.
Please note that disallowing pages in robots.txt will not necessarily mean they won't appear on search result pages.... if people link to pages that are disallowed on that subdomain, they can still appear in SERPs. I had this happen with a few pages, which leads to funny listings in the SERPs because Google has to guess what the page title and description of the page should be, since it's not allowed to read the page. The meta noindex tag is the way to go if you want to be really sure the page doesn't appear in the SERPs. If you use that, don't disallow the page. Here's a recent SEOMoz post about it: http://www.seomoz.org/blog/robot-access-indexation-restriction-techniques-avoiding-conflicts
-
RE: Block an entire subdomain with robots.txt?
Placing canonical tags isn't an option? Detect that the page is being viewed through the subdomain, and if so, write the canonical tag on the page back to the root domain?
Or, just place a canonical tag on every page pointing back to the root domain (so the subdomain and root domain pages would both have them). Apparently, it's ok to have a canonical tag on a page pointing to itself. I haven't tried this, but if Matt Cutts says it's ok...
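A sketch of the detection logic (the domain names are placeholders, and a real implementation would run server-side or write the tag before the page renders). Mirror subdomains get a canonical pointing back to the root domain; any other host gets a self-referencing canonical, which, per the above, should be OK:

```javascript
// Return the canonical link tag for a page, given the hostname it was
// requested on and its path. Assumes mirror subdomains serve exactly
// the same paths as the root domain.
function canonicalTag(hostname, path) {
  const rootDomain = 'www.example.com';                  // placeholder root domain
  const mirrors = ['m.example.com', 'shop.example.com']; // placeholder mirrors
  const host = mirrors.includes(hostname) ? rootDomain : hostname;
  return '<link rel="canonical" href="http://' + host + path + '" />';
}
```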
-
RE: Duplicated Content with joomla multi language website
The Google Webmaster Tools setup sounds right to me!
You should set the rel alternate on all pages that go back and forth, not just the English pages. That way if Google wants to return a Thai page to an English searcher, it'll know to reference the English page. This is the set up Google recommends in their help documentation.
Don't worry about a new sitemap for the /th/ pages. Your current set up should be fine.
-
RE: Top Ad in Google Adwords
This is a great point. You could even implement this simultaneously by adjusting the bid with an Adwords Experiment and not risk any weirdness in the results by running one after the other.
-
RE: If you only want your home page to rank, can you use rel="canonical" on all your other pages?
rel canonical is for letting search engines know that a page on your site duplicates the content of another page. However, I doubt all the pages on your site are duplicates of your home page, so the most likely outcome is that the search engines would ignore your rel canonical tags and just index the pages as normal if they're not similar enough.
This is not really the intent of the tag, and to me it sounds like it falls close to the black-hat side of things. If you really want the link value from those pages, and don't care about keeping them around, you can 301 redirect them to your home page like Joshua suggested.
Here's the SEOMoz post from about a year ago about rel canonical. It also links to some good resources: http://www.seomoz.org/blog/complete-guide-to-rel-canonical-how-to-and-why-not
-
RE: Removing Duplicate Page Content
This is why the canonical tag was invented, to solve duplicate content issues when URL parameters are involved. Set a canonical tag on all these pages to point towards the version of the page you want to appear in search results. As long as the pages are identical, or close to it, the search engines (most likely) will respect the canonical tag, and pass along the duplicate versions link juice to the page you're pointing to.
Here's some info: http://googlewebmastercentral.blogspot.com/2009/02/specify-your-canonical.html. If you Google "canonical tag", you'll find lots more!
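As a sketch, each parameterized duplicate would carry one line in its <head> pointing at the version you want indexed (the URL is a placeholder):

```html
<!-- On every parameterized variant of the page, e.g. /widgets?sort=price -->
<link rel="canonical" href="http://www.example.com/widgets" />
```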
-
RE: Should "contact" and "Privacy Policy" pages be no-followed?
I'd just leave those links be. Google knows what contact and privacy policy pages look like. Nofollows don't conserve more link juice for the other pages that page links to; they just prevent it from passing to the pages whose links you nofollow.