Yeah... beyond what I've already said, I don't really know what to do. A sort of similar question is here, and Thomas makes some good points. Adding some of the new schema.org markup could help (they have it for products, reviews, blogs, and more).
Posts made by john4math
-
RE: Legit Domain Masking
You can have a page canonical to itself, so if you just add the canonical tags to all the pages, they'll appear on all of masked domains as well.
Thomas is right; if your goal is to get any of the vanity domains onto search result pages, the canonical tags will prevent that. A better setup might be to have the home pages be different (with the rep's name and information), and then have the rest of the pages on the site be duplicate content with canonical tags. To say the sites don't exist isn't really accurate... it just means you haven't set them up yet.

-
RE: Paid search impact on Organic Search traffic - Is it in the positive side or negative side?
People will see your ads, and then go to your site, or search for it to find more information. In AdWords, for display campaigns, Google will show you "view-through conversions", which occur when someone sees your ad and didn't click it, but went to your site and converted. This happens (at least for us) more often than I would have expected. For some of our campaigns, almost 1/3 of our conversions are view-through conversions. Google doesn't track this for search ads, but yes, just having your name out there on ads will get people to look at your site, or remember it and search for it later.
It's not getting a boost from the search engine just because you buy their ads... it's because people will remember seeing your ads and then go and search for your site.
-
RE: Legit Domain Masking
That's fine, just rel=canonical all the pages under each salesrepresentative.com to the corresponding company.com pages. That tells Google that it's duplicate content, and will consolidate all the rankings for those pages under the company.com domain.
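In case it helps, here's roughly what that tag would look like in the HEAD of each vanity-domain page (the domain names below are just placeholders; substitute your real domains):

```html
<!-- On a page like http://salesrepresentative.com/products.html -->
<!-- The href points at the matching page on the main company.com domain, -->
<!-- so Google consolidates the rankings there. -->
<link rel="canonical" href="http://company.com/products.html" />
```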
-
RE: How do I index these parameter generated pages?
It sounds to me like you need a sitemap. If you want to create one dynamically, here's a thread with some info.
You should also set it up in Google Webmaster Tools and Bing Webmaster Tools, and explicitly set them not to ignore those product parameters in the URL.
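For reference, a minimal sitemap is just an XML file listing the URLs you want crawled (the URL below is a made-up example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page; lastmod is optional but helpful -->
  <url>
    <loc>http://www.example.com/product?id=123</loc>
    <lastmod>2011-06-01</lastmod>
  </url>
</urlset>
```

Once it's generated, you can submit its URL in both webmaster tools consoles.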
-
RE: Panda Update: Need your expertise...
I looked a bit more. If I had to guess why you're being penalized, it's because your site doesn't have much in the way of original content. For example, go to a product page, like this one for a Brother ribbon cartridge. The description reads "Brother Black Ribbon Cartridge. This brand new Brother 1230 black ribbon cartridge is manufactured by Original Equipment Manufacturer (OEM). 100% Manufacturer Guaranteed." If you stick that in Google, you'll see that you, along with every other printer site (printert.com, 123cheapink.com, inkcircus.com, etc.), use basically the same description.
You need to provide original content on your site that doesn't appear as duplicate content to Google. The blog is a good start. Having customer reviews, testimonials, and articles custom to your site are also good.
You might try taking a look at your competitors that are doing well in searches, and see what they're doing in terms of original content on their sites.
-
RE: Panda Update: Need your expertise...
Usually, what's good for the user is good for Google. I'm a little confused by how I'm supposed to navigate your site. There are 3 ways on the page to navigate: on the left, top, and right. There seems to be a lot of redundancy between these; you could probably consolidate them into one form of navigation.
Also, the links to categories within the expanded top navigation are wrapped in heading tags. For example, if you expand Ink & Toner, then Brother, Canon, Dell, etc. are all inside heading tags. This results in a lot of heading tags on your page. I'd remove those tags since those elements aren't headers. It looks spammy to me.
-
RE: Why would my keywords never ranking in Bing but have great position in both Google and Yahoo?
Have you installed Bing Webmaster Tools on your site and looked in there? There you can see how many pages Bing has crawled and indexed, and submit your sitemap. You can also submit individual URLs for Bing to index (10/day, up to 50/mo) which should help Bing find your site if it hasn't already.
-
RE: No index, follow vs. canonical url
Here's a similar Q&A post: http://www.seomoz.org/q/canonical-pagination-content. The answer there suggests adding a "view all" page, and then setting rel=canonical on all of your paginated search result pages pointing to the view-all page. They also suggest this on the Search Engine Roundtable blog here. A good point is that since these pages have different search results, if you try to rel=canonical the paginated pages to each other, there's a good chance it'll be ignored.
-
RE: New website with slightly new urls
You need to 301 redirect all these old URLs to their new counterparts. If you just block them, people who have linked to these pages across the web will get 404 pages, so you'll be losing potential customers. Also, you won't be passing the PageRank along.
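If you're on Apache, a sketch of what those redirects could look like in an .htaccess file (the paths and domain here are placeholders; you'd map each of your actual old URLs to its new one):

```apache
# Permanently (301) redirect each old URL to its new counterpart.
# mod_alias syntax; paths and domain are made-up examples.
Redirect 301 /old-page.html http://www.example.com/new-page.html
Redirect 301 /old-category/old-product.html http://www.example.com/new-category/new-product.html
```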
-
RE: Understanding Canocalization, domain structure, redirects
To find out if your site is splitting domains, try going to different variations of your URLs and seeing what's served. For example, go to http://example.com, http://www.example.com, http://example.com/, and http://www.example.com/. These should all end up with the same URL in the address bar when the page loads. If they don't, your server needs to be configured to do these canonical redirects. How you do that varies depending on what software you use, but the Q&A forum has had a lot of answers related to configuring redirects, so look there, or try Googling it.
I'm not sure it'll make a huge difference whether you choose example.com or www.example.com, but whichever you do choose, be consistent throughout your site.
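For example, on Apache (assuming mod_rewrite is enabled, and using example.com as a placeholder), forcing everything onto the www version might look like this in .htaccess:

```apache
RewriteEngine On
# 301 redirect any non-www request to the www version of the same URL
RewriteCond %{HTTP_HOST} ^example\.com$ [NC]
RewriteRule ^(.*)$ http://www.example.com/$1 [R=301,L]
```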
If you really want to get into the nitty-gritty of web pages, the main tool I use to view web requests, HTTP headers, source, etc. is Firebug, a Firefox add-on. Chrome has something similar, which I think comes pre-installed (if you can right click something and choose "Inspect element", you have it). To learn it, play around with it a bit. The HTML tab in Firebug (Elements tab in Chrome) will show you the source of whatever you're selecting in the page. The other tab it sounds like you'll be interested in is the Net tab (Network tab in Chrome). Here you can see all the different files the page is loading, and you can view their statuses, headers, and responses.
Another tool I use frequently in Firefox & Chrome is the Web Developer Toolbar. It lets you enable/disable a lot of different things (caching, images, CSS, JavaScript), and clear your cache with a few clicks.
-
RE: Does the Facebook Share button still exist?
Like and share are different. Sharing will just give you the option to post something on your wall, but you won't "like" it. Here's an article which should give you some info about implementing share buttons. From what I've read, it looks like Facebook will be phasing out Share in favor of Like at some point, so who knows how long Share will be around. Share is a little outdated, since when you like something, you also have the opportunity to share it on your wall. And if you're going to share something on your wall, don't you like it?
Huffington Post allows you to do both on their articles. For example, see here.
In response to your reply on Nicholas' answer, I think sites have some control over the additional text you can write on your wall when you "like" something. Maybe there's some psychology involved: if you click "share", you're more prone to adding a comment about what you're sharing, and with "like", you're less prone to adding text when you post it to your wall.
-
RE: Is this the best way to get rid of low quality content?
rel=canonical is more for when there are parameters on your URLs that you can't really do anything about. When you know one URL is being served, but should be another, you should use a 301 redirect. So in this case, you should pick which URL you like better, either with or without the trailing slash, and redirect one to the other. Google treats both of these as two completely separate pages, which is why you're seeing views on one and not the other. If you can't configure the redirect, then you could resort to rel=canonical.
If you have pages with similar content but not a lot of views, then 301 redirecting those pages to another page with more views would be fine. That'll pass their PageRank along, and it's good for people who find an original URL later, because they'll land on an actual page instead of your 404 page.
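As a sketch, on Apache the trailing-slash redirect could look like this in .htaccess (here I've arbitrarily picked the slash-less form as canonical):

```apache
RewriteEngine On
# 301 redirect /some-page/ to /some-page, but skip real directories,
# which legitimately end in a slash
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule ^(.+)/$ /$1 [R=301,L]
```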
-
RE: Differents TLDs and same contents not a problem Matt Cutts says?
-
RE: Robots.txt: excluding URL
You don't want to do this in robots.txt. If you serve pages with these parameters, people will inevitably link to them, and even if they're disallowed in your robots.txt file, Google may still index them, according to this: "While Google won't crawl or index the content of pages blocked by robots.txt, we may still index the URLs if we find them on other pages on the web."
This is what the rel=canonical tag is designed for. You should use that to tell Google the page is duplicate content of another page on your site, and that it should refer to that other page. You can read (and watch a video) about that here.
-
RE: Can Search Engines Read "incorrect" urls?
A few other things to note for having parameters in URLs:
- In Google Webmaster Tools and Bing Webmaster Tools, you can instruct the search engines to ignore certain parameters, so that they'll treat domain.com/topic?keyword and domain.com/topic as the same page (if ?keyword doesn't change the page content)
- You can also place the rel=canonical element on pages. So you could set domain.com/topic?keyword to rel=canonical to domain.com/topic to pass its PageRank along.
-
RE: Does anyone have any tips for SEO in WebSphere Portal with Lotus WCM?
The NCAA site does seem to be pretty well put together. Like you said, their TITLE changes from page to page, which you need for decent SEO. Once you figure that out, you should also be changing your META description tag within the HEAD on each page of your site, so it describes the page you're on. In the meantime, if you can't set it on a page-by-page basis, I would recommend removing your meta description altogether from all of your pages. That way, the search engines will try to pick out the most relevant text from your pages (relative to the user's search query) instead of always using "Auto Insurance from Universal Insurance Group Trusted. Fast. Service." (also, you're missing some punctuation here between Group and Trusted).
Working within this platform, one thing the NCAA site is doing well, which I think you could try to do, is getting that nasty session parameter out of your URLs. I think you only need it if you're tracking the user's session? The NCAA site doesn't include it until I click the member login link. Since Googlebot won't be signing in, if you can manage to get rid of these parameters when people aren't signed in, that would improve the SEO of your URLs.
The more of these random directories you can get out of the URLs, the better. It looks like the way it's configured right now, the URL doesn't change except for the WCM_GLOBAL_CONTEXT parameter at the end, which sets the content of the page? The NCAA site gets the WCM_GLOBAL_CONTEXT value to appear as part of the URL path, like it should be, rather than as a query parameter. That is much more natural, and I would imagine Google would rather see keywords in the URL path than in URL parameters.
-
RE: Does anyone have any tips for SEO in WebSphere Portal with Lotus WCM?
Let me preface this by saying I don't have any experience with WebSphere Portal with Lotus WCM. If you could describe more about what the platform outputs, or know of a site using it, I could take a look.
Non-pretty URLs aren't great, but they're not the worst either. If you're stuck with 'em, you're stuck with 'em. According to the SEOmoz 2011 rankings report, having keywords in the URL is still pretty important (69.9/100).
One way to work around the non-canonicalization of the URLs is including rel=canonical tags on all of your pages. That way, when Google comes across a non-normalized URL, you'll have the canonical tag to tell it the right page to pass the PageRank to. Provided you always know what the normalized URL is supposed to be for non-normalized pages, this makes it easy to always have the PageRank going to the right page. Even if the normalized version rel=canonicals to itself, Matt Cutts gives that the OK (video here). If you have a procedure that works to normalize URLs, that's best, since we never know what's going to happen next.
More sources about rel=canonical: Google Webmaster Tools help and Webmaster Central Blog.
-
RE: SEO advice when making a mobile site
Yeah... you'd probably want to redirect those offline advertising URLs depending on the device being used. Unless your site is primarily mobile, you'll probably want to distribute your regular URLs in offline advertising, not URLs with mobile.example.com.
For usability, you should have links to switch between the two versions of the site. What would be even better is to set a cookie when a user changes which version they're on, to prevent the redirect in the future. That way, if they switch away from the version we think they'd want for their device and come back to the site later, we'll serve the version they selected.
For example, if a user prefers the desktop version on their iPad, the first time they go to your site it'll take them to the mobile version. Once they click the link for the desktop version, they'll always receive pages from the desktop version of your site (for 1 year unless you renew the cookie or they clear their cookies). This will work even if they click a new link to your site from search result pages.
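One hedged sketch of that, assuming Apache with mod_rewrite (the cookie name, user-agent list, and hostnames here are all made up for illustration):

```apache
RewriteEngine On
# Send mobile user agents to the mobile subdomain, unless they've opted
# for desktop via a "site_version=desktop" cookie that we set when they
# click the "desktop version" link (with a 1-year expiry).
RewriteCond %{HTTP_USER_AGENT} (iphone|ipad|android) [NC]
RewriteCond %{HTTP_COOKIE} !site_version=desktop [NC]
RewriteRule ^(.*)$ http://mobile.example.com/$1 [R=302,L]
```

The redirect is a 302 rather than a 301 on purpose: which version gets served depends on the visitor, so you don't want search engines treating it as permanent.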