Hi Bossandy.
I don't want to list off names here publicly, but I'm happy to recommend a couple of the copywriters we frequently hire for client work if you'd like to PM me.
Hi Bhanu,
Domisol is correct. NoFollow is talked about more frequently, but I'm referring to what is called NoIndex. Using NoIndex tells Google not to show that page in the search results. In contrast, NoFollow tells Google not to pass value through all or some of the links on a page.
I also agree with Domisol that NoFollow should not be used along with NoIndex in this situation. Allowing link value to flow freely throughout the site while selectively noindexing the tag pages is a better solution. It will also help Google index more of the pages with more obscure tags.
To NoIndex your tag pages, go to the Indexation page under the SEO options. Under Indexation Rules, select "Subpages of Archives and taxonomies," "Tag Archives," and "Date-based Archives." If you're not doing anything special with the Author pages or only have one author, go ahead and select that, too. I would prefer to see you add unique content to the Category page before you NoIndex it, but that one could arguably go both ways.
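If you ever want to verify what those plugin settings do under the hood, a noindexed tag page just carries a standard robots meta tag in its head - this is generic markup, not anything specific to your theme:

```html
<!-- In the <head> of each tag/date archive you want kept out of the index -->
<!-- "noindex, follow" removes the page from results but still lets link value flow -->
<meta name="robots" content="noindex, follow">
```

View source on one of the tag pages after saving the settings and you should see this line appear.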
Hmmm - are you referring to a list of keywords that are relevant to each of their clients? EG if they're working with a pet food company, they're tracking "Dog food", "cat food", etc.?
Or are you referring to agencies that track an index of general competitive keywords to see how SERPs are changing over time? EG "credit card consolidation", "payday loans", "Women's heels", etc.?
"Do as we say, not as we do..."
No site is perfect, and Google is far from it. But kudos to them for testing much of the time and backing off of poor designs, like they did a few weeks ago with the extremely flawed nav menu they were testing: http://searchengineland.com/google-moves-away-from-large-navigation-drop-down-menu-111057
There are 2 schools of thought here:
The pure old school SEO approach is to leave all of that content on the site so long as it meets some basic content quality requirements like length, non-duplication, etc. If those posts have a good number of inbound links (more than 5-10% of your link profile), then there's an even stronger argument to leave them there, and just publish some new content so that they go deeper into your archives.
I think that's a perfectly legitimate approach in this case, unless you're getting tons of annoying contact form submissions from people wanting help with problems you don't want to be involved with. That would be a good argument for removing these posts.
The second angle is the editorial content purist approach. This approach would say "traffic and links be damned, if it doesn't convey the intended message to your target audience, kill it." If your content was really off topic - like a gardening tutorial, I would recommend this route.
However - your content is related to IT and programming from what I can see - eg a post about Accessing Networking Settings in Windows XP. In my opinion, that type of content is still compelling to a small business owner who might hire you, because it's further proof that you're technically savvy. From that standpoint, I think you could make a good argument for leaving that content in place.
The final point is whether Google thinks your site is about SEO/web design, or about IT support. This is a legitimate concern. I would address it by simply adding a lot of new blog posts over the course of the year, entirely dedicated to web design and marketing. It's possible to "retrain" Google's understanding of what your site is about by doing this consistently enough. Here's a great post that I think would be a good tactic for you to pursue while retraining Google to understand your site's new purpose: http://www.anumhussain.com/presentations/topics-over-keywords.
Sounds like they're doing about as well as any affiliate business model can do in regards to creating valuable content and value for site visitors.
I wouldn't worry about linking to the other site - if that's the only way to make money, then there's not much choice in the matter.
Google isn't going to "see it as leaving one site for another" - if anything it's a good thing that site visitors went to that site, and found a link that led to whatever they were looking for. Google can't draw any conclusions beyond that.
If you're worried about passing link value, find a way to 302 redirect the link.
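One common way to do that is an internal "jump" URL that 302s out to the other site. A minimal sketch, assuming Apache with mod_alias - the path and destination domain below are placeholders:

```apache
# .htaccess sketch: point your outbound links at /go/partner,
# which issues a temporary (302) redirect to the external site
Redirect 302 /go/partner http://www.example-affiliate.com/
```

Then your on-page links point at /go/partner instead of the external URL directly.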
"apart from main page, contact page, about us and some other generic pages, site name should be removed as it might produce duplicate content."
As a blanket statement, this is misleading at best.
Default title tags for most websites are "Page Title - Site Name", or something similar.
I generally like to keep the brand name in the title tag because I'm usually working on sites where the brand is valuable. For instance, "Seattle Lodging - Airbnb" is more compelling than "Seattle Lodging".
The main reason I usually get rid of the brand in the title tag is when I'm trying to create a high-clickthrough-rate title tag and the brand portion is getting in the way. For example "Seattle Lodging & Vacation Rentals - Nightly Rates Starting from $79!" might be a really compelling title tag and I can't fit the brand name on there, so I'll drop it. BUT I won't drop it elsewhere on the site where I want a solid default title tag format.
The only caveat is that if your brand sucks (eg DavidsCheapAffiliateReviews.info) then it might actually detract from your click-through rates, and you might want to leave it out.
You say you're 4th - are any of the SERPs specifically local/map listings, or is it a normal SERP with no map?
Is it the type of industry where dedicating a page to the region/location would be valuable?
Let me try and tackle these one by one:
Capitalization:
Yes - typically you can handle this with a server-level redirect that forces all URLs to be lower case. Google will see the upper-case version as a separate page, and that's why it's getting flagged as duplicate content. Be sure to double-check that this is being processed as a 301 and not a 302. Wheregoes.com is a quick, easy way to check.
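As a sketch of what that server-level rule looks like on Apache with mod_rewrite (note that RewriteMap only works in the main server or virtual host config, not in .htaccess - your host's setup may differ):

```apache
# Define a lowercasing map (server/vhost config only)
RewriteMap lc int:tolower

RewriteEngine On
# If the requested path contains any uppercase letters...
RewriteCond $1 [A-Z]
# ...301 it to the all-lowercase version
RewriteRule ^/?(.*)$ /${lc:$1} [R=301,L]
```

After adding it, run an uppercase URL through Wheregoes.com to confirm it reports a single 301 hop.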
5 Sleep Options:
I'm not sure why, but http://www.bellybelly.com.au/5-sleep-options-for-your-baby-where-will-your-baby-sleep is producing a list of articles almost identical to the ones being shown at /birth. So, that's the cause for the duplicate content here. I'm guessing it's a CMS error of some kind.
f196 vs f193:
These are showing as duplicate because there's nothing unique on the page except for Queensland vs Victoria. You'll need to find a way to highlight threads that are about Queensland or about Victoria to make this page useful, or you can simply "noindex, follow" these pages so that the duplicate content is irrelevant.
/tas/doulas vs tas/etc., etc.
These are duplicate content because they all say "No advertisers found matching your criteria." That's it. They need some real, actual content; otherwise there's no reason for them to exist. For these directory listings, try creating a handful of free listings to serve as temporary filler - that may also nudge competitors into adding a listing of their own.
So, I'm going to throw out some ideas for contest promotion that should apply broadly to what you're doing... The first step is to figure out your audience so you know where to target. There are a billion mommy blogs that will promote contests, but if it's not the right audience for your contest then it's a waste of time.
Figuring out the audience will dictate where you go to promote the contest. I'd start out simple - Google "literary contests" and see what comes up - you might get a few places you can email and tell about the contest that would be willing to write about it, or share with their social media pages and newsletter. Then move on to similar queries like "list of literary contests", "writing contests", etc.
Your question mentions doing keyword research for keyword search volume. I'm not sure how that would apply to contest promotion. Overall, I think you have the right idea in manually researching different outlets.
A few other posts to help you out:
Can you track the dropoff to within a week of any significant site changes?
Did you noindex something like category or tag pages on a blog?
Did you add rel canonical to sections of your site?
I just scanned through 20-30 sites of various traffic/content models and I don't see anything major on any of them, but that doesn't mean you're not seeing something specific happening.
If I were you I would check into Bing Webmaster Tools data and see what they can tell you about recent changes to crawling or visibility to see if there's a signal there that you can look into further.
It's extra content for your site, but it's also duplicate content (and I guarantee that Google found the Twitter version before it found the version on your site).
Does it add any value for visitors to your site?
It sounds like you should be able to go into GWT and go to Configuration > URL Parameters and tell Google which URL parameters you don't want to be crawled.
They have a quick guide to doing this correctly at https://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687. Definitely read through that before you do anything, to make sure you don't accidentally deindex more than you planned on...
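If you later want a belt-and-suspenders approach at the crawl level, robots.txt wildcard rules can also keep parameterized URLs from being crawled. Note this is a different technique than the GWT parameter tool (it blocks crawling rather than controlling indexing), and the parameter name below is just an example - substitute your own:

```
# robots.txt - stop crawling of any URL containing a sort= parameter
User-agent: *
Disallow: /*?sort=
Disallow: /*&sort=
```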
All sites have a certain number of links without anchor text, so these will reaffirm a natural backlink profile to your site.
Does it add value the same way "rental car service" as anchor text would? No, it won't.
If rental car service is mentioned elsewhere on the page, though, Google can recognize that using co-citation. If you're getting a link from a directory or somewhere it's likely that Rental Car Service is mentioned on the page as your company name. Rand has a post about optimizing biography links that touches on how this is valuable.
But, if you really have the exact match domain with hyphens then you don't need the anchor text quite as badly, and you should do alright with some decent link building.
You'll want to use Rel="publisher" for this situation, pointing back towards the client's Google+ business page.
Install it: http://www.thoughtsfromgeeks.com/resources/2792-Rel-Publisher-meta-tag-Syntax-Use.aspx
Then test it with the snippet tester: http://www.google.com/webmasters/tools/richsnippets
Won't necessarily have the same effect as rel="author" on snippets at this point, but it's the best analogue if you need to use a business page.
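For reference, the markup itself is a single line in the site's head, pointing at the Google+ business page - the page ID below is a placeholder you'd swap for the client's:

```html
<!-- Site-wide, in the <head>; replace with the client's Google+ page URL -->
<link rel="publisher" href="https://plus.google.com/YOUR_PAGE_ID/">
```

Then paste a page URL into the rich snippet tester above to confirm Google picks it up.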
I'm assuming that she serves clients outside of her locations if you're concerned about store radius? I would figure out the actual area that she is willing to serve clients in, and divide it between the two locations. Each location will then have its own Google Places listing, serving a different radius.
That answer feels a bit too easy - maybe I'm missing a nuance in your question?
Google passes link value through the 301 redirect to the destination page. Since there is no longer a page with content where there used to be, they do not take into consideration the former content of the page.
I didn't mean to post the URL, I typically wouldn't on here. You could always post a bit.ly link and remove it or something like that as well.
Keep in mind that OSE has a couple week delay or longer on showing new links, so you're looking at links built in December or earlier. If they really are actively building links right now, there will be a delay in seeing those.
In my experience, 301s will currently mess up your Google +1 counts, Facebook Like counts, tweets, and LinkedIn shares. Both my clients and I have typically learned about this the hard way, after the URL has changed and the 301 redirect is already in place.
I have tried to research whether the effect is passed along but have not seen a conclusive answer. This Google Webmaster Central thread from August 2011 has more information on the subject and is worth reading:
How does Google treat +1 against robots.txt, meta noindex or redirected URL:
There is a response from a Google Employee that states:
"Hi,
The +1 Button interacts with robots.txt and other crawler directives in an interesting way. Since +1's can only be applied to public pages, we may visit your page at the time the +1 Button is clicked to verify that it is indeed public. This check ignores crawler directives. This does not, however, impact the behavior of Google web search crawlers and how they interact with your robots.txt file.
The other issue you mention, of redirecting pages, can be resolved using link rel="canonical" elements on your pages.
Cheers,
Jenny"
Her answer seems to point towards some sort of "credit" getting passed along when you use rel="canonical", but that's just my interpretation of what she says.
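For anyone implementing that, the canonical element is one line in the head of the old/duplicate URL, pointing at the page you want the counts consolidated to - example.com here is a placeholder:

```html
<!-- On the old or duplicate URL's page -->
<link rel="canonical" href="http://www.example.com/new-page/">
```

Again, whether the +1/share counts actually carry over isn't conclusively documented, so test on a low-stakes page first.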
If you're targeting one location, you can probably work with the homepage.
More than one location? I'd use specific pages for each location.
Regarding too many pages - I'd argue that you only have too many if they have duplicate content. If each geo-targeted page has content that is unique to that city or area, I think you're in the clear. That said, if it's implemented in a way that looks spammy, then you have too many...