Posts made by KrisRoadruck
-
RE: Will cleaning up old pr articles help serps?
Nah. Ditch them. Article directory links are pretty much garbage these days. Any minuscule benefit they may still provide isn't worth the headache of wondering whether those are the links keeping you down. And if you ever have to file a reconsideration request, it'll be one less thing for you to clean up, because under a manual review there isn't a Google engineer out there who isn't going to frown on those links.
-
RE: Number of occurrences of a keyword
Keyword density is irrelevant unless you are just spamming the page with it. Assuming you are writing naturally, I'd have to guess it's a really long page if your partial keyword shows up 60 times. If you aren't writing naturally, fix it, of course. If, however, it's not a big block of text we're talking about but rather an ecommerce page with a ton of product listings that just happen to contain the word (for example, a page listing different types of boots, so the word "boot" naturally appears a bunch of times), I think Google is smart enough to figure out what's going on there and not ding you for it.
Hope that helps.
-
RE: Will cleaning up old pr articles help serps?
Article directories have been looked down on by Google for quite some time now (basically since Panda, long before Penguin). Couple that with Google coming down on overly aggressive money-anchor linking, plus the semi-duplicated content issue (I'm guessing you spun the articles so you could shop the same piece to all 10 directories each time), and that's a bad recipe. I'd definitely delete the articles if you still have access to all the accounts. 50 unique articles * 10 directories means roughly 500 junk links in total? Ditch them, then work on getting some really strong replacements (you won't need nearly as many). High-quality guest blogging is the new article submission. See if you can get guest-author accounts at a few tightly related sites in your industry, or aim high and try for guest-author accounts on general-purpose but super-authoritative sites (a few examples that allow this: National Geographic, Cracked.com, Washington Times, guardian.co.uk, Investopedia; you get the idea). Dumping those old links and adding in even 20-30 great ones is going to be a huge lift for you.
-
RE: Advisable to pay for a link from a highly reputed site in same domain?
One thing to keep in mind when doling out this kind of advice is that time isn't free. If the cost of a one-off payment for a link is far less than your hourly cost * the number of hours it takes to develop great content and make a new friend, factoring in the failure rate on that sort of outreach (which is very high if you've spent any time doing it), then saying it'll save money is actually not true at all. Not advocating buying links per se, but yeah, time isn't free.
-
RE: Advice on buying a domain name for a valuable link
I do this pretty regularly, so hopefully this will be of help. If you decide to go this route, there is a ton of due diligence that needs to be done. First, keep in mind that post-Penguin there are a ton of people dumping essentially burned domains. When you are digging through your drop lists, start off by making sure the domain is still indexed by Google at all. Next, take your list, dump it into a Google Doc, and use the Moz API (tutorial here: http://www.seomoz.org/ugc/updated-tool-seomoz-api-data-for-google-docs) to pull up the following stats for your list:
Page Authority
Domain Authority
Domain Trust
Root Linking Domains to Root Domain
Page Authority is really not overly important at this stage; you are just using it to figure out whether the site was WWW or non-WWW for when you do your site rebuild (more on that later).
For the others, don't touch anything with a Domain Authority below 40, a Domain Trust below 4, or fewer than, say, 150 root linking domains.
Once you have your remaining list (it'll be a lot smaller than the one you started with), you then have to basically check SEOmoz's work. A lot of the time a domain that has been spammed all to hell will still have semi-decent Moz numbers; they just aren't that good at detecting spammy links yet. Look for really high money-term anchor counts (bad), really high link-to-root-linking-domain ratios (means lots of sitewides, also bad), and things of that nature.
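To make the money-anchor check concrete, here is a minimal sketch of scoring an anchor-text profile. The function name, data shape, and the worry threshold are my own illustrative assumptions, not output from the Moz API or any particular backlink tool:

```php
<?php
// Given a backlink profile as a flat list of anchor texts, compute the
// share of anchors that contain a "money" term. A high ratio suggests
// manipulative link building on the domain you're vetting.
function moneyAnchorRatio(array $anchors, array $moneyTerms): float
{
    $hits = 0;
    foreach ($anchors as $anchor) {
        foreach ($moneyTerms as $term) {
            if (stripos($anchor, $term) !== false) {
                $hits++;
                break; // count each anchor at most once
            }
        }
    }
    return count($anchors) > 0 ? $hits / count($anchors) : 0.0;
}

$anchors = ['cheap widgets', 'Acme Corp', 'http://acme.com', 'cheap widgets'];
$ratio = moneyAnchorRatio($anchors, ['cheap widgets']); // 0.5
```

A profile where half the anchors are an exact money term, as in this toy example, would be a strong reason to toss the domain out; a natural profile is dominated by brand names and bare URLs.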
Once you've chucked out all the domains that are really ugly, take your now very small list and go to archive.org. You are looking to see how long the domain existed in its most recent state. Does it look like it changed hands a bunch, or has it been pretty much the same for years? Does archive.org have most of the site archived? You'll need that for site reconstruction. As others have mentioned, you basically want to restore it to its previous state as closely as you can, at least to start out with.
If all that checks out, you'll want to look at the whois info. Register it with the same registrar it had last time and match the old whois data as best you can. Make it look like the owner just forgot to renew for a while but then fixed it. Once the whole site (or as much of it as you can recover) is restored, just let that sucker camp for a few weeks. Let Google get used to it being back, and make sure they don't pull a PageRank reset on it for the drop (if it had some and it suddenly drops to zero, you might as well toss the thing out; it means Google knows it changed hands). If all looks good after a few weeks, slip your link in wherever it makes sense. A new page linked to from the home page would be ideal; adding a new link to an old content page is in most cases a pretty glaring sign of link manipulation.
Hope that helps

P.S. The version of the Google Doc included in that tutorial doesn't have Domain Trust or root-domain-link data built in. You'll need to modify the code slightly if you want to pull those stats. Here is the bit you need to change.
Under this row:
"upa" : "page authority" , // 34359738368
add these two rows:
"ptrp" : "domain trust" , // 524288
"pid" : "root domain links" // 8192
Then change this row from this:
var SEOMOZ_ALL_METRICS = 103616137253; // All the free metrics
to this:
var SEOMOZ_ALL_METRICS = 103616669733; // All the free metrics
(That's 103616137253 + 524288 + 8192, i.e. the original value with the two new metric bit flags added in.)
Now, in the main sheet, just add two columns, one with the heading "domain trust" and the other with "root domain links", and be sure to change the yellow-box formula to include the extra two columns in its fetch. Cheers.

-
RE: How important are internal pages to overall site rank?
It's important to have a hero page (or two); otherwise you risk keyword cannibalization. The quick-and-dirty way to handle this is to make sure that any page that is basically on topic with your hero page, but isn't the hero page, links back to the hero page with the appropriate anchor somewhere in the editorial body. There are other, more elegant ways to handle this, but this way will get you into less trouble than, for example, rel-canonicalling all those topically relevant non-hero pages to the hero page (bad idea; don't do it).
-
RE: How do you track new inbound links?
Not sure about OSE, but in both Ahrefs and Majestic there is a "first discovered" date on links. Perhaps this should go in the feature-request bucket for the OSE team?
-
RE: What Are the Best Practices for Ranking for Synonyms?
The best bet would be to use the words semi-interchangeably on that page (don't overdo it) and then grab some links with the various synonyms as anchor text. A single page is going to be the stronger bet.
-
RE: Why has SEOmoz added G+ code to multiple pages?
I doubt a single string is going to add much overhead. I'm not sure how they have designed their CMS, but it may just be that having it everywhere is easier to implement than making it page-specific. The other reason may be making sure Google has an easy time associating sub-page content with the author/publisher without having to look up the main page. I'm simply guessing here, though; I'm sure a Mozzer will come along and give you a more definitive answer once they see your question.
-
RE: Why has SEOmoz added G+ code to multiple pages?
Can't speak for the Moz staff, but meta tags aren't treated as links, and neither are rel attributes within an <a> tag. So there's no risk of "leaking link juice from every page". :-)
-
RE: How to compete with duplicate content in post panda world?
Not a complete answer, but instead of rel-canonicaling your dynamic pages you may just want to block them in robots.txt, something like:
User-agent: *
Disallow: /*?
This will prevent Google from crawling any version of the page that includes a ? in the URL. Canonical is a suggestion, whereas robots is more of a command.
as you can see from this query:
Google has indexed 132 versions of that single page rather than following your rel=canonical suggestion.
To further enforce this, you may be able to use a fancy bit of PHP code to detect whether the URL is dynamic and apply a robots "noindex, noarchive" to only the dynamic renderings of the page.
This could be done like this:
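As a minimal sketch of that PHP idea (function name and fallback behavior are my own assumptions; adjust to however your CMS renders pages):

```php
<?php
// Decide whether the current request is a dynamic rendering of the
// page (i.e. it carries a query string) and, if so, return the robots
// directive to send. Kept as a pure function so it's easy to test.
function robotsDirectiveFor(string $queryString): ?string
{
    if ($queryString !== '') {
        return 'noindex, noarchive'; // dynamic rendering: keep it out of the index
    }
    return null; // clean URL: index normally
}

$directive = robotsDirectiveFor($_SERVER['QUERY_STRING'] ?? '');
if ($directive !== null) {
    // Either send it as an HTTP header ...
    header('X-Robots-Tag: ' . $directive);
    // ... or print a <meta name="robots" content="noindex, noarchive">
    // tag in the page <head> instead.
}
```

Either delivery mechanism (header or meta tag) works; the header has the advantage of covering non-HTML responses too.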
I also believe there are some filtering tools for this right within Webmaster Tools. Worth a peek if your site is registered there.
Additionally, where you are redirecting non-www subpages to the home page, you may instead want to redirect them to their www versions.
This can be done in .htaccess like this:
Redirect non-www to www:
RewriteEngine On
RewriteBase /
RewriteCond %{HTTP_HOST} ^yourdomain.com [NC]
RewriteRule ^(.*)$ http://www.yourdomain.com/$1 [L,R=301]
This will likely provide both a better user experience and a better solution in Google's eyes.
I'm sure some other folks will come in with some other great suggestions for you as well.

-
RE: Why am I not in Top 50?
Much as people hate to hear it, on-page optimization plays a very minor role in what ranks and what doesn't.
Your page itself isn't a particularly strong page (only 1 inbound link pointing to it), but more importantly, your domain itself is just not there yet.
Take a look at these 2 links:
The first is a comparison between your page and the page ranking in the lowest spot on page 1 for the phrase:
This second one is a broad look at those in the top 10 for that phrase:
Pay special attention to the PA/DA scores and the root linking domains to both the page and the domain itself.
Right now it looks like your entire site has about 25 unique root linking domains. That's something you'll want to work on. Try to get the best links you can, of course, but any links are going to make a big difference for you right now. Spread them around: a bunch to the home page, a couple to this specific page. This is going to be a major factor in when and how high you rank. Good luck!
-
RE: Viral page not ranking on Google
Mind shooting over the Query and URL in question? I'd love to take a peek directly.
-
RE: How do I check if my IP is blocked?
Well, the first thing I'm seeing right off the bat is that Google seems to prefer the address you just gave me; however, that redirects to a subdomain reflecting your desired keyword. When did you do this? I would:
A) Make sure that you are redirecting properly.
B) If it's only been a short while, give Google time to figure out what's going on.
C) Make sure that any links that previously pointed to the root domain are switched to your preferred address.
Edit: Take a look at this...
-
RE: Why does google not show my ecommerce category page when I have the same keywords for many products in the product title?
A great way to get around this would be to use a different title in the category-page link to the product page than the one displayed on the product page itself. People at the category level likely already know they are in the men's shirt section, so at that level the title could be brand + color + style (long, short, sweater, et cetera) + sizing options; then, when the user gets to the actual product page, your H1 and title tags can carry the full string including "mens shirt". Obviously this may take some adjustments in your CMS, but if what you are seeing seems definitive enough, this is a great middle ground and likely worth the coding effort to add an extra entry field for "category page product title". You may even be able to automate it by simply having your system drop the category text from product titles on a category page.
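That automation idea can be sketched in a few lines. This is a hypothetical helper, not from any particular CMS; the function name and example strings are mine:

```php
<?php
// Derive the shorter category-page title by stripping the category
// phrase from the full product title. Falls back to the full title if
// stripping would leave nothing useful.
function categoryPageTitle(string $fullTitle, string $categoryPhrase): string
{
    // Remove the category phrase, case-insensitively ...
    $short = str_ireplace($categoryPhrase, '', $fullTitle);
    // ... then collapse any doubled or stray whitespace left behind.
    $short = trim(preg_replace('/\s+/', ' ', $short));
    return $short !== '' ? $short : $fullTitle;
}

echo categoryPageTitle('Acme Long-Sleeve Mens Shirt Blue', 'Mens Shirt');
// -> "Acme Long-Sleeve Blue"
```

The product page would keep using the full title; only the category listing calls this helper.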
-
RE: Seo is dead?
I think the "SEO is dead" statement comes from a fundamental lack of understanding of what SEO actually is. While we search marketers tend to lump a lot of extra marketing tasks into the SEO bucket, SEO in its simplest form means exactly what it says: search engine optimization. Search engines will always exist in one form or another; even using the "find" function on Twitter is tapping into a search engine. As long as there are people searching for things, and thus information retrieval, optimizing for that will always be a necessity. People tend to confuse short-term tactics with long-term definitions. Just because article marketing, link building, or Digg may no longer be the way to optimize something for search, or even if Google and Bing cease being the preferred search portals, it in no way means SEO will go away; only the strategies, tactics, and applications will change.
-
RE: How do I check if my IP is blocked?
IP-based bans are really rare with Google. If you can link to your site, that'd be great. I'm wondering if there might be a robots.txt problem or something else that arose from the move, as opposed to an IP-based block.
-
RE: Brand SERP Domination
You should be able to control positions 1-3 with just your primary domain. Work on pushing up some of your sub-pages for your brand term.
Once that's done, consider tossing a blog or something else (press kit, video content, employee directory) on a subdomain. That should net your domain another two page-one spots with a little effort.
Once that's done, Facebook, Twitter, Google+, LinkedIn, and YouTube can take the other five spots.
-
RE: Viral page not ranking on Google
If it just went viral yesterday, you might want to give it a few days to see ranking changes. Also remember that Google still relies heavily on links; traffic != links. The other pages you mentioned may have far more direct links than your page. My recommendation would be to hit up the people discussing your content and make sure they place a dofollow link back to it, preferably with anchor text that directly or partially matches the subject you are hoping to rank for.
-
RE: Can you recommend a few resources for Yahoo SEO
You'll want to focus on Bing rather than Yahoo, since Yahoo's search results are provided by Bing. In general, Bing has a pretty marginal market share, and it also has a much smaller index, one that is refreshed far less frequently than Google's. What you are seeing is about par for the course at the moment.