How to stop downward drift
-
Robert,
EGOL and Robert are both right.
There are some things you can do to stop the drift, and possibly reverse it, but that takes more effort.
I'm out with my iPad, so the formatting of this may not be good.
First, look for content that is 100% yours and see if that is falling too. Let me know what you find.
Fix the red-group problems that SEOmoz tells you about.
Then
1. Go to your WMT account and see if there are any problems listed - fix them.
2. Check to see if you have any outgoing links that are broken - fix them.
3. Check for duplicate titles, descriptions, and content - fix them, or delete and redirect.
4. Rewrite posts that are not uniquely yours - this is the hard work, especially if you have a lot of pages.
5. Get a G+ profile if you don't already have one, link your site to it, and link it back to your site.
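As a starting point for #2, the outgoing links can be pulled out of each page with a short script. This is only a minimal sketch in Python using the standard library - the sample HTML and domain below are made up, and in practice you would fetch each of your own pages and then request every collected URL to see which ones return errors:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def outgoing_links(html, own_domain):
    """Return only the links that point away from own_domain."""
    parser = LinkCollector()
    parser.feed(html)
    return [link for link in parser.links
            if urlparse(link).netloc not in ("", own_domain)]

# Made-up page fragment; in practice, feed in the HTML of each of your pages.
sample = '<a href="/about">About</a> <a href="http://example.org/page">Ref</a>'
print(outgoing_links(sample, "mysite.com"))  # ['http://example.org/page']
```

From there, each collected URL can be requested (for example with urllib.request) and anything that errors out gets flagged for fixing.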
-
Interesting. What would compel the Oracle to grab the block of text and do such a thing?
-
I have some pages that once performed well and then suddenly started to decline. They had been on my site for a few years, with product descriptions that I had written. I looked for duplicate content and found that lots of domains featuring products that are "made in China" had copied my text. I don't know if that was the cause, but maybe.
I also have a couple of articles of a couple thousand words each on a topic that is currently getting a lot of search. I noticed that my long-tail traffic for these pages was declining. I found that a lot of spam sites (dozens and dozens) had grabbed a few sentences from my site and a few from several other sites, slapped them together, and were now ranking for very long-tail queries.
-
I have 540 sites that are grabbing my jobs and re-posting them without permission. I hope I am not in line to get hit by Panda.
-
EGOL,
If you haven't already done this, consider my #5 in the answer below.
-
Boodreaux
If you haven't already done this, consider doing my #5 in the answer below.
That should brand your ID into your text and photos.
Here is what happens with Google - which is not as smart as we (and they) think it is.
If you take a snippet of text that is keyword-rich and slap it on 10 other sites, they will almost always (in Google search) outperform your whole unique story.
-
Thank you Alan. I think that is a really good idea.
On the site with the long articles, all of the content that I have written appears there without attribution. I have hundreds of articles on the site and enjoy them being anonymous.
However, I know that your suggestion might fix or reduce the problem. I've thought about doing it in the past. I need to put some thought into claiming the content. It would probably increase my income.
Thanks for making me think about this again. You deserve more than one thumbs up for your reply.
-
Alan, that is part of what I believe Google really wants. With this and rel="author" there should be the ability to at least begin to mark ownership and provide a timestamp-type trail. By virtue of the content originally linking to EGOL, it is then "his." That won't stop duplication of the content, but it likely provides a point of comparison in the who-wrote-it-first equation.
I just read through it the first time you posted it and thought it was good. When you pulled it out in relation to EGOL's response I thought, Duhhh, I missed the importance in the moment. Great job, Alan.
-
If you take a snippet of text that is keyword-rich and slap it on 10 other sites, they will almost always (in Google search) outperform your whole unique story.
Alan, I'm missing something or not getting it (perhaps both).
I understood there was an "indexing time stamp" at Google that helped it identify the original content and therefore punish those who scraped it.
Is this not the case? I thought Panda was supposed to enhance that ability rather than do just the opposite and punish the origins of the content.
-
"indexing time stamp"
I can say with absolute confidence that an "indexing time stamp" either does not exist at Google, or it does not work at least half of the time.
I thought Panda was supposed to enhance that ability rather than do just the opposite and punish the origins of the content.
From what I have seen, Panda is a domain-level throttle that impacts sites that trigger an invisible trip wire. The presence of duplicate content (you duped somebody, or somebody duped you) and slapping visitors' faces with ads are possible locations for the wire.
Google has no reliable way to know the originator of content unless author pages and reciprocal rel="me" links are in place - and those can be forged by others.
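For what that markup looks like in practice, here is a rough pair - the URLs and profile ID are placeholders, and this reflects how authorship and rel="me" linking was generally set up rather than a guaranteed recipe:

```html
<!-- On each article: link to your on-site author page -->
<a rel="author" href="https://example.com/about-the-author">Author Name</a>

<!-- On the author page: a reciprocal rel="me" link to the external
     profile, which in turn links back to the site -->
<a rel="me" href="https://plus.google.com/0000000000000000000/">My Google+ profile</a>
```

The point of the reciprocity is that either link alone is easy to forge; the matched pair is harder.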
-
"took a hit": like, permanently losing 25% directly upon Panda's US release, and another 12% upon its worldwide release. It was quite obvious.
SEOmoz and GA show no evidence that rewritten pages do any better than the more-or-less static ones (no pages have been totally static). The three pages that are less than a year old have pretty much failed to register at all with Google, even though they are in the index, have social links, and rate an A with SEOmoz page analysis. (This doesn't count devotionals.) Two of those pages are on what has become pretty much a hot topic in the media, too.
I'm hoping that the blog (coming in a few weeks) helps. But I still need to know how to kill the site's overall downdraft - roughly 80 visitors a week lower in each of the past 4 weeks. And pages that had been in the top 10 are now dropping 10-20 positions from where they were, despite content and page-analysis improvements.
-
I'm still looking for some working answers.
-
My best guess at your problem is that your site has a lot of content that has been copied by other websites or has been republished from other websites.
The result is that you are being hit by the panda filter. That usually results in a site-wide reduction in search rankings.
I have seen this on one of my sites where we republish articles at the request of educational institutions and government agencies. We removed a lot of that content from the Google index with the following line in the <head> of the HTML code:
<meta name="robots" content="noindex, follow" />
Rankings went back up after a lot of those pages were deindexed (but deindexing those pages cut a lot of our potential search traffic).
No guarantee that this is your problem. Just my best guess.
Read Alan Gray's answer and follow his 5 suggestions if you want other actionable ideas.
I believe that your problem will require surgery and hard work.
-
Also, don't wait for Google to tell you if you have duplicates.
That is what I was doing - silly me!
I did a comprehensive headline check and discovered that there were more than a thousand duplicates in our system because, a few years back, one of the editors was double-clicking the mouse when publishing stories - that was before I put in a mechanism to prevent it. I thought there were only occasional duplicates and that I'd fixed them, but it seems I hadn't!
So check every page on your site for duplicate titles, duplicate descriptions, and duplicate content.
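That kind of check is easy to script once you can dump each URL with its title. Here is a minimal sketch in Python - the page data is made up, and in practice it would come from your CMS database or a crawl; the same function works for descriptions:

```python
from collections import defaultdict

def duplicate_groups(titles_by_url):
    """Group URLs by title (case/whitespace-insensitive) and keep
    only the titles that appear on more than one URL."""
    groups = defaultdict(list)
    for url, title in titles_by_url.items():
        groups[title.strip().lower()].append(url)
    return {title: urls for title, urls in groups.items() if len(urls) > 1}

# Made-up example data
pages = {
    "/jobs/123": "Senior Editor Wanted",
    "/jobs/124": "Senior Editor Wanted",   # double-click duplicate
    "/jobs/200": "Staff Writer Wanted",
}
print(duplicate_groups(pages))
# {'senior editor wanted': ['/jobs/123', '/jobs/124']}
```

Every group it returns is a candidate for a fix-or-redirect decision like the ones described above.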