Is there a way to make Google realize/detect scraper content?
-
Good morning,

Theory states that duplicated content reduces certain keywords' position in Google. It also says that a website that copies content will be penalized. Furthermore, we have spam report tools and the scraper report to report these bad practices.

In my case, the website both sells content to other sites and writes and prepares its own content, which is not for sale. However, other sites copy the latter and publish it, and Google does not penalize their position in the results (neither in organic results nor in Google News), even though they have been reported using Google's tools for that purpose.

Could someone explain this to me? Is there a way to make Google realize/detect these bad practices?

Thanks
-
Theory states that duplicated content reduces certain keywords’ position in Google.
Wrong. Google might omit duplicate results or ban sites that abuse the practice, but it doesn't lower rankings based on the number of duplicates. Otherwise Wikipedia, or aggregating websites such as car-dealer listings, would be nowhere to be found.
It also says that a website that copies content will be penalized.
Semi-wrong. It will be penalized only if the copying is spammy and excessive.
Watch this video of Matt Cutts on duplicate content - https://www.youtube.com/watch?v=mQZY7EmjbMA
So, my understanding is that there is no 100% reliable way of taking down scrapers, because some of them are actually "good" scrapers, like Facebook, the biggest scraper in the world.
So, to beat them in rankings, just make sure that you are an authority in your industry, have an excellent backlink profile, and that all aspects of SEO are properly implemented. And yes, sometimes those reporting tools can help.
-
Hello,
The reporting tools are not particularly useful in this scenario, as duplicate content is not a penalty-worthy situation in itself. While Panda is used to demote spam-oriented content, duplicate content is treated as more of a null/void situation than as a penalty.
For example, when you publish newly created original content and it is crawled and indexed, Google attributes your domain as the origin of that content. If another website showcases this content, it is recognized as a duplicate by Google (which compares it to your indexed version) and given neither benefit nor penalty. In effect, duplicate content is a neutral practice in itself; it's the spam that Google is really after.
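As a rough illustration of how a search engine might recognize one page as a copy of another (a simplified sketch using word-shingle overlap, a classic near-duplicate technique, not Google's actual algorithm):

```python
def shingles(text, k=3):
    """Split text into the set of overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """Jaccard similarity between the shingle sets of two texts (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "theory states that duplicated content reduces certain keyword positions"
scraped = "theory states that duplicated content reduces certain keyword positions"
rewritten = "a completely different article about cars and dealerships in town"

print(jaccard(original, scraped))    # identical text -> 1.0
print(jaccard(original, rewritten))  # unrelated text -> 0.0
```

A crawler that sees a near-1.0 similarity between a new page and an already-indexed one can simply keep the earlier (attributed) version and drop the copy from results, with no penalty involved.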
Here's a beginner's guide to duplicate content that spells it out quite nicely:
https://moz.com/learn/seo/duplicate-content
As Charles mentioned, copied content is not an automatic ban. If it stays within "acceptable limits", there is no detrimental impact to the website. However, if the website is made up purely of content copied from multiple sources, and it spams links or stuffs keywords, it will be dealt with accordingly.
In short, this website will not be penalized in the fashion you desire unless they are spamming or keyword stuffing (among other penalty-worthy offences). Your best bet is to beat them out by building up your link profile and continuing to post valuable, original content.
Let me know if there is anything else I can help with.
Rob
-
Stolen content is a big issue today, and recent reports have shown that people who steal your content can knock you out of your search engine position, no matter what your authority, backlink, or social-share profiles look like.
This great presentation given by Jon Earnshaw at Brighton SEO last week gives a better idea of how it has affected other websites: http://www.slideshare.net/jonathanearnshaw/is-your-content-working-better-for-someone-else
Google used to have a Scraper Report where you could file a complaint against the offending site and get it removed from the SERPs, but they have since removed this tool.
I found a similar way to report stolen content in this blog post:
http://www.techng.info/removing-your-stolen-content-from-google-search-using-dmca/
Hope this answers your question, even if it comes a bit late relative to the original post.

-
I've found backlinks on scraper websites pointing to the scraped website I look after.
They appear in CSS, images, and forms.
What's the point of doing that on their side?
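One common explanation (an assumption, not confirmed in this thread) is that such links are simply residue: when a scraper copies raw HTML wholesale, absolute URLs in stylesheet links, image sources, and form actions keep pointing at the original domain. A quick scan for that residue, using only Python's standard library and hypothetical example domains, might look like:

```python
import re

def find_leftover_links(html, original_domain):
    """Return attribute values (href/src/action) in the given HTML that
    still point at the original domain -- typical residue when a page's
    markup was copied wholesale by a scraper."""
    pattern = re.compile(
        rf'(?:href|src|action)\s*=\s*["\']([^"\']*{re.escape(original_domain)}[^"\']*)["\']',
        re.IGNORECASE,
    )
    return pattern.findall(html)

# A made-up scraped page: three asset/form URLs still reference the original site.
scraped_page = '''
<link rel="stylesheet" href="https://original-site.example/style.css">
<img src="https://original-site.example/logo.png">
<form action="https://original-site.example/search"></form>
<a href="https://scraper.example/page">local link</a>
'''

for url in find_leftover_links(scraped_page, "original-site.example"):
    print(url)
```

Running this over a scraper's pages would surface exactly the kind of CSS, image, and form links described above, which suggests laziness on the scraper's side rather than an intentional strategy.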