Is 100% duplicate content always duplicate?
-
Bit of a strange question here that I'd be keen to get others' opinions on.
Let's say we have a web page which is 1,000 lines long, pulling content from 5 websites (the content itself is duplicate, say RSS headlines, for example). Obviously any of that content on its own will be viewed by Google as duplicate and will suffer for it. However, one of the ways duplicate content is said to be detected is by a page being x% the same as another page, be it on your own site or someone else's.
In the case of our page, while 100% of the content is duplicate, the page is no more than 20% identical to any single other page, so would it technically be picked up as duplicate?
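To make the "x% the same as another page" idea concrete, here's a minimal sketch of one common way to estimate pairwise overlap between two pages: Jaccard similarity over word shingles. This is just an illustration of the concept, not a claim about how Google actually measures duplication; the function names and the shingle size `k` are my own choices.

```python
def shingles(text, k=5):
    """Break text into the set of overlapping k-word phrases (lowercased)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap_pct(page_a, page_b, k=5):
    """Jaccard similarity of two pages' shingle sets, as a percentage."""
    a, b = shingles(page_a, k), shingles(page_b, k)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)
```

Under a pairwise measure like this, a page stitched together from 5 different feeds really can score low against each individual source, even though every word on it is copied, which is exactly the loophole the question is probing.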
Hope that makes sense?
My reason for asking is that I want to pull the latest tweets, news and RSS from leading sites onto a site I am developing. Obviously the site will have its own content too, but I also want to pull in external content.
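For what it's worth, pulling RSS headlines is straightforward with just the standard library. A minimal sketch (the feed document here is whatever XML you fetch; in practice you'd download it with `urllib.request.urlopen` from the feed URL, which is a placeholder of your choosing):

```python
import xml.etree.ElementTree as ET

def headlines(rss_xml, limit=5):
    """Extract up to `limit` item titles from an RSS 2.0 document string."""
    root = ET.fromstring(rss_xml)
    return [item.findtext("title", default="").strip()
            for item in root.iter("item")][:limit]
```

Whether republishing those titles helps or hurts the site is the SEO question above; the mechanics of pulling them are the easy part.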
-
Depends on the ratio of unique content to duplicate content.
If you've got 1,000 unique words and 250 words from RSS feeds, then I'd say you'll be OK; others may disagree.
The term for this is scraping content; Google it and have a read.
-
Thanks for the reply. I'm more than aware of what it is, how it works and what it's called.
This is more a question of whether you can make content which is not considered duplicate by using only duplicate content. If Google judges duplication by how much of a page matches a single other page, then I think you can.
-
So you're asking whether you can mix up RSS data to try and make it look unique in Google's eyes and benefit from it.
It will be seen as duplicate, and when your RSS links point back to the source, I'm sure Google will work that out.