Questions
How does Google recognize original content?
Some believe that Google takes the code of your website into consideration. That would imply duplicate content only applies when multiple blogs are built from the same code with the same text, a tactic many people used with automated software. In my experience this is just a rumor: movie news blogs and websites tend to churn out identical news stories, including pictures, video, and text, and I have not seen any of these sites held back in their rankings.
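For what it's worth, the published research points the other way from the code theory: Google engineers described their near-duplicate detection for web crawling in terms of fingerprints of page text (simhash; Manku et al., WWW 2007), not page code. Here is a toy sketch of the general idea using word shingles and Jaccard similarity - the shingle size and threshold are arbitrary assumptions of mine, and production systems use far more scalable fingerprinting:

```python
# Toy sketch of text-based near-duplicate detection.
# Shingle size w=5 and the 0.9 cutoff are illustrative assumptions,
# not documented values; real systems use compact fingerprints (simhash).
import re

def shingles(text: str, w: int = 5) -> set:
    """Lower-case the text and split it into overlapping w-word shingles."""
    words = re.findall(r"\w+", text.lower())
    return {" ".join(words[i:i + w]) for i in range(len(words) - w + 1)}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity of the two documents' shingle sets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if (sa or sb) else 0.0

def looks_near_duplicate(a: str, b: str, threshold: float = 0.9) -> bool:
    """Flag pages whose text overlap exceeds the (hypothetical) cutoff."""
    return jaccard(a, b) >= threshold
```

Two pages republishing the same story would score near 1.0 on a measure like this no matter how differently their sites are coded, which is consistent with detection keying on content text rather than markup.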
Almost no organic traffic
Category pages can cause duplicate content, but major categories often have search value, so it's a trade-off. Typically, it makes more sense to go after duplicate URLs (like product options), search filters, sub-categories, and things like that. It depends a lot on the scope of the problem, though.

My gut reaction is that technical SEO isn't the core problem here. Ultimately, search traffic doesn't just happen these days. You need links and social mentions, and you need to actively market and promote yourself to start ranking for non-brand terms. There's no on-page trick for that. Something like schema markup can help your listings stand out (see the sketch after this answer), but it's not going to magically help you rank for terms you don't currently rank on.

Without understanding the site or industry, it's really tough to give advice on where to start, but trying to control how link equity flows through your site only makes sense when you've got a solid amount of link equity to work with. I wrote about this general issue a while back - you may find it useful: http://moz.com/blog/whats-better-on-page-seo-or-link-building
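To make the schema suggestion concrete: one common way to add structured data is a JSON-LD script in the page head. Below is a minimal sketch in Python, assuming a product page - the product values are placeholders I made up, though the schema.org types and property names (Product, Offer, AggregateRating) are real:

```python
# Minimal sketch: build schema.org Product markup as a JSON-LD script tag.
# All product details passed in below are hypothetical placeholders.
import json

def product_jsonld(name: str, price: str, currency: str,
                   rating: float, review_count: int) -> str:
    """Return a <script> tag carrying schema.org Product markup."""
    data = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": name,
        "offers": {
            "@type": "Offer",
            "price": price,
            "priceCurrency": currency,
        },
        "aggregateRating": {
            "@type": "AggregateRating",
            "ratingValue": rating,
            "reviewCount": review_count,
        },
    }
    return ('<script type="application/ld+json">\n'
            + json.dumps(data, indent=2)
            + '\n</script>')

print(product_jsonld("Example Widget", "19.99", "USD", 4.4, 128))
```

For the duplicate-URL side (product options, filter parameters), the usual fix is a rel="canonical" link element on each variant pointing at the main URL, so link equity consolidates instead of splitting across near-identical pages.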