Lesley,
Thanks for the response.
If we scripted the page so Google would ignore the content, I'm afraid we'd be in nearly the same boat we're in now. As in, we'd have no content on the page and wouldn't rank for anything.
While it would effectively "solve" the potential rotating content issues and penalties, we wouldn't have anything to rank for.
Gary,
Thanks for the helpful response!
1. How would we run into internal duplicate content issues? These 3 results (in full) would only be found on this specific page, they'd just be rotating.
I will say that the way these results pages are structured includes snippets of content that can be found on each result's individual page, e.g., a snippet of Frankenstein's plot will show on the results page, and once clicked, will show the full entry. So there's going to be some duplicate content. That shouldn't be a huge deal though?
2. That's exactly the reason I hate this. Even if Google didn't get pissed, we wouldn't have static content (keywords, longtails) to build authority and rank for.
Idea #1: I actually have this principle written down, but slightly different. If we had a link at the bottom of the results in Javascript to "shuffle" or "refresh" the content, the user would get the benefit, but since it's not a new page, Google couldn't crawl it. So the results would only randomize on command, but stick with the initial 3 on pageload.
I was also toying with the idea of locking 2 of the results and only shuffling the 3rd, that way there's some semblance of continuity to the indexing and we'd always be working towards that content ranking. Thoughts?
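To make that combined idea concrete, here's a rough sketch, purely illustrative (the element IDs, the result names, and the `pickThird` helper are all made up, not from any existing codebase). The first two results stay fixed so crawlers always index the same content; only the third slot rotates, and only on a user click, so the initial pageload is static:

```javascript
// pickThird chooses a random entry from the pool that isn't already shown.
function pickThird(lockedResults, pool) {
  const candidates = pool.filter((r) => !lockedResults.includes(r));
  return candidates[Math.floor(Math.random() * candidates.length)];
}

// Browser wiring (guarded so the file also runs outside a DOM):
if (typeof document !== "undefined") {
  const locked = ["Frankenstein", "Dracula"]; // always rendered server-side
  const pool = ["Frankenstein", "Dracula", "Carmilla", "The Mummy"];
  document.getElementById("shuffle-link").addEventListener("click", (e) => {
    e.preventDefault(); // plain same-page link, so no new URL to crawl
    document.getElementById("third-result").textContent =
      pickThird(locked, pool);
  });
}
```

Since the shuffle only fires on click and never changes the URL, the indexed version of the page is always the same three initial results.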
Are you saying with SCHEMA we can "hide" the additional/rotated results initially to the user, but Google sees it immediately? If so, please elaborate or send me a link since this is interesting!
Idea #2: The snippets actually lead/link to their static pages on their own URLs (this is the only duplicate content, I believe), so that's fine, but yes, we aren't concerned with the static pages ranking, only the grouped-together results.