You mean you want to get rid of them? Normally they are valuable to help Google index older blog posts of yours. If you want them not to show up in search results, you could set them to noindex, follow.
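As a sketch, the directive in question goes in the page's &lt;head&gt; (a generic markup fragment, not specific to your setup):

```html
<!-- Keep the page out of search results, but let crawlers follow its links -->
<meta name="robots" content="noindex, follow">
```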
Posts made by derderko
-
RE: Too many warnings on duplicate pages or on-page links (-ve ranking, I think)
-
RE: On site optimization - anchor text in body text
Hi Dan,
It's generally a good idea to identify a (unique) primary keyword for each of your pages. Am I understanding you right that you have a bunch of producer pages which you want to rank for the name of the producer?
Then I'd say the primary keyword is the name of the producer, and I would link to those pages with that primary keyword as the anchor text. This helps Google understand what the page is about. Slight variations of the keyword are fine, and a few "click here" anchors are totally natural in an anchor text distribution.
Just avoid the following: assume you were selling cars and you linked to /Porsche sometimes with the anchor "Porsche", but other times with "Ferrari". This confuses Google, and it can't tell whether it should rank the page for Porsche or Ferrari. A generic anchor like "see more cars", on the other hand, Google will simply recognize as off-topic and discount.
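To make the distinction concrete, a minimal sketch of the linking pattern (the /porsche URL and anchors are hypothetical):

```html
<!-- Consistent, on-topic anchors for the same target page: -->
<a href="/porsche">Porsche</a>
<a href="/porsche">Porsche sports cars</a> <!-- slight variation: fine -->
<a href="/porsche">click here</a>          <!-- generic: natural in the mix -->

<!-- Avoid: an off-topic anchor that muddies the page's topic -->
<a href="/porsche">Ferrari</a>
```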
HTH,
schuon -
Schema for Price Comparison Services - Good or Bad?
Hey guys,
I was just wondering what the whole schema.org markup means for people that run search engines (i.e. for a niche, certain products) or price comparison engines in general.
The intent behind schema.org is to help the engines better understand a page's content. Well, I guess such services don't necessarily want Google to understand that they're just another search engine (and thus risk getting thrown out of the index for polluting it with search result pages).
I see two possible approaches: either don't implement the markup at all, or implement it in a way that makes the site not look like an aggregator, e.g. by only marking up certain products that have unique text.
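For reference, a minimal sketch of what such markup looks like for a single product, using schema.org microdata (the product name and price here are made up):

```html
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Example Widget</span>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <meta itemprop="priceCurrency" content="EUR">
    <span itemprop="price">19.99</span>
  </div>
</div>
```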
Any thoughts? Does the SEOmoz team have any advice on that?
Best,
schuon -
RE: Mask links with JS that point to noindex'ed pages
Well, we just want to show fewer links to Google than to the user (the links shown to Google are still a subset of the links shown to users). The links we'd turn into JS links are those to less frequently applied search filters, which we don't index in order not to spam the search index.
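A rough sketch of what such a JS link could look like, assuming a hypothetical /search?filter=... URL scheme (note: if Google executes the script, it may still discover the URL):

```html
<!-- No href attribute, so this is not a crawlable link in the classic sense -->
<span class="js-link" data-target="/search?filter=color-red">red only</span>
<script>
  document.addEventListener('click', function (e) {
    var el = e.target.closest('.js-link');
    if (el) window.location.href = el.getAttribute('data-target');
  });
</script>
```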
Fortunately, even if Google is smart enough to decode the links, it wouldn't do any harm.
Thanks for your ideas though! Especially the site: query, which I had considered myself; it really takes ages until something is de-indexed (for us, using robots.txt sped it up by an order of magnitude).
-
Mask links with JS that point to noindex'ed pages
Hi,
In an effort to prepare our site for Panda, we dramatically reduced the number of pages that can be indexed (from 100k down to 4k). All the remaining pages are being equipped with unique and valuable content.
We still have the other pages around, since they represent searches with filter combinations which we deem less interesting to the majority of users (hence they are not indexed). So I am wondering if we should mask the links to these non-indexed pages with JS, so that link juice doesn't get lost to them. Currently the targeted pages are noindexed via "noindex, follow"; we might de-index them with robots.txt, though, if the site: query doesn't show improvements.
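A minimal robots.txt for that fallback, assuming the filter pages share a common path prefix (the /search/ path is hypothetical):

```
User-agent: *
Disallow: /search/
```

One caveat worth keeping in mind: once a URL is blocked in robots.txt, crawlers can no longer fetch it, so they also stop seeing the "noindex, follow" tag on those pages.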
Thanks,
Sebastian