Questions
Can I use a 410'd page again at a later time?
I've tested this a couple of times and each result has been: no. If you 410 a page, Google does a bloody good job of never bringing that URL back. I've changed the response code back from a number of 410s, socially shared the page, built natural links to it, even blasted links to it (for testing's sake, of course!) and Google has never reindexed it. It's both good and bad - good in the sense that Google really does treat the 410 response code as permanent, but bad in the sense that, if you make a mistake and want to reuse the URL, you're pretty much stuck. If you're going to use it, be absolutely 100% sure you won't see any use for that URL again (such as offering the product again in the future, etc.). If there is any doubt, leave it as a 404 or redirect it.
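If you want to audit how your retired URLs currently respond before committing to 410 versus 404, a quick check can help. Below is a minimal sketch, assuming Python with the requests library is available; the URLs are placeholders for your own.

```python
import requests

# Hypothetical list of retired URLs to audit; replace with your own.
urls = [
    "https://www.example.com/discontinued-product",
    "https://www.example.com/old-landing-page",
]

for url in urls:
    # allow_redirects=False so we see the raw status code, not the redirect target's.
    response = requests.head(url, allow_redirects=False, timeout=10)
    if response.status_code == 410:
        print(f"{url} -> 410 Gone (treated as permanent removal)")
    elif response.status_code == 404:
        print(f"{url} -> 404 Not Found (softer signal, easier to reverse later)")
    else:
        print(f"{url} -> {response.status_code}")
```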
Technical SEO Issues | TomRayner0
Penguin 2.1: How to recover?
Yeah, technically, you're supposed to make a good-faith effort to remove the links, but Google isn't clear on how many, how much effort, etc. This seems to hold true for Penguin in addition to manual penalties. Most of the Penguin recovery stories I've heard have involved pretty deep cuts, to be brutally honest.
White Hat / Black Hat SEO | Dr-Pete0
Do some sites get preference over others by Google just because? Grandfathered theory
Hello, I understand clearly what you mean, and I can say I was on the other side. I was among the first ranking in a smaller country for the most popular "blog"-related queries. The page was part of a more general website, was very solid, and solved most of the problems for users searching for those particular queries. Most people were clicking my search result and they were pretty satisfied by what they got there. The website was old school, the layout old school. I had an amazingly attractive title and meta description - I basically nailed it.

Then bigger brand names and huge websites were launched on the same queries, solving way more problems and dealing with the matter in a new way. But I was still first. Brands with PageRank 5 to 7 were competing with my page, which had no (or close to zero) PageRank. I did not even have a fraction of their links and authority. I even laughed, year after year, seeing that I ranked 1st above these big guys while they sat in 2nd, 3rd and so on. A lot of years passed and I was still first. It was really funny, and I tried to learn from it.

Then I decided to refresh and modify the layout, because it was old school. I had some problems with internal linking and the domain was down for a while. Then somebody hacked my server and I got some stuff injected there. I solved most of the problems - it was not easy - but when I got back I lost the top spot. There were a lot of changes, but the URL and the content of that particular page were exactly the same.

So, from personal experience I can tell you that things can change. I had the following going for me: I was the first to cover that area, a lot of users were clicking on my website (CTR from Webmaster Tools was amazingly high), and bounce rate was low. And I can tell you that one of those ranking factors talked about a lot on SEOmoz - "User Usage and Traffic/Query Data" - weighs way more than people think. At least from my experience.

Anyway, try to ask yourself the following: are the differences between your website and the old one significant? Do users see them in the search listing, and do they consider them of significant importance? If not, try to give them a ten times better reason to click on your website, and also give them what they want (sometimes the bounce rate has something to say about this, but not always). It may look like grandfathering, and it's really hard to dismiss or confirm. At first I thought about it the same way you do. However, it would first be very nice to answer the questions above honestly and from the user's point of view. Good luck!
White Hat / Black Hat SEO | zoicaremus0
Back Link Removal Companies: Which is the best?
You guys need to talk to Ryan Kent. He is the master of link penalty removal. I've sent him clients before. Here is his info: http://moz.com/community/users/312503 Notice he's #2 in all of Moz? Go with him.
White Hat / Black Hat SEO | Francisco_Meza0
Wordpress Canonical Tag Pointing to Same Page
There may be a plug-in placing rel=canonical tags on the page. Sometimes, when the plug-in field is left blank, a self-referencing rel=canonical tag is placed on the page. I wouldn't worry about it too much; it shouldn't harm your site. From: http://moz.com/blog/dispelling-a-persistent-rel-canonical-myth "Looking through Google's blog post on the subject, this isn't explicitly stated. However, you can see that even the example website, Wikia, employs this practice on the page Google points out. You can also see Googler Maile Ohye answering a comment on this: @Wade: Yes, it's absolutely okay to have a self-referential rel="canonical". It won't harm the system and additionally, by including a self-reference you better ensure that your mirrors have a rel="canonical" to you."
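If you want to confirm what the plug-in is actually outputting, you can check a page's canonical tag programmatically. This is a rough sketch assuming Python with the requests and BeautifulSoup libraries installed; the URL is a placeholder for one of your own posts.

```python
import requests
from bs4 import BeautifulSoup

def canonical_is_self_referential(url):
    """Fetch a page and report whether its rel=canonical points back to itself."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    tag = soup.find("link", rel="canonical")
    if tag is None or not tag.get("href"):
        return None  # no canonical tag found at all
    # Normalise trailing slashes so /page and /page/ compare equal.
    return tag["href"].rstrip("/") == url.rstrip("/")

# Hypothetical URL; substitute a page from your own WordPress install.
print(canonical_is_self_referential("https://www.example.com/sample-post/"))
```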
Search Engine Trends | Schwaab0
Thin Content Pages: Does adding more content really help?
Just saying what we did... We had a site that was hit by Panda. We had lots of very short news blurbs and some republished content from government agencies and academic institutions - much of that done at their request, for exposure to our visitors. Immediately after the hit, we noindex/followed or deleted/redirected the republished content. We also noindex/followed or deleted all of the short content. The site got out of Panda a few weeks later, with some traffic loss but nothing substantial.

As for improving short content, we have done a lot of that. We had lots of very short descriptions of two sentences plus one or two images that were getting nice amounts of traffic. We improved those to a few hundred words and two or three images (very time consuming, very expensive - a few hours per page). The rankings for short-tail queries went up nicely and there was a huge increase in long-tail traffic. We later started improving the few-hundred-word pages with two or three images to one to two thousand words plus four to eight images - even more time consuming, a day or two per page. Again, rankings and traffic went up nicely. Today, for each new article that I publish, I also make a huge improvement to a page that is a proven traffic getter but could be improved a lot.

For you: take a look at the traffic into those 2,700 old articles prior to your Panda problem. Some might not be worth much, but others might be golden. Then decide what to delete/redirect, what to noindex/follow, and what to improve (a rough triage sketch follows below). Then begin working. Good luck.
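One way to do that triage at scale is to bucket the old articles by their historical organic traffic. This is an illustrative sketch in Python, assuming you've exported a CSV from your analytics tool with "url" and "sessions" columns; the thresholds are hypothetical and should be tuned to your own traffic levels.

```python
import csv

# Illustrative thresholds - tune to your own site.
DELETE_BELOW = 5      # barely any visits: candidate for delete/redirect or noindex
IMPROVE_ABOVE = 100   # proven traffic getter: candidate for a content upgrade

buckets = {"delete_or_noindex": [], "review": [], "improve": []}

with open("old_articles_traffic.csv", newline="") as f:
    for row in csv.DictReader(f):
        sessions = int(row["sessions"])
        if sessions < DELETE_BELOW:
            buckets["delete_or_noindex"].append(row["url"])
        elif sessions >= IMPROVE_ABOVE:
            buckets["improve"].append(row["url"])
        else:
            buckets["review"].append(row["url"])

for bucket, urls in buckets.items():
    print(f"{bucket}: {len(urls)} pages")
```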
White Hat / Black Hat SEO | EGOL0
URL structure: 301 redirect or leave as is?
Completely agree - it will be a lot of work if you have a high number of pages, but it's definitely worth the effort in the long run!
White Hat / Black Hat SEO | stever9990
Sitemaps: HTML and/or XML?
If you have a large website with hundreds or thousands of pages, you can use your XML sitemap to prioritise which pages Google should see first. Your HTML sitemap should sit in the footer of your website, and it's important to have because it should increase the speed at which Google sees all the pages on your website. I always recommend having both XML and HTML sitemaps.
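To illustrate the prioritisation side, here is a minimal sketch of generating an XML sitemap with per-URL priority values, assuming Python's standard library only; the URLs and priority numbers are placeholders for your own.

```python
import xml.etree.ElementTree as ET

# Hypothetical (URL, priority) pairs - higher priority for the pages you most
# want crawled first.
pages = [
    ("https://www.example.com/", "1.0"),
    ("https://www.example.com/category/widgets/", "0.8"),
    ("https://www.example.com/widgets/blue-widget/", "0.5"),
]

urlset = ET.Element("urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9")
for loc, priority in pages:
    url = ET.SubElement(urlset, "url")
    ET.SubElement(url, "loc").text = loc
    ET.SubElement(url, "priority").text = priority

# Write the finished sitemap with an XML declaration.
ET.ElementTree(urlset).write("sitemap.xml", encoding="utf-8", xml_declaration=True)
```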
Intermediate & Advanced SEO | KarlBantleman0
Preserving URL Structure from OsCommerce to Magento
How many products does your store have? Magento has an option called "URL Rewrite Management" (Catalog > URL Rewrite Management). If you have a few products, you can change the URLs manually to match OsCommerce. If you have a lot of products, you should:

1 - Export the SKU and URL from OsCommerce to CSV (ask a PHP developer to make a script for you).
2 - Import the spreadsheet into Magento with the columns named SKU and url_key.

If your site is really big you will need a third-party Magento extension to import the categories - Magento does not have any native function to import categories.
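For step 2, a small script can turn the OsCommerce export into the file Magento's importer expects. This is a rough sketch in Python, assuming you've already dumped SKU / URL pairs from the OsCommerce database into oscommerce_urls.csv with columns "sku" and "url"; file names and the exact URL format are assumptions, not OsCommerce defaults.

```python
import csv

# Convert the exported OsCommerce mapping into a Magento import file with the
# two columns named above: SKU and url_key.
with open("oscommerce_urls.csv", newline="") as src, \
     open("magento_url_keys.csv", "w", newline="") as dst:
    writer = csv.DictWriter(dst, fieldnames=["sku", "url_key"])
    writer.writeheader()
    for row in csv.DictReader(src):
        # Strip leading slashes and any extension so only the URL key remains,
        # e.g. "/blue-widget-p-42.html" -> "blue-widget-p-42".
        url_key = row["url"].strip("/").rsplit(".", 1)[0]
        writer.writerow({"sku": row["sku"], "url_key": url_key})
```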
Intermediate & Advanced SEO | Felip30
Pagination for Search Results Pages: Noindex/Follow, Rel=Canonical, Ajax Best Option?
Firstly, read http://searchengineland.com/the-latest-greatest-on-seo-pagination-114284 for the basics on addressing this problem. It was noted in the other response, but it's key that you approach it this way. It's a common problem but easily fixable. On your other note, robots read everything on the page, content included. They may not index any of it (considering it's on a NOINDEX page), but they absolutely read and crawl everything. And yes, naturally they follow the links on a FOLLOW page. They won't on a NOFOLLOW and will look elsewhere for links to follow. Hope this answered your question. Let me know if not.
White Hat / Black Hat SEO | Ikusa0
Noindexing Thin Content Pages: Good or Bad?
Sometimes you need to leave the crawl path open to Googlebot so it can get around the site. A specific example that may be relevant to you is pagination. If you have 100 products and are only showing 10 on the first page, Google will not be able to reach the other 90 product pages as easily if you block paginated pages in the robots.txt. Better options in such a case might be a robots noindex,follow meta tag, rel next/prev tags, or a "view all" canonical page.

If these pages aren't important to the crawlability of the site, such as internal search results, you could block them in the robots.txt file with little or no issue, and it would help to get them out of the index. If they aren't useful for spiders or users, or anything else, then yes, you can and should probably let them 404 rather than blocking them.

Yes, I do like to leave the blocked or removed URLs in the sitemap for just a little while, to ensure Googlebot revisits them and sees the noindex tag, 404 error code, 301 redirect, or whatever it is they need to see in order to update their index. They'll get there on their own eventually, but I find it faster to send them to the pages myself. Once Googlebot visits these URLs and updates the index, you should remove them from your sitemaps (a rough audit sketch follows below).
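To spot which sitemap entries are ready to prune, you can scan the sitemap and flag URLs that now redirect, return 404/410, or carry a noindex meta tag. This is a rough sketch, assuming Python with the requests library and a publicly reachable sitemap.xml; the domain is a placeholder.

```python
import requests
import xml.etree.ElementTree as ET

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

# Pull every <loc> out of the live sitemap.
sitemap = requests.get("https://www.example.com/sitemap.xml", timeout=10).text
locs = [el.text for el in ET.fromstring(sitemap).findall(".//sm:loc", NS)]

for loc in locs:
    resp = requests.get(loc, timeout=10, allow_redirects=False)
    # Crude substring check for a robots noindex meta tag; a real audit would
    # parse the HTML properly.
    html = resp.text.lower()
    noindexed = '<meta name="robots"' in html and "noindex" in html
    if resp.status_code in (301, 404, 410) or noindexed:
        print(f"candidate to prune from sitemap: {loc} ({resp.status_code})")
```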
White Hat / Black Hat SEO | Everett0
Dofollow Links on Press Releases: Good or Bad?
Well said as usual, Takeshi. Hell, I'm thinkin' about skipping the PR altogether now and just sending this article out to some online publications instead... we'll see.
Search Engine Trends | jesse-landry0
Using a site's custom code for multiple websites: good or bad?
+1 to Davanur. We use the same frameworks and core blocks of code across several sites. When the functionality is identical, it clearly makes development a little easier because you don't have to reinvent the wheel each and every time. However, it is only the content that matters in search engines' eyes. Otherwise, every other person on here with a WordPress or Joomla theme would get crushed by penalties, because a lot of those themes use the same frameworks, like Gantry, Warp, Zend, etc.
Search Engine Trends | AaronHenry0
NoIndexing Massive Pages all at once: Good or bad?
If you're not currently suffering any ill effects, I would probably ease into it, just because any large-scale change can theoretically cause Google to re-evaluate a site. In general, though, getting these results pages and tag pages out of the index is probably a good thing. Just a warning that this almost never goes as planned, and it can take months to fully kick in - Google takes their sweet time de-indexing pages.

You might want to start with the tag pages, where a straight NOINDEX is probably a solid bet. After that, you could try rel=prev/next on the search pagination and/or canonical search filters. That would keep your core search pages indexed but get rid of the really thin stuff. There's no one-size-fits-all solution, but taking it in stages and using a couple of different methods targeted to the specific type of content may be a good bet.

Whatever you do, log everything and track the impact daily (a minimal logging sketch follows below). The more you know, the better off you'll be if anything goes wrong.
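For the "log everything" part, even a simple dated changelog of which URLs were noindexed in which stage makes it much easier to line changes up against traffic and indexation reports later. A minimal sketch in Python, with file name and fields as illustrative assumptions:

```python
import csv
from datetime import date

def log_noindexed_batch(urls, change_type, log_path="noindex_changelog.csv"):
    """Append today's batch of de-indexing changes to a running CSV log."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for url in urls:
            writer.writerow([date.today().isoformat(), change_type, url])

# Example: first stage targets tag pages only.
log_noindexed_batch(
    ["https://www.example.com/tag/widgets/", "https://www.example.com/tag/gadgets/"],
    change_type="meta noindex",
)
```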
Intermediate & Advanced SEO | Dr-Pete0
Best way to remove low quality paginated search pages
According to this article, http://www.seroundtable.com/farmer-headers-13111.html, it sounds like I should be 404ing these pages, since I never plan to rewrite them and I want them removed from my site and from the index. According to this article, http://www.seroundtable.com/google-robotstxt-advice-12759.html, they believe you shouldn't use robots.txt. Does anyone know the best option in this situation? Should I just 404 a handful of the 40k pagination pages every week/month until they are all 404'd?
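If you do go the staged-404 route, splitting the 40k URLs into fixed-size batches makes it easy to remove and monitor one batch at a time. A small sketch in Python, assuming the URLs sit one per line in a hypothetical paginated_urls.txt; the batch size is illustrative.

```python
BATCH_SIZE = 2000  # illustrative: roughly 20 weekly batches for 40k URLs

with open("paginated_urls.txt") as f:
    urls = [line.strip() for line in f if line.strip()]

# Write each batch to its own file so it can be 404'd and tracked separately.
for i in range(0, len(urls), BATCH_SIZE):
    batch_number = i // BATCH_SIZE + 1
    batch = urls[i:i + BATCH_SIZE]
    with open(f"404_batch_{batch_number:02d}.txt", "w") as out:
        out.write("\n".join(batch) + "\n")
    print(f"batch {batch_number}: {len(batch)} URLs")
```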
Intermediate & Advanced SEO | WebServiceConsulting.com0