Category: Intermediate & Advanced SEO
Looking to level up your SEO techniques? Chat through more advanced approaches.
Unexplained Drop In Ranking and Traffic-HELP!
Hi Kingalan - definitely a frustrating experience. Let me see if I can provide some thoughts on each of your questions:

#1 - Yes, it's certainly possible that links from bad sources could have been propping up your rankings, and that by disavowing them, you've lost rankings/traffic in the short term. However, I'd agree with your SEO consultants that pain now from this action is better than the potential penalty/banning you might experience in the future. Google has been very aggressive with penalties, but it hasn't been wholly consistent, which makes bad links a source of short-term opportunity and long-term cataclysm. If removing these links is what hurt you, I'd argue it was the right choice to make, and earning some new, editorial, high-quality links is the next step.

#2 - My guess would be that the extra indexation of a few hundred pages has nothing to do with the rankings/traffic changes. I've seen Google index thousands or even tens of thousands of extra pages without much problem - a few hundred are very unlikely to be the cause. That said, I'm not sure removal would be my first step - I might think about how to canonicalize these back to pages you do want indexed (if you want that content discoverable). If you really don't want the content findable in Google, then meta robots noindex might be worthwhile.

#3 - It is possible that thin content is to blame here. I agree it's hard to scale quality content, but keeping a few hundred pages up to date and incredibly useful for visitors/searchers is exactly what Google wants to see. I'd be constantly asking: is my page the most valuable one in the search results? Does it provide a better, more useful experience than anything else in the top 10? If the answer is no, then you don't really deserve to rank (don't worry, many sites don't), and extra effort here may go a long way.

One way to do this might be to ask those who submit listings to give you more content (or to get agents/interns/writers/contractors to bolster each listing).

p.s. You may wish to check out http://moz.com/blog/why-you-might-be-losing-rankings-to-pages-with-fewer-links-worse-targeting-and-poor-content

Wish you all the best, Rand
| randfish
Ticket Industry E-commerce Duplicate Content Question
Are you asking about why Moz would or would not consider them duplicate, or why Google would or would not consider them duplicate? We don't have knowledge of exactly what factors Google uses to detect duplicate content. We approximate the best we can, and give a notice when we see things that look substantially alike. I did get the two pages in the original question to load. There are only a couple of words that are different, and the rest of the content is identical, including the related events. As a side note, I don't know that people looking for MLB tickets would consider "The Mu Gamma Gamma Chapter Seersucker and Sundress Summer Affair" to be a related event.
| KeriMorgret
Should I delete 'data hightlighter' mark-up in webmaster tools after added schema.org mark-up?
Ah, ok. My mistake, I didn't drill down enough. One thing I did notice: you have authorship markup on those product pages as well. That should be removed. According to Google's guidelines, for product pages that are not specifically written/constructed by an "author," that markup should not be there. Rel="publisher" is the only necessary markup for non-blog or article content. The schema markup you've implemented looks good in the page source, and checks out as being correctly implemented (without any duplicates) using Google's Structured Data Testing Tool (found in Google Webmaster Tools). It appears the data highlighter markup is not causing duplicates. I'd recommend double-checking all the product pages you've added schema to that you originally had from the data highlighter markup. There may be duplicates, there may not. To be honest, I've always gone right to schema.org, but checking for duplicates should be the only thing you should have to worry about. Good luck!
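p.s. For anyone comparing against their own markup, a minimal Product snippet in JSON-LD (all values hypothetical, shown only as a sketch - the same properties apply if you prefer microdata) that the Structured Data Testing Tool can validate looks roughly like this:

```html
<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "description": "A hypothetical product used for illustration.",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD",
    "availability": "http://schema.org/InStock"
  }
}
</script>
```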
| BradyDCallahan
Moving blog to a subdomain, how can I help it rank?
That's great, thanks for the info! Do you know of any resources off-hand about "generally not [listing] more than two of these subdomains in the search results" for more detail?
| DigitalMoz
Is WordPress a Blog in the eyes of Google?
Ahh ok I follow you! Well, we haven't run any actual experiments using 2 online shops like the example you mentioned (one using Woo & the other using another platform). But we have taken existing OpenCart & Magento sites that were already ranking for competitive terms and migrated them over to WooCommerce. The rankings dipped for a short period but recovered and have since superseded the rankings they had before. I was skeptical of WooCommerce at first but it has become our preferred solution for eCommerce. I'm not saying the others are bad or anything. To answer your question, I don't think it would make a difference if you used one platform over another if they were both structured the same and had similar social engagement, etc. As long as you keep things RELEVANT and keep the site geared around your target audience needs, you should be fine either way. Hope this helps!!
| Bryan_Loconto
Local Google Place Ranking loss
Thanks, Pigeon could really be the problem, since the ranking loss happened at the same time.
| remkoallertz
Can Googlebots read canonical tags on pages with javascript redirects?
I think it's generally understood that bots read page code - they don't "execute" it the way a browser does - so the canonical is read regardless of the other code on the page. This wouldn't be the case for 301s that trigger server-side: the bot would then never reach the page to read the canonical. HTH
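To illustrate with a hypothetical page: the canonical below sits in the fetched source, so a crawler can read it whether or not the redirect script ever runs.

```html
<head>
  <!-- Present in the raw HTML, so readable by bots that don't execute JS -->
  <link rel="canonical" href="https://www.example.com/preferred-page/" />
  <script>
    // Client-side redirect: browsers execute this, but it doesn't
    // stop a crawler from having already read the tag above
    window.location.replace("https://www.example.com/preferred-page/");
  </script>
</head>
```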
| Jason_S
How to deal with URLs and tabbed content
You can use hash fragments and query strings to display the content; just be sure to specify the URL parameters in Webmaster Tools so you don't have multiple URLs getting indexed.
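To sketch the idea (URLs are hypothetical): a fragment-driven tab never creates a new URL for Google, while a query-string version should have its parameter declared in Webmaster Tools.

```html
<!-- Fragment-based tab: "#reviews" is ignored by crawlers,
     so only one URL gets indexed -->
<a href="/product/widget/#reviews">Reviews</a>

<!-- Query-string tab: declare the "tab" parameter under
     Crawl > URL Parameters in Webmaster Tools so these
     variants aren't indexed separately -->
<a href="/product/widget/?tab=reviews">Reviews</a>
```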
| David-Kley
<aside> Tag Use
I have not seen any guidelines laid out. It is important to note that <aside> is an HTML5 element, and as such it's highly likely every crawler will handle it differently. The purpose, to the best of my understanding, is to tell crawlers "this content is not exactly what my page is about," giving site owners a place to advertise or cross-link similar but necessarily different content. Which, if I understand it correctly, gives every crawler/bot the perfect way to weight the <aside> tag differently in their algorithms. Maybe Dr. Pete will run a test on Moz for us. Here is a good little read on the aside tag: http://www.html-5-tutorial.com/aside-element.htm
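A quick sketch of the intended usage, matching the spec's description of tangential content (links are hypothetical):

```html
<article>
  <h1>Main Topic of the Page</h1>
  <p>Primary content the page should rank for.</p>

  <aside>
    <!-- Tangentially related: cross-links, ads, "see also" material -->
    <a href="/related-topic/">Read about a related topic</a>
  </aside>
</article>
```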
| donford
My site has a loft of leftover content that's irrelevant to the main business -- what should I do with it?
The end goal should be to leverage as much benefit from the pages as you can for your existing business direction without confusing or tricking your visitors. The approach to your issue might depend on what type of content this is. For example, if your content consists of well written articles that are relevant to the visitors that are still reaching these pages, you might try and leverage that traffic through some well placed call-outs or advertisements for your new service/products/blog. The visitor gets the information they searched for, and you have a small potential to get some benefit from your past work. If your content consists of eCommerce pages for outdated or discontinued products and you have similar or replacement products, 301 these pages to your newer relevant product. Consider explaining to the user why they have been sent to a page containing a different product and that the new product is the replacement for the older model. If you don't have replacement products, and your pages aren't useful content, then you have to ask yourself why you care about the pages getting a lot of traffic. You are hesitant to cut it off because hey, people are visiting the pages and that helps your site overall right? Not if it's bringing your site traffic that doesn't convert. If you're trying to optimize your site for a completely new set of terms, the old pages shouldn't hurt you too much as long as you are handling them correctly. Google doesn't like it when you try to trick it into thinking that a page/site is about subjects that it's not really about. You don't mention anything about the new business but I am curious, if it is such a different direction, why not a new website altogether?
| Jason_S
301: Delete old page, or keep?
That was my thought! Just double-checking. And yeah, that occurred to me this morning. Especially since I am using a CMS, there are so many extra URLs from the tags!!!
| HashtagHustler
No-index pages with duplicate content?
We recommend to such clients that they apply the robots noindex,follow meta tag on the duplicated pages until they get rewritten. We aim for 20% of all products on the site to be completely unique in content, and indexable. The other 80% can be rewritten gradually over time and released back into the index as they are rewritten. So to answer your question: Yes, I think your plan is perfectly acceptable, and it is what I would do myself if I were in the same situation.
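For anyone implementing this, the tag in question is a one-liner in the <head> of each duplicated page (removed once the page is rewritten and ready to be indexed again):

```html
<!-- Keeps the page out of the index while still letting
     crawlers follow its links and pass PageRank -->
<meta name="robots" content="noindex,follow" />
```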
| Everett
Magento Hidden Products & Google Not Found Errors
I also have this same issue.... looking for a solution.
| maartenvr
Why isn't my uneven link flow among index pages causing uneven search traffic?
Thanks Everett, I appreciate it!
| GilReich
Duplicate Title Tags Generating from Previous Owner's Content
If you remove the URLs using the tool, they will stop ranking as soon as Google tries to crawl the pages again. This may take a little while depending on how many pages currently link to the pages you want removed. Unfortunately, there is no way to get them removed with immediate effect, though it is unlikely that a few duplicate title tags will have a huge effect on rankings.
| Jamie.Stevens
Why does old "Free" site ranks better than new "Optimized" site?
Hi Kingof5 - I know, it is bizarre. We inherited the new site and are now tasked with the challenge of cleaning everything up. I don't know the background on why redirects weren't put in place but that's what I'm working toward now. Wish me luck.
| WhatUpHud
HTTPS pages - To meta no-index or not to meta no-index?
Hi Jamie,

If you don't need the http version accessible and want to force https, you could simply redirect all traffic to the secure site with a 301, transferring all your PageRank to the main site.

If you need both versions of the site accessible - for instance, if you only need https for logged-in users - and you only want one version to appear in SERPs, the best option is to use a canonical tag to consolidate all that SEO juice into the version you wish to rank.

If there are only a few secure pages with links to other non-secure pages, then meta robots noindex,follow would work well, since the SEO juice will flow through those noindexed pages and into the rest of your site. But if the whole site is duplicated on both versions, this could be a big mistake. Noindexing an entire https version would be a bad move even if you were using noindex,follow, since your internal linking will be to the secure pages. Even though PageRank is passed through those pages, it will eventually come to a dead end or leave through an external link. With the canonical tag, any links pointing to your secure version will pass their SEO juice to the non-secure site, rather than being lost in a noindexed site where it has nowhere to go.

Have a little read of this interview with Matt Cutts from a few years back for further clarification - it has a good quote about how PR flows through noindexed, followed pages: http://www.stonetemple.com/articles/interview-matt-cutts.shtml

Matt Cutts: A NoIndex page can accumulate PageRank, because the links are still followed outwards from a NoIndex page.
Eric Enge: So, it can accumulate and pass PageRank.
Matt Cutts: Right, and it will still accumulate PageRank, but it won't be showing in our Index. So, I wouldn't make a NoIndex page that itself is a dead end. You can make a NoIndex page that has links to lots of other pages.
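For illustration, the canonical approach described above is a single tag in the <head> of each secure page (example.com is a placeholder):

```html
<!-- On https://www.example.com/some-page/, consolidating signals
     into the version you want to appear in SERPs -->
<link rel="canonical" href="http://www.example.com/some-page/" />
```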
So it'll be different depending on your circumstances, but if you're in doubt, the canonical tag is your best bet, as you're only consolidating those pages in Google's eyes. If those pages perform well and you noindex them without sending that PR somewhere useful, you could be throwing away all that benefit. Hope that helps, Tom
| TomVolpe
Image URL Change Catastrophe
The drop also appears in Webmaster Tools, including a commensurate drop in Impressions.
| wantering