Category: Intermediate & Advanced SEO
Looking to level up your SEO techniques? Chat through more advanced approaches.
-
Did I get hit with a Panda update?
It isn't like we have no traffic; we still get thousands of organic visits a day. It just isn't as much as it used to be. We are the largest in our industry, not near the size of Amazon or eBay, but still pretty large, so we have customers who sell exclusively through us. Most of our sellers are small businesses and don't have in-house technical teams, so it is easier for them to work with us. I understand content marketing and we are starting to do it, but if we have been hit with Panda I don't know if it matters.
| EcommerceSite0 -
Home Page Authority
It's best practice to have only one home page variation. Otherwise Google can't tell which is your main page, and you can also run into duplicate content issues. If you haven't already, I would 301 redirect the /default.asp pages to the root domain. As for your original question, it does seem a bit odd that they have different scores, but I would imagine it's because your root domain has more backlinks pointing to it than the /default.asp pages do.
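Since /default.asp suggests the site runs on IIS, a minimal sketch of that redirect in web.config, assuming the IIS URL Rewrite module is installed (the rule name is arbitrary):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301 any request for /default.asp to the root URL -->
        <rule name="RedirectDefaultAsp" stopProcessing="true">
          <match url="^default\.asp$" />
          <action type="Redirect" url="/" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```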
| O2C0 -
How to deal with everscrolling pages?
Yes, the pages are built just as described in Google's article, and next/prev doesn't make sense because the articles are not related. Canonicals are in place and point to the category or tag page. The problem is that Google keeps spidering those pages and returns 404 or 500 errors when there are changes or the server is busy. So we agree; I'll put noindex/nofollow on the "load more" button. Thanks Dirk!
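For reference, the two pieces of that fix look something like this (the URL pattern is hypothetical). Note that noindex is a page-level directive on the paginated URLs themselves, while rel="nofollow" applies to the individual "load more" link:

```html
<!-- in the <head> of the paginated target pages, e.g. /articles?page=2 -->
<meta name="robots" content="noindex, nofollow" />

<!-- the "load more" trigger on the main page -->
<a href="/articles?page=2" rel="nofollow" class="load-more">Load more</a>
```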
| corusent1 -
What happens to 301 redirect if the site taken down?
Yes, the value passed by the 301 would be lost if the site is taken down. Yes, you should keep the redirect active.
| TheeDigital0 -
Advanced: SEO best practice for a large forum to minimise risk...?
Hi Seomvi - yes, definitely a challenging problem, especially since you're thinking preventatively rather than reactively (which is very wise!). My advice would be to consider creating some form of threshold for forum content before you expose it to Google. For example, you could have a litmus test that says: if a forum thread has fewer than 500 words or fewer than 2 unique replies, apply a META NAME="ROBOTS" CONTENT="NOINDEX, FOLLOW" tag to the page header. In that fashion, you keep algorithms like Panda from perceiving your forum as having lots of thin-content, low-value pages. For PR flow and crawl budget, I'd generally worry less; Google's gotten very adept at identifying forums, crawling them effectively, and understanding how to handle that type of content and link structure. That said, you might try using rel=prev/next to help with Google's crawling. Wish you all the best!
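As a concrete sketch, the litmus test above would conditionally emit this tag in the head of any thread page that fails the quality bar (the threshold values are the examples from above, not fixed rules):

```html
<!-- emitted only when the thread fails the threshold,
     e.g. under 500 words or fewer than 2 unique replies -->
<meta name="robots" content="noindex, follow" />
```

Once a thread grows past the threshold, the template simply stops emitting the tag and the page becomes indexable on the next crawl.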
| randfish1 -
Content Cannibalism Question with example
Google will frequently rank two pages from the same site in the same SERP if they feel that both pages serve the user intent of the query. This often happens, as with these two pages, when the two pages are on the same topic but answer slightly different questions, either of which could be what the user is really asking. In your example, the two pages Google is serving up answer closely related but slightly different questions: "What is VVS diamond clarity" and "What is the difference between VS and VVS diamond clarity." It might be advisable for this site to combine the two pages if, for example, the wrong page were ranking for the query, or one page were getting all the traffic while the other got none. Another solution would be to make them more different from each other, rather than tackling two long-tail variations on the same overall topic. I would not recommend deliberately creating two pages on long-tail variations of the same topic to try to lock down two spots in a SERP; your time would likely be better spent researching what specific long-tail topics people are searching on and creating content to serve those users' needs. Umar does have a good point that a SERP with two results from the same domain often presents an opportunity to take one of those spots.
| RuthBurrReedy1 -
Worth Improving HTML Sort Order?
Thanks for your feedback! The site's code already uses CSS. What our developer is suggesting is to move the text toward the top of the source code to make it easier for Google to index. The amount of code will not be reduced (the code-to-text ratio will not change), so this will not speed up the site, but it will hopefully allow Google to index the text more efficiently. Do you think this change may yield an improvement in ranking? Thanks, Alan
| Kingalan10 -
Re-using content
This makes me think of a picture of a snake eating its own tail. You buy a domain, build all this great content, and then get links to that content; then you sell the domain but take all the content. You just took away the thing that built all the traffic and links, i.e. what made the domain valuable. If you still own all the content, sure, you can put it on a new domain, but you are basically starting over in building links to it, as all the previous links went to the old domain. It is kind of a lose/lose situation for both parties, unless you do not mind building from scratch. If you are going through with this, you need to do the following. Make sure that the new owner of the domain you just sold has agreed not to republish your content. Otherwise they have the upper hand, and when you republish, Google will think your content is the duplicate. I agree with Kate: if you can, go ahead and 410 the content now on the domain you sold (but I assume still control) and request removal of it all through Search Console. Yes, this will make the domain less valuable for the new owner, but you are already going to do that by taking all the content. Good luck!
| CleverPhD0 -
How To Implement Pagination Properly? Important and Urgent!
Hello SEO32, I apologize for the delayed response. There are several good questions here. They're also complicated questions, which don't always have a single "correct" answer. So much revolves around the specific situation, and without seeing your website it is difficult to say what is best for you. Also, much of what we think we know about this kind of stuff is based either on what Google tells us (which isn't always the truth) or on what we've observed and deduced from our own experiences (which aren't always the same). True "testing" of this stuff one variable at a time isn't always possible, so we rely on best practices and our own experience. That said, I will attempt to answer your questions with what I would probably do in most situations, including links to more information when possible. Do we implement a self-referencing canonical URL on the main page and each paginated page? Here's what Rand says, and he's probably seen way more data than I have: "Whatever you do, DO NOT: Put a rel=canonical directive on paginated results pointing back to the top page in an attempt to flow link juice to that URL. You'll either misdirect the engines into thinking you have only a single page of results or convince them that your directives aren't worth following (as they find clearly unique content on those pages). Add nofollow to the paginated links on the results pages. This tells the engines not to flow link juice/votes/authority down into the results pages that desperately need those votes to help them get indexed and pass value to the deeper pages. Create a conditional redirect so that when search engines request paginated results, they 301 redirect or meta refresh back to the top page of results." Keep in mind that post is from 2010, and I think it predates Google saying a "View All" canonical was their preference.
I have seen plenty of sites do well ranking the canonical category page, and with indexing most of the product pages, while all paginated pages had a rel canonical that referenced the first page in the series (i.e. .com/category/ or .com/category1/category2/). It probably helps that they had good XML sitemaps for product pages, plenty of internal linking, unique content on category pages, etc. I have also seen sites do well using rel next/prev without rel canonical, or rel next/prev with self-referencing canonicals on paginated category pages. I think where you run into problems is when you also allow the facet/filter/sort versions to have self-referencing rel canonical tags. Here is what I advise in most cases: Use rel next/prev (not because I think it works, but because Google says to and I don't think it hurts) along with self-referencing rel canonical tags, and "follow,noindex" robots meta tags on paginated pages. Always include a followable link to the first page in the series from every subsequent page. For example: « previous | first | 1 ... 25 | 26 | 27 ... | last » I recommend always having a first-page and last-page link. The first is obvious because it means PageRank is going to flow into it from every other page in the set, giving it the most internal links of all. The last is more of a crawlability and usability thing. For users it helps us figure out how much further we have to go, and it does the same thing for search engines: instead of blindly following a path that may or may not have an end, a message is sent telling the spider how much further it has to go. I don't know if Google takes advantage of that signal or not, but it just makes sense to include it. If you want to get fancy, you can try making the 'last' link Flash or JavaScript or something so it doesn't pass (as much?) PageRank.
The category root pages usually have links from site-wide navigation, unlike the paginated versions, which further establishes the root as the page that should rank highest. Make sure the first page in each series is indexable and has content that does not appear on the paginated versions. Also, make sure that ?page=1 doesn't have a self-referencing canonical tag, but references the root page for that series (e.g. /category1/category2/). All subsequent variations (e.g. color, size) should rel canonical back to their root page. For example, /category1/category2/?page=2&size=s&color=blue would have the following URL in the rel canonical tag: /category1/category2/?page=2, which happens to be followable but not indexable, and has a self-referencing rel canonical tag. In this way you give search engines a strong signal about which URL in the whole set is the strongest (i.e. /category1/category2/) because it is indexable, has its own content, has the most internal and external links, is the simplest version of this URL pattern, and is at the root of the directory. You're telling search engines which page is next in the series, which page is first, and which page is last. Google usually does an awesome job figuring it out from there, though there are always exceptions. Do we implement a noindex/follow meta robots tag on each paginated page? I would. Consider this from Google's perspective, or from that of a searcher. Someone types "Blue Flower Dress" into Google. Is the best page to return a deep category page full of blue dresses, one of which happens to have flowers? Or would it be the Blue Flower Dress product page? I can't think of any reason why I would want to land on page 3, where what I'm looking for is listed among dozens of other things, when I could just go straight to the thing I'm looking for.
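Putting the recommendations above together, the head of a paginated URL might look like this; a sketch using the hypothetical /category1/category2/ URL pattern from above, showing page 2 of the series:

```html
<!-- <head> of /category1/category2/?page=2 -->
<link rel="prev" href="/category1/category2/" />
<link rel="next" href="/category1/category2/?page=3" />
<link rel="canonical" href="/category1/category2/?page=2" />
<meta name="robots" content="noindex, follow" />
```

The root page /category1/category2/ itself would carry only a self-referencing canonical and no noindex, which is what makes it the one indexable URL in the set.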
Likewise, if someone searches for "Blue Dresses", is the best page /dresses/blue/?page=3 (a paginated page in the Blue Dresses category) or /dresses/blue/ (the very first page of the Blue Dresses category), which also has useful content about blue dresses? Long story short, when it comes to transactional eCommerce queries, searchers are usually looking for either a product page or the first page of a specific category or sub-category, or sometimes the home page. Therefore, I don't see any reason to allow paginated URLs to be indexable in most cases. Non-transactional eCommerce content is different (e.g. buying guides, comparison charts, reviews...), but I still wouldn't allow paginated pages to be indexed in most cases. Slightly off topic: filters/facets/sorts. Perhaps the category is "casual dresses" and "blue" is specified in the "color" attribute. In this case, would the best page be /dresses/casual/?color=blue, /dresses/casual/, or /dresses/casual/?color=blue&page=4 for someone who Googled "blue dresses"? The one I'd prefer as a searcher is /dresses/casual/?color=blue. Here again, as with internal search results, there is an opportunity to use real data to inform your decision. Pay attention to the facet/filter/sort URLs most accessed by shoppers and consider turning those into category or collections pages with their own URL pattern (e.g. /dresses/casual/blue/). One example I come across all the time is when "Brand" is a filter instead of its own limb in the category structure. If people are shopping by brand, as they do with most consumer products, then you should have a brand subcategory under each major top-level category. If I search for Levi Jeans, Google doesn't want to send me to a "pants" page where I have to set a filter to see only Levis; I should go to pants/brand/levi/. If I Google Chefmate Pots, I want to see cookware/pots/brands/chefmate so I don't have to set a filter after I get there.
This doesn't mean all filter pages should be turned into category pages, either. Use your best judgment based on the pages most of your users are accessing from the navigation and filters. Do we include the canonical URL for each paginated page in the sitemap if we do not add the meta robots tag? I would add the robots meta tag. Please let me know if I've misunderstood the question. We have a view-all page but will not be using it due to page load limitations. What do we do with the view-all URL? Do we add meta robots to it? I would add a "noindex, follow" robots meta tag, and would also use the canonical page's URL (e.g. /category1/category2/) in the rel canonical tag. For website search results pages containing pagination, should we just put a noindex/follow meta robots tag on them? This is one of those situations where crawl budget can be eaten up by a potentially infinite number of pages. I would consider blocking the internal search result URLs in the robots.txt file. They are of no use to Google, who considers a search engine returning search results that link to more search results somewhere else a bad user experience. This is also what Google recommends in their Webmaster Guidelines: "Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don't add much value for users coming from search engines." However, I would also make use of those pages internally: rather than relying on a search result page for things people often look for, track what is being searched for and create static, indexable pages. For example, try "Collections" pages on eCommerce sites, as well as FAQ pages, or "Industries" or "Use Case"-type pages on lead generation sites. This is a much better user experience for someone arriving on that page from a search engine. We have separate mobile URLs that also contain pagination. Do we need to consider these pages as a separate pagination project?
We already canonical all the mobile URLs to the main page of the desktop version. I think you should treat it as a separate project if that's the way you're handling mobile. Here is a post I did on mobile best practices; it covers some other options. I would also add a rel="alternate" tag in the HTML header of the desktop page, which alerts search engines to the corresponding mobile URL and helps define the relationship between the two pages. The bottom line for me is to always think about what would be the best experience for someone searching Google for something, and to use all of the various technical options to ensure that is the page I'm telling Google to rank for that query, or those types of queries. The 'best practice' changes depending on the situation. I hope others will join the discussion with their own experiences and findings.
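As a sketch of the desktop/mobile annotations described above, using Google's documented pattern for separate mobile URLs (example.com and the media query breakpoint are placeholders):

```html
<!-- on the desktop page -->
<link rel="alternate" media="only screen and (max-width: 640px)"
      href="http://m.example.com/category1/category2/" />

<!-- on the corresponding mobile page -->
<link rel="canonical" href="http://www.example.com/category1/category2/" />
```

The pairing works both ways: the alternate tag points search engines from desktop to mobile, while the canonical consolidates signals from the mobile URL back to the desktop one.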
| Everett0 -
Impact of May 2015 quality update and July Panda update on specialty brands or niche retail
Unfortunately, this is an incredibly complex situation (in many cases) with no easy answer. Unlike a penalty or typical Panda update, this sounds more like a signal change favoring one type of site over another (one set of signals over another). I'm not going to say "big brands", because that carries a lot of assumptions and baggage, but there are certainly signals that tend to be correlated with more powerful brands. If Google really just decided to change their preference, there's not a lot to be done. You may have done nothing wrong, per se, and it's hard to fix something that isn't broken. In that case, you've got a few options, SEO-wise:

(1) Hunt for greener pastures. You may have to find new, long-tail keywords where the bigger brands aren't playing. This is a big project beyond the scope of Q&A, but there are cases where you do need to go after new targets.

(2) Re-evaluate your keywords based on impact/traffic/conversion instead of ranking. It's possible, in some cases, that big brands could dominate the Top 5, but that, for some reason, you're still getting decent CTR on certain keywords. Do that analysis before you give up on these keywords.

(3) Hang in there. Sorry, it sounds like lame advice, but these kinds of updates often go back and forth, and you could see Google tweaking the mix over the next few months. In other words, whatever tactical shifts you make, don't completely cut off the pages/tactics that were ranking before (just in case).

All of that said, it's often the case that the situation is a bit grayer, and Google has made this shift because of quality issues it saw across a large number of sites. It's hard to speak in generalities, but Panda updates have gradually been harder on certain types of pages, like product categories, because these are often fairly thin (search results, etc.). If all of the smaller players took a similar approach, it's possible you all got devalued at once, and there may be a way to fix that.
Unfortunately, that kind of fix is really hard to advise on without at least some sense of the keywords/pages in question. I guess my main point is that it's easy to say "Google gave big brands all the rankings!" and see red, which can make you miss the few things you might have power to change.
| Dr-Pete1 -
How do we better optimize a site to show the correct domain in organic search results for the location the user is searching in?
Hi Amanda, What Moosa has explained is super important. Duplicating content across 2 different domains puts the business at risk of duplicate content filters. General best practice for a multi-location business is to build a single domain featuring all of your data and include in it a page for each physical location or each major city you serve. This post should help you plan a better marketing strategy: https://moz.com/blog/local-landing-pages-guide Hope this helps.
| MiriamEllis1 -
Are xlinks in SVGs crawlable?
Yep, we'll be updating sitemaps accordingly to hopefully help bots figure this thing out. I guess the core of my question is how link juice/page rank is handled for these types of links. Part of the reason we are trying to build this is to move these links out of the footer and into the page body, so I want to make sure that these links are crawlable (which it sounds like they are) and that they'll still be passing the same value on to the various regional pages the map links to.
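For context, the kind of in-body map link in question looks something like this (the region URL and shape are hypothetical). How much PageRank search engines pass through xlink:href links inside inline SVG isn't well documented, so it's worth verifying with a Search Console fetch-and-render that the links are being picked up:

```html
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 100 100">
  <!-- each region of the map wrapped in an SVG link element -->
  <a xlink:href="/regions/midwest/">
    <rect x="10" y="10" width="40" height="40" />
  </a>
</svg>
```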
| AaronPC0 -
Are iframes really an organic search problem?
That's right; I'm pretty sure Vimeo works the same way.
| evolvingSEO0 -
Schema for Product Categories
Hi Mike, You're correct that the Product markup is really intended for individual items, not a category of items. Under "Multiple Entities on the Same Page" here https://developers.google.com/structured-data/policies Google suggests that you mark up each item on the page individually. Other than that, yeah, not much else to mark up. Hope that helps!
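A minimal sketch of that per-item approach in microdata, following the schema.org Product/Offer vocabulary (the product name, URL, and price are hypothetical):

```html
<ul>
  <li itemscope itemtype="http://schema.org/Product">
    <a itemprop="url" href="/products/blue-widget">
      <span itemprop="name">Blue Widget</span>
    </a>
    <span itemprop="offers" itemscope itemtype="http://schema.org/Offer">
      <span itemprop="price" content="19.99">$19.99</span>
    </span>
  </li>
  <!-- repeat one <li> per product on the category page -->
</ul>
```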
| RuthBurrReedy0 -
YOAST SEO: How to set rel=canonical tags to the original article post
Hi there, I'll try to break it down, so bear with me! A canonical link is not seen by the user; by all accounts they would never know about the original origins of the content. This is fine if you've made a straight-up copy or want to pass all the link juice to the original source. A followed link, on the other hand, would appear at the bottom of the post, along the lines of "this article originates here: [link]", or however you wish to phrase it. The user would see this, and the pro is that you keep more of the link juice while also passing some on to the source. This only really works assuming you've made some slight changes to the article, e.g. commented on it or followed it up. The choice is of course up to you, but those are the two options. Best of luck!
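For reference, the first option renders as a canonical link element in the head of the republished copy (the URL is hypothetical). In Yoast SEO this can typically be set per post via the Canonical URL field in the Advanced tab of the post's SEO meta box:

```html
<!-- in the <head> of the republished post, pointing at the original source -->
<link rel="canonical" href="http://www.example.com/original-article/" />
```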
| GPainter0 -
Is it possible to find out where traffic is coming from on someone else's website?
The straight answer to this is no. You cannot look into their analytics data and see who is visiting those pages or where they are coming from, but you can always look into their backlink profile, see which websites are linking to those pages, and guess accordingly. Just a thought!
| MoosaHemani0 -
No Follow for Social Media Buttons?
Honestly speaking, if it were some other website I would have encouraged you to use followed links, but as these are social media plugins, there is no problem with that. I personally think either followed or nofollowed links should work fine here, and it shouldn't affect your SERP positions. Just a thought!
| MoosaHemani1 -
Htaccess Issue: URL not resolving properly
The problem here is the order. Remember that redirects fire from the top down. In this case, you're telling it to redirect the main page first and the secondary page second, so / redirects to http://www.mainsite.com/tshirts.html, and by the time it tries to redirect /blue.html the previous redirect has already fired. It appends blue.html to that target, and you end up with what you saw: http://www.mainsite.com/tshirts.htmlblue.html
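A sketch of the fix in .htaccess terms (the /tshirts/blue.html target is hypothetical, since the intended destination wasn't shown). mod_alias Redirect directives are prefix matches processed in order, so the most specific rules must come first:

```apache
# Broken order: the catch-all "/" prefix rule fires first, so a request
# for /blue.html gets its remainder appended to the target and becomes
# http://www.mainsite.com/tshirts.htmlblue.html
Redirect 301 / http://www.mainsite.com/tshirts.html
Redirect 301 /blue.html http://www.mainsite.com/tshirts/blue.html

# Fixed order: specific pages first, catch-all last
Redirect 301 /blue.html http://www.mainsite.com/tshirts/blue.html
Redirect 301 / http://www.mainsite.com/tshirts.html
```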
| TheeDigital0