Category: Intermediate & Advanced SEO
Looking to level up your SEO techniques? Chat through more advanced approaches.
-
How to deal with parameter URLs as primary internal links and not canonicals? Weird situation inside...
Hmmm. This is tricky. Some ideas - hope something here is helpful:
- Have you tried "Inspect URL" in Search Console? That has information about canonical selections these days and may be helpful.
- Are the canonical URLs (and not the parameter URLs) included in the XML sitemap? It might be worth cleaning that up if there is any confusion.
- Cookies could work - but it sounds to me as though that would go against your client's preferences, since the non-cookie version would have to work without parameters, which you indicated they weren't prepared to do.
- Failing all of that, what about picking one category to be the primary category for each product and canonicalising to that version (which will have internal links) instead of to the version with no parameters? Could that work? It might nudge Google towards respecting the canonical; see the sketch below.
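To make that last idea concrete, the markup would look something like this (URLs purely hypothetical):

```html
<!-- On example.com/product?cat=accessories (a secondary category view) -->
<link rel="canonical" href="https://example.com/product?cat=shoes" />
<!-- ...where ?cat=shoes is the chosen primary-category version that
     actually receives the internal links -->
```

Pointing the canonical at a URL that genuinely receives internal links gives Google a consistent signal, which tends to help a declared canonical get respected.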
| willcritchlow0 -
Most important things for SEO on a travel website
We use Moz for SEO on a travel agency: https://bit.ly/2RdQzNA. Thanks for the good article.
| medipars0 -
Site moved. Unable to index page: Noindex detected in robots meta tag?!
That's hugely likely to have had an impact. No-indexing pages before they were ready was a mistake, but the much bigger mistake was releasing the site early, before it was 'ready'. The site should only have been set live and released once ALL pages were ported to the new staging environment.

Also, if all pages weren't yet live on the staging environment, how could the person looking at staging / the old site have done all the 301 redirects properly?

When you no-index URLs you kill their SEO authority (dead). Often it never fully recovers and has to be rebuilt from scratch. In essence, a 301 to a no-indexed URL moves the SEO authority from the old page into 'nowhere' (cyber oblivion). The key lesson is: don't set a half-ready site live and finish development there. WAIT until you are ready, then perform your SEO / architectural / redirect manoeuvring.

Even if you hadn't no-indexed those new URLs, Google checks whether the content on the old and new URLs is similar (think Boolean string similarity, in machine terms) before 'allowing' the SEO authority from the old URL to flow to the new one. If the content isn't basically the same, Google expects the pages to 'start over' and 're-prove themselves'. Why? Well, you tell me why a new page with different content should benefit from the links of an old URL which was different - when the webmasters who linked to that old URL may well not choose to link to the new one. And because your new URLs were incomplete, they were probably carrying holding content (radically different from the content of the old URLs on the old site) - so it's extremely likely that even without the no-index tags, the migration would have fallen flat on its face.

In the end, your best course of action is: finish all the content, make sure the 301s are actually accurate (by the sound of it, many of them won't be), lift the no-index tags, and request re-indexation. If you are very, very lucky, some of the SEO juice from the old URLs will still exist and the new URLs will get some shreds of authority through (which is better than nothing). In reality, though, the pooch is already screwed by this point.
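For reference, this is the tag that needs lifting from each new URL before requesting re-indexation:

```html
<!-- Remove this robots meta tag once the page content is final -->
<meta name="robots" content="noindex">
```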
| effectdigital0 -
404 vs 410 Across Search Engines
Thank you both. Upon further digging in Bing's documentation, they do state that either a 404 or a 410 can be used.
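For anyone landing here later: on Apache, serving a 410 rather than a 404 for a permanently removed page can be done with mod_alias (the path below is hypothetical):

```apache
# Returns "410 Gone" for a page that has been permanently removed
Redirect gone /discontinued-product
```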
| sb10300 -
Ecommerce store on subdomain - danger of keyword cannibalization?
I posted a bit of a Reddit rant here under my personal SEO alias of "studiumcirclus": https://www.reddit.com/r/SEO/comments/bgqelg/changing_web_host_will_it_affect_google_rank/em1m0cg/?context=3 (click "View Entire Discussion"). Mainly these things vex me about the platform:

"In basic terms, Shopify is limited by its vision. They want to make sites easy to design for the average Joe, which means they have to spend most of their platform dev time on the back-end of the system and not the front-end of the sites it produces. If they're always bogged down making extra tick-boxes to change things in the back-end, how can they be keeping up with cutting-edge SEO? With WordPress you have a much larger dev community making add-ons, many of them completely free and still very effective. Because everyone is on WP, when new Google features, directives or initiatives come out they are quickly embraced (putting all sites on WP one step ahead). With smaller dev communities, platforms like Shopify or Magento lag behind. Why do people always expect that 'average' will rank 'well'? Ahead of the curve ranks well; average ranks averagely. Also, Shopify has some nasty page-speed issues which they won't acknowledge - they just argue instead of fixing things. It's just not good for SEO."

Other "Shopify is bad" evidence:
- https://moz.com/community/q/main-menu-duplication#reply_391855 - contains some of my thoughts on why Shopify isn't that good
- https://moz.com/community/q/site-crawl-status-code-430 - a relatively recent problem someone had with their Shopify site; scroll down to see my reply
- https://moz.com/community/q/duplicate-content-in-shopify-subsequent-pages-in-collections - someone else having tech issues with their Shopify site. While my answer was probably right, they probably couldn't implement the fixes.
| effectdigital0 -
Link to webdesign bureau in footer: follow or nofollow?
Test it. No-follow a portion of the links (if you have designed hundreds of sites, maybe try 10%). See if your results go up, stay the same or go down. If your results go down, remove the no-follow tags again. Even if the results don't instantly come back (Google can keep the no-follow reference even after the coding is removed), you won't have lost much because you kept your sample small.

SEMrush and Moz toxicity ratings rely a little too much on linguistic, 'semantic' relevance (e.g. "a link from a car manufacturer to a car insurance site is relevant because they're both about cars"). Deeper relevance (what Google is actually looking for) is more about "why is it relevant for the user to click on this link?" Toxicity scores may simply reflect that you have loads of links from sites which are 'thematically' irrelevant. But that doesn't necessarily make the links themselves irrelevant! It may well be useful for people to Google the sites you made, think they are cool, and wonder who designed them.

The truth is that neither SEMrush nor Moz knows exactly what Google thinks a good or bad link is. They basically look for patterns in backlink profiles and linking sites which have been involved in penalties that did occur on Google (which they know from their keyword / ranking indexation data). If 2-3 sites suddenly drop out of the rankings and they all share similar backlinks, SEMrush and Moz can estimate that those linking sites (under certain circumstances) may be bad. But it's not a 100% guarantee; indeed, if you disavow or no-follow loads of links based on these ratings alone, you often do see little performance dips (without doing more forensic, more holistic research).

If your main concern is that site-wide linking may be negatively affecting you, there could be a simple cure for that. Your idea of producing case studies on your own site is great, but it stops you getting free traffic and leads from the sites you designed - if those sites stop helping you rank well, or stop linking to you. Instead, you could create new pages on the clients' sites. Yeah, it seems crazy, but hear me out - there's some logic behind this which might create a good compromise, and it would be very interesting to test.

In the footer on your client's site, leave a link saying "Webdesign by Conversal". When users click that link, instead of taking them directly to your own site, point it to a page on the client's site with some design sketches and a bit of blurb about how you approached the project. THAT page (on your client's site) could then link to you directly. This way, you'd only get ONE link from each site, but the footer link would remain (though it would become an internal link) and continue to serve you. Maybe this could be a decent solution, but I've never tested it (sorry). The links from these pages on your clients' sites (accessible only from the footer links) could connect with the case study URLs on your own site, creating a unified experience which leads people down a funnel - to buying a site design from you.

I might try that on a few sample clients and monitor the results. If the results didn't drop, I'd at least feel better insulated against Penguin, and would probably then roll out another batch. Both link styles are sketched below.
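A minimal sketch of the two footer-link styles being compared (URLs hypothetical):

```html
<!-- Option A: external footer link, no-followed -->
<a href="https://conversal.example/" rel="nofollow">Webdesign by Conversal</a>

<!-- Option B: footer link to an internal mini case-study page on the
     client's own site, which then links out to the designer normally -->
<a href="/about-this-design/">Webdesign by Conversal</a>
```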
| effectdigital0 -
Same subcategory in different main categories
I would have no issue with using rel=canonical links in this kind of situation, where you cannot control the underlying CMS to the extent you would need to in order to entirely avoid the duplicate URLs. The only real risk, in my opinion, is the canonicalisation not being respected; but if these are essentially exact duplicate pages, I think that risk is low (and even if it happened, the impact would be relatively low too). Good luck!
| willcritchlow2 -
301 vs Canonical - With A Side of Partial URL Rewrite and Google URL Parameters - OH MY
Thanks for the solid advice - I really didn't know what to do. Your explanation of canonicals and 301s and how they really work was clear and very helpful. Thank you for your response!
| TStorm1 -
Upper and lower case URLs coming up as duplicate content
Ta. One element to check is the cause of the duplicates - solve that as well, otherwise it may be an ongoing problem. Good luck.
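If the cause turns out to be mixed-case internal links on an Apache server, one common fix is a blanket 301 to the lowercase form. This has to live in the main server or virtual-host config, because RewriteMap isn't allowed in .htaccess:

```apache
RewriteEngine On
RewriteMap lowercase int:tolower
# Redirect any path containing an uppercase letter to its lowercase twin
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule (.*) ${lowercase:$1} [R=301,L]
```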
| ClaytonJ1 -
Importance (or lack thereof) of meta keywords tags and Tags in Drupal
Meta keywords haven't been used by Google for over a decade.
| jasongmcmahon1 -
Google webcache of product page redirects back to product page
The only time I have encountered anything like this is when Google cached a URL in the middle of it being redirected. For example, on a big eCommerce store it's common for URLs to redirect when the product is unavailable or out of stock, and then go back to 200 (OK) when the product returns. It's possible that some dev issue occurred, or that a product was temporarily going through normal redirect behaviour - and Google happened to re-cache at that specific moment.

If the Wayback Machine (Google it) has a backup of the page from the same day (or very close), you might be able to see the same behaviour there to verify. Another possibility is that, for some reason, Google's 'Googlebot' user-agent (or just their caching bot) is being redirected on product pages - a defence to 'stop' Google from caching their URLs. (Some might argue caching can cause complications: e.g. if a site accidentally put up a product listing with false info and corrected it later, a user could 'prove' from the old cached version that they were mis-sold something - so some sites take measures to mess with Google's caching.)

Try accessing the pages with the "Googlebot" user-agent and see what happens; see the sketch below. Try this Chrome plugin, and make sure to clear your cache and such before attempting to connect. It could always be a temporary Google glitch, but it's wise to explore at least a few possible avenues before reaching that conclusion.
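A quick way to run that user-agent test from the command line (URL hypothetical):

```bash
# Compare the response headers Googlebot gets vs a normal browser;
# a redirect served only to Googlebot would support the theory above
curl -sI -A "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" https://example.com/product-page
curl -sI -A "Mozilla/5.0" https://example.com/product-page
```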
| effectdigital1 -
E-commerce site: one product in multiple categories - best practice
Hi,
This topic is quite old, but it's still relevant. I understand that the solution mentioned above is the most thorough one, but is there something wrong with just using canonicals? In a webshop we are managing, there are just a couple of subcategories that belong to different categories. An example:
- example.com/legal/economic-law/company-law
- example.com/tax/companies/company-law
Only these two URLs will generate duplicate content, since the categories above 'Company law' ('Economic law' and 'Companies') clearly have different content. Can't you just pick one version as the canonical one? Since we have just a couple of these categories, this is an easier solution. Thanks for your feedback, guys!
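For what it's worth, picking the /legal/ path as the preferred version (an arbitrary choice for illustration), the duplicate would simply carry:

```html
<!-- On example.com/tax/companies/company-law -->
<link rel="canonical" href="https://example.com/legal/economic-law/company-law" />
```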
| Mat_C0 -
Is there a way to forward backlink benefits from one domain to another without a redirect?
In this case, where I'm unable to do any sort of 301, are there any other in-page options that might be a reliable way to forward link equity? The other option is to keep pressing to change the domain of the login page to a subdomain of the marketing site, which is unlikely at this point - but even in that case, wouldn't the subdomain cause issues with link equity?
| OCN0 -
301 Old domain with HTTPS to new domain with HTTPS
So I figured out another way. I did a 301 at the registrar level directly to the https URL. This ensures every version of that domain (http, https, www or non-www) goes to the same https URL without 2 or 3 chained redirects.
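For anyone doing this at the server level instead of the registrar, a rough .htaccess sketch on the old domain might look like this (domains hypothetical; note the old domain still needs a valid SSL certificate for its https variants to redirect in a single hop):

```apache
RewriteEngine On
# Send every scheme/host variant of the old domain to the new https URL in one hop
RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.com$ [NC]
RewriteRule ^(.*)$ https://www.new-domain.com/$1 [R=301,L]
```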
| waqid0 -
Does Google ignore content styled with 'display:none'?
Thank you, that's kind of what I thought.
| rastellop0 -
Schema markup concerning category pages on an ecommerce site
Here's what I came up with: Validator Result. Note that there's no guarantee this will pass muster with Google, since they've said that Product schema is for use on "a product page that describes a single product". Here's an official response.
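For readers who can't see the validator link: one pattern often suggested for category pages - not necessarily what was validated here - is ItemList markup that simply points to the product URLs, rather than full Product markup. A hypothetical sketch:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "ItemList",
  "itemListElement": [
    { "@type": "ListItem", "position": 1, "url": "https://example.com/category/product-a" },
    { "@type": "ListItem", "position": 2, "url": "https://example.com/category/product-b" }
  ]
}
</script>
```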
| SkunkworksCreativeGroup1 -
Google spending the majority of its time on NAV bar vs. most important pages
Sounds like you are missing a sitemap, or the sitemap doesn't have priority set at folder/page level.
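If it helps, per-URL priority is set like this in a standard XML sitemap (URL and value illustrative; keep in mind priority is only a hint to crawlers):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/important-category/</loc>
    <priority>0.9</priority>
  </url>
</urlset>
```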
| DarinPirkey0 -
Why do SEO agencies ask for access to our Google Search Console and Google Tag Manager?
GTM allows a great deal of tracking capability without having to use a programmer to modify your code. Better tracking equals the ability to make better decisions and determine what is and isn't working. Google Search Console keeps an eye on the "heartbeat" of your website and helps you see any issues Google may have with the technical side of your site. Both are free, and both are great tools. The better question to ask would be, "What should I do if my agency DOESN'T ask for or use these tools?" If an agency isn't tracking performance and using that data to continually improve, I'd be questioning the agency.
| triveraseo0 -
How to carry across/capture linkjuice during an SEO site migration
This is exactly the right answer! Also remember that unless the content at the redirect origin and destination URLs is similar, Google may decide not to transfer the SEO authority across. So if you took the last active iteration of the old URL which 'earned' the SEO authority (and links), and compared its content (via something like a Boolean string-similarity tool; a rough sketch is below) to the new URL - and the read-out wasn't good - you might lose a little (or all) of your SEO authority.

The best thing to do is actually get the backlinks amended, if you possibly can, as this circumvents the whole problem. That being said, link amends can be time-consuming, and what you actually get back can be iffy (some webmasters can even get annoyed if not approached correctly).
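As a rough stand-in for such a tool, here's a minimal Python sketch that compares two pages with difflib (URLs hypothetical; comparing raw HTML is crude since it includes template boilerplate, and Google's actual similarity check is unknown):

```python
import difflib
import requests

def page_similarity(old_url: str, new_url: str) -> float:
    """Return a 0-1 similarity ratio between two pages' raw HTML."""
    old_html = requests.get(old_url, timeout=10).text
    new_html = requests.get(new_url, timeout=10).text
    return difflib.SequenceMatcher(None, old_html, new_html).ratio()

# A low ratio between a redirect's origin and destination suggests
# Google may treat the new URL as a fresh page rather than a successor
print(page_similarity("https://example.com/old-page", "https://example.com/new-page"))
```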
| effectdigital0