Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Latest Questions

Have an SEO question? Search our Q&A forum for an answer; if you can't find one, use your Moz Pro subscription to ask our incredible community of SEOs for help!


  • Hello. We'll need to see what it's telling you so we can help. Can you share a screenshot of the issue? We'll be happy to look at it and give advice. Just an FYI: the fact that a canonical tag exists on the page doesn't mean there aren't issues. So once we see what the issue is telling you, we can help you figure it out.

    Moz Local | | DarinPirkey
    0

  • Hi there, Others have already mentioned great starting points. The steps I would take here, in order, would look something like:

    1. Confirm that I'm looking at the right data, and identify where the drop is coming from. Is it mostly the blog? A certain section? Top-performing pages? Homepage? Or is it more of an all-around drop across most pages?
    2. Once you answer the first question, it will allow you to prioritize where to look. If you identify that most of the drop happened on your blog, for example, you can focus your attention there to answer questions like, "Have we changed anything on the blog recently from a technical perspective?", "Are we confident there is nothing technical stopping us from ranking?", etc.
    3. The more difficult situation is when there is a general decrease in traffic to most pages on the site (no apparent rhyme or reason). In these cases I would look back at which updates have affected the site before to see a trend. When did the drop happen? Does that drop line up with a recent update? If so, what was that update about? Then some form of technical audit, content audit, etc. would be reasonable next steps to identify the biggest issues in each.

    Some other points, in no particular order:

    - Remember that more content/pages does not necessarily mean more traffic. The fact that your formula was working a year ago doesn't mean it will work now (or work as well).
    - Looking at your site briefly, you have a good amount of content (~17,000 pages) but it's difficult to navigate and find articles. If I had to guess, I'd say the site could benefit from a redesign and a content overhaul/audit to ensure there are not too many overlapping pages and that they are easily discoverable by users (and crawlers).

    I know that's a bit of a brain dump, but I hope that (along with the other responses) helps point you to a starting point!

    Content & Blogging | | sergeystefoglo
    1

  • Hi, Thanks a TON for all the analysis and insights. Just mind-blowing info. Unfortunately we switched between different versions of the site; the recent one will be stable for years, and further changes will be handled very carefully without a complete transformation. Our open source crm page dropped starting in April this year, but the link from Capterra was removed back in 2018. They removed our product from the list and they no longer link directly to the websites (you can see the page now). Not sure why we lost traffic for this page all of a sudden, even though there isn't much ranking difference for the main high-search-volume keywords. We are going to investigate this and bring the page back to its normal traffic. Yes, we are trying to rank for "crm" as our primary keyword. Do you think we are not doing well for "crm" because we dropped for the "open source crm" page? Thanks

    Search Engine Trends | | vtmoz
    0

  • Hi there - Sam from Moz's Help Team here! I'm sorry to hear you've had some difficulty adding photos to your listing. The photos the tool can accept have to be hosted somewhere other than Moz, and the URL has to be publicly accessible. If you try to use a photo from a Facebook or Google Photos album, for example, you might encounter complications because those images are often protected behind your login. It's generally a good sign when a photo URL ends with an image file suffix, such as '.png' or '.jpg'. For example, you can use a free photo-hosting site, such as this one. You can upload your image there, get the full URL, and that's the URL you'd paste into Moz Local. Here's a short video demonstrating uploading a photo on this site and getting the full URL. Something else to note is that photo URLs cannot be too long because there is a character limit on each field. Try to make sure the photo URLs you use are publicly accessible and not too long, and you should be good to go! If there's anything else you need, just let us know by reaching out to help@moz.com.
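
    If it helps, here is a minimal sketch (not an official Moz tool) for pre-checking a photo URL before pasting it into Moz Local: it checks that the URL is publicly reachable, ends with an image suffix, and isn't excessively long. The 255-character limit is an assumption for illustration only, not a documented Moz field limit, and the example URL is a placeholder.

    ```python
    # Hypothetical pre-flight check for a Moz Local photo URL.
    import requests

    IMAGE_SUFFIXES = (".png", ".jpg", ".jpeg", ".gif")
    MAX_URL_LENGTH = 255  # assumed limit, for illustration only

    def check_photo_url(url: str) -> list[str]:
        """Return a list of problems found with the photo URL (empty = looks OK)."""
        problems = []
        if not url.lower().split("?")[0].endswith(IMAGE_SUFFIXES):
            problems.append("URL does not end with an image file suffix (.png, .jpg, ...)")
        if len(url) > MAX_URL_LENGTH:
            problems.append(f"URL is longer than {MAX_URL_LENGTH} characters")
        try:
            # A public image should be reachable without logging in.
            response = requests.get(url, timeout=10, allow_redirects=True)
            if response.status_code != 200:
                problems.append(f"URL returned HTTP {response.status_code}")
            elif not response.headers.get("Content-Type", "").startswith("image/"):
                problems.append("URL did not return an image Content-Type")
        except requests.RequestException as exc:
            problems.append(f"URL could not be fetched: {exc}")
        return problems

    print(check_photo_url("https://example.com/storefront.jpg"))  # placeholder URL
    ```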

    Moz Local | | samantha.chapman
    2

  • The only time I have encountered stuff like this is when Google cached a URL while it was in the middle of being redirected. For example, on a big eCommerce store it's common for URLs to redirect sometimes if the product is unavailable or out of stock, but then when they come back in stock they go back to 200 (OK). It's possible that some dev issue occurred, or that a product was temporarily experiencing normal redirect behavior, and Google happened to re-cache at that specific moment. If the Wayback Machine (Google it) has a backup of the page from the same day (or very close) you might be able to see the same behavior there to verify. Another possibility is that, for some reason, Google's 'Googlebot' user-agent (or just their caching bot) is being redirected on product pages as a defense to 'stop' Google from caching their URLs (which some might argue can result in complications, e.g. if they accidentally put up a product listing with false info and corrected it later - if the old version were still cached, a user could 'prove' that they were missold on something - so some sites take measures to mess with Google's caching). Try accessing the pages with the "Googlebot" user-agent and see what happens (see the sketch below). Try this Chrome plugin, and make sure to clear your cache and stuff before attempting to connect. It could always be a temporary Google glitch, but it's wise to explore at least a few possible avenues before reaching such a conclusion.
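
    As a rough sketch of that "fetch as Googlebot" check: request the product URL once with a normal browser user-agent and once with Googlebot's, with redirects disabled, and compare the status codes. The product URL below is a placeholder, not the poster's actual page.

    ```python
    # Compare how a page responds to a browser vs. the Googlebot user-agent.
    import requests

    GOOGLEBOT_UA = (
        "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"
    )
    BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"

    def status_for(url: str, user_agent: str) -> int:
        response = requests.get(
            url,
            headers={"User-Agent": user_agent},
            allow_redirects=False,  # we want to see the redirect itself, not follow it
            timeout=10,
        )
        return response.status_code

    url = "https://www.example.com/product/blue-widget"  # placeholder URL
    print("Browser UA:  ", status_for(url, BROWSER_UA))
    print("Googlebot UA:", status_for(url, GOOGLEBOT_UA))
    # If Googlebot gets a 301/302 while the browser gets a 200, the site is
    # treating Google's crawler differently, which would explain the odd cache.
    ```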

    Intermediate & Advanced SEO | | effectdigital
    1

  • Screaming Frog is good for crawling an existing Sitemap.xml file and can indeed produce Sitemap.xml files, but if your site is medium-sized (thousands of URLs) then really you'd want a dynamic one. Pretty sure the Yoast SEO plugin for WordPress has this built in with some tweak options and variables, so that's probably the place to start. With Screaming Frog you'd have to keep manually re-building your sitemap XML / XML index file, which sounds pointless, boring and tedious when relatively stable dynamic options exist.
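
    To illustrate what "dynamic" means here, this is a minimal sketch (not the Yoast implementation) of a sitemap that is generated from the current URL list on every request, so it never needs manual rebuilding. The Flask route and get_all_page_urls() helper are placeholders for whatever your CMS or database actually exposes.

    ```python
    # Minimal dynamic sitemap.xml endpoint, for illustration only.
    from datetime import date
    from flask import Flask, Response

    app = Flask(__name__)

    def get_all_page_urls() -> list[str]:
        # Placeholder: pull the live URL list from your CMS/database instead.
        return ["https://www.example.com/", "https://www.example.com/blog/"]

    @app.route("/sitemap.xml")
    def sitemap() -> Response:
        entries = "".join(
            f"<url><loc>{url}</loc><lastmod>{date.today().isoformat()}</lastmod></url>"
            for url in get_all_page_urls()
        )
        xml = (
            '<?xml version="1.0" encoding="UTF-8"?>'
            '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">'
            f"{entries}</urlset>"
        )
        return Response(xml, mimetype="application/xml")
    ```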

    Technical SEO Issues | | effectdigital
    2

  • Hi, can you give me a clear explanation, with one correct canonical example, of how to use it?

    Feature Requests | | prasad.nueve
    1

  • I wouldn't say there would be massive chances of a penalty here; that being said, it's an area where you could be 'adding value' and uniqueness to your pages and you're not doing it. So your pages may be 'less competitive' and you may be missing out on an opportunity. It's more of a competitive missed opportunity than an 'error' per se.

    In reality you should have one product page for each product and then just have 'product variants' for stuff like quantity, size, colour etc. On the modern web people find this easier to navigate, and since many sites do offer that, they might seem like more competitive places to shop for paint cans than your site. Price does matter, but it's not the sole arbiter of how products are ranked on Google's search engine - other stuff matters too. Unless you have a virtual monopoly on the product (only you can sell it, or only you can sell it at a greatly discounted price due to a special relationship with the supplier) then I would consider the UX and design of your site. No one wants an 'arse-ache' of a browsing experience.

    Many tools will flag what you are about to do as duplicate content, and they're technically right. But instead of going on some crazy copy-writing crusade, think about the architecture of your site. You can still have separate URLs for different product variations if you want, even via parameter variables (though that's a bit of a 'basic' implementation). If you make it clear to Google through new, more streamlined architecture that they're all actually the same product, the duplicate description(s) won't matter 'as much' (though they'll still be a missed opportunity for more diverse rankings IMO).

    You can make it even more apparent to Google that all the different variations are actually the 'same product' by utilising Product schema and some of the deeper stuff like ProductModel, which will bind it all together (see the sketch below). Whatever you implement, test it here. If this tool throws errors and warnings, keep working away until they're all fixed.

    Canonical tags are another option but they will decrease your ranking 'footprint', and in this case I wouldn't recommend them, despite the 'slight' content duplication risk (which in reality is mostly negligible).

    Final note: you say you have 'unique' descriptions, but remember, if they're used elsewhere online they're not unique. If they're unique internally that's great, but if you got them all from a supplier then... obviously loads of other sites are probably using them, which could easily be a big issue for you.
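
    Here's a hypothetical sketch of the Product + ProductModel idea: one Product entity with its size variants expressed as ProductModel entries, emitted as JSON-LD. Product names, SKUs and the brand are placeholders, and you should validate the real markup with a structured data testing tool before shipping it.

    ```python
    # Generate illustrative Product/ProductModel JSON-LD for a product page.
    import json

    product = {
        "@context": "https://schema.org",
        "@type": "Product",
        "name": "Exterior Paint - Brilliant White",
        "description": "One shared description for every size of this paint.",
        "brand": {"@type": "Brand", "name": "ExampleBrand"},
        "model": [
            {"@type": "ProductModel", "name": "Brilliant White 1L", "sku": "PAINT-BW-1L"},
            {"@type": "ProductModel", "name": "Brilliant White 5L", "sku": "PAINT-BW-5L"},
        ],
    }

    # Paste the resulting <script> block into the product page template.
    print('<script type="application/ld+json">')
    print(json.dumps(product, indent=2))
    print("</script>")
    ```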

    Technical SEO Issues | | effectdigital
    2

  • You raise valid concerns here and the truth is, it may not be hreflang related - but before we look at anything else, you do technically have a lang / hreflang conflict.

    Look at this example: view-source:https://mediabrosonline.com/en/ (view-source links only open in Chrome). Here's your self-referencing hreflang: <link rel="alternate" hreflang="en" href="https://mediabrosonline.com/en/" />. Here's your lang tag: lang="en-US". Your hreflang says the page is EN international (for all EN users) but your language tag says the page is only for EN-speaking users geographically located within the US. So which is it? Confusing for Google.

    Let's look at an example where the site 'does it right': view-source:https://mediabrosonline.com/mx/ (view-source links only open in Chrome). Here's your self-referencing hreflang: <link rel="alternate" hreflang="es-mx" href="https://mediabrosonline.com/mx/" />. Here's your lang tag: lang="es-MX". See! They correctly match.

    So this shows that on the EN page, the implementation is technically wrong. I know, I know - I am really 'splitting hairs' here. But before we look at other factors, let's make your original statement, "I have the correct hreflang tags", actually true! That way we can rule it out.
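
    If you want to check this across all your locale versions without reading the source by hand, here is a minimal sketch of the consistency check: fetch a page, read the <html lang="..."> attribute and the self-referencing hreflang, and flag a mismatch. It assumes the requests and beautifulsoup4 packages; the URLs are the examples from this thread.

    ```python
    # Compare the page's lang attribute with its self-referencing hreflang.
    import requests
    from bs4 import BeautifulSoup

    def lang_vs_hreflang(url: str) -> None:
        html = requests.get(url, timeout=10).text
        soup = BeautifulSoup(html, "html.parser")

        lang = (soup.html.get("lang") or "").lower()
        self_hreflang = ""
        for link in soup.find_all("link", rel="alternate"):
            if link.get("hreflang") and link.get("href", "").rstrip("/") == url.rstrip("/"):
                self_hreflang = link["hreflang"].lower()

        match = "MATCH" if lang == self_hreflang else "MISMATCH"
        print(f"{url}: lang={lang!r} hreflang={self_hreflang!r} -> {match}")

    lang_vs_hreflang("https://mediabrosonline.com/en/")  # expect a mismatch per the thread
    lang_vs_hreflang("https://mediabrosonline.com/mx/")  # expect a match per the thread
    ```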

    International Issues | | effectdigital
    1

  • Hi, thanks for the reply. The following is what I have done to my site, and I have kept these changes in place for close to the 5 months I have had the site:

    1. I turned off robot crawling for the entire site.
    2. I only have 2 pages, which are the privacy policy and home page (because the website is under construction, so I haven't added more pages).
    3. I don't have any contact info or phone number listed on the site.

    So the changes I have mentioned line up with what you have mentioned, and it makes better sense now. Once I launch the website and everything is fleshed out just the way I need it, should I expect the spam score to reset itself to 0% then? Thanks.

    Moz Tools | | Nor123
    0

  • First of all, Google Search Console can show you Crawled Pages and Indexed Pages. Google follows three basic steps to generate results from web pages: crawling, indexing, and serving (and ranking).

    Crawling: The first step is finding out what pages exist on the web. There isn't a central registry of all web pages, so Google must constantly search for new pages and add them to its list of known pages. This process of discovery is called crawling.

    Indexing: After a page is discovered, Google tries to understand what the page is about. This process is called indexing. Google analyzes the content of the page, catalogs images and video files embedded on the page, and otherwise tries to understand the page.

    Serving: When a user types a query, Google tries to find the most relevant answer from its index based on many factors. Google tries to determine the highest-quality answers and factor in other considerations that will provide the best user experience and most appropriate answer, by considering things such as the user's location, language, and device.

    In summary: no, not all your pages (at least not all the pages in your Search Console) will be available on SERPs.

    Moz Pro | | Roman-Delcarmen
    1

  • Two main options:

    1. Edit your template so that for additional pages it just adds something like "P2" or "Page 2" to the page title. This is the preferred option (see the sketch below).
    2. Block Rogerbot from crawling paginated content (https://moz.com/community/q/prevent-rodger-bot-for-crwaling-pagination) - this, however, would block Rogerbot (Moz's crawler) from identifying other issues you might have with your paginated content / URLs.
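
    As a minimal sketch of option 1, here is the idea expressed as a template helper: append a "Page N" suffix to the title on every page after the first, so paginated URLs no longer share identical titles. The function name and example titles are placeholders, not from any particular CMS.

    ```python
    # Hypothetical helper for building unique titles on paginated listings.
    def page_title(base_title: str, page_number: int) -> str:
        """Build a unique <title> for paginated listing pages."""
        if page_number <= 1:
            return base_title
        return f"{base_title} - Page {page_number}"

    print(page_title("Blue Widgets", 1))  # "Blue Widgets"
    print(page_title("Blue Widgets", 3))  # "Blue Widgets - Page 3"
    ```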

    Technical SEO Issues | | effectdigital
    0

  • "most provincial level domains are reserved for government institutions" - I didn't know this, very interesting bit of info there! It would be very hard to say if they had been definitively hindered but IMO it's seeming more and more likely

    Web Design | | effectdigital
    1

  • With the caveat that I'm not an expert in the affiliate space, the advice I have typically seen given in these situations is to put all the targets of the links on your site into a folder like /outbound/ and then block that entire folder in robots.txt so that the search engines don't crawl those links / don't see the 301s (see the sketch below). I wouldn't expect that alone to stop them realising that you are running an affiliate model, but there's nothing wrong with that business model per se. As far as linking out and sending people off to another site a lot: no, that sounds like the right user experience in this situation, and I can't think of any other way of achieving what you are trying to do. Good luck.
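
    For what it's worth, once the rule is in place you can sanity-check it with Python's standard-library robots.txt parser, as in this sketch. The domain and paths are placeholders; the robots.txt rule itself would look like "User-agent: *" followed by "Disallow: /outbound/".

    ```python
    # Verify that the /outbound/ folder is blocked in robots.txt.
    from urllib.robotparser import RobotFileParser

    parser = RobotFileParser("https://www.example.com/robots.txt")  # placeholder domain
    parser.read()

    for path in ("/outbound/partner-offer", "/category/paint"):
        allowed = parser.can_fetch("Googlebot", f"https://www.example.com{path}")
        print(f"{path}: {'crawlable' if allowed else 'blocked'}")
    ```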

    Technical SEO Issues | | willcritchlow
    1

  • I agree with Effectdigital - the best method is to go to the Acquisition section and look at the data by source and medium. As well as confirming whether you are getting organic traffic, it means you can confirm where you are getting traffic from if it isn't from Google. In terms of your keywords question, I couldn't say for certain why those tools aren't returning keywords, but what do you see if you load your site with JavaScript switched off (see the sketch below)? Sometimes JavaScript-reliant sites can mean that tools like the ones you describe can't quickly pull content to get suggestions. Couple that with not ranking for terms that these tools may have already picked up, and that could lead to what you're seeing. For what it's worth, if that is the cause, I'd consider server-side rendering - the easier you can make it for machines to read your content, the better. Hope that helps.
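
    A quick way to approximate the "JavaScript switched off" test is to fetch the raw HTML (a plain HTTP client doesn't execute JavaScript) and check whether a phrase you expect to rank for is actually present. This is a minimal sketch; the URL and phrase are placeholders.

    ```python
    # Check whether key content is visible in the raw (non-JS-rendered) HTML.
    import requests

    url = "https://www.example.com/"          # placeholder
    expected_phrase = "hand-made oak tables"  # a phrase your page should contain

    raw_html = requests.get(url, timeout=10).text
    if expected_phrase.lower() in raw_html.lower():
        print("Phrase found in raw HTML - content is visible without JavaScript.")
    else:
        print("Phrase missing from raw HTML - the page likely relies on JavaScript "
              "to render it, which may explain why keyword tools return nothing.")
    ```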

    Search Engine Trends | | R0bin_L0rd
    1