Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Latest Questions

Have an SEO question? Search our Q&A forum for an answer; if you can't find one, use your Moz Pro subscription to ask our incredible community of SEOs for help!


  • Hi there! Thank you so much for the great question, and sorry for the confusion here! Moz's Spam Score is not calculated from the Spam Scores of your inbound links; rather, it is the percentage of sites with similar on-site features that we have found to have been penalized or banned by Google. If you're looking to improve your Spam Score on Moz, I would recommend checking out the 27 common features, identified through our machine learning model, which make up this score. Working to improve those features on your site will help to improve your Spam Score. I also have a guide from our Help Hub which talks more about Spam Score, the common features, and how to use this metric: https://moz.com/help/link-explorer/link-building/spam-score I hope this helps! If you have follow-up questions, you can always email us at help@moz.com.

    Moz Tools | | meghanpahinui
    0

  • Hard to tell what to do without seeing specifically where this error is coming up, but it usually has something to do with font sizes being too small on mobile devices. Mobile screens have much more densely packed pixels (images are sharper; the pixels are smaller and more numerous). This means that a 12-pixel font-size on desktop can look like 5 or 6 pixels on mobile devices, or even smaller. This is why, with responsive design, many people don't specify a pixel-based font-size. They make the font size relative (rather than absolute), be that by calculating font size against screen width as a percentage, or by working with newer font-sizing units (there are many, and many ways to use them). It's all about inflating font sizes on devices with more densely packed pixel screens, so that the fonts don't come out looking minuscule. Sometimes you can get errors where, even though the site's design is responsive, the CMS text editor appended styling info (including a px font-size) as someone was writing, which is then inlined, overruling all your carefully thought-out font-sizing CSS rules. If it's not mobile-related at all, it's likely that the font is just generally too small.
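    As a rough illustration of the relative-sizing approach described above (the selectors and values are placeholders, not taken from any specific site):

```css
/* Use relative units so text scales with the user's settings and viewport,
   instead of a fixed px value that can render tiny on dense mobile screens. */
html {
  font-size: 100%;   /* usually 16px, but respects the user's browser preference */
}

body {
  /* clamp(min, preferred, max): grows with viewport width,
     but never drops below a readable minimum on small screens */
  font-size: clamp(1rem, 0.9rem + 0.5vw, 1.25rem);
}

small {
  font-size: 0.875rem;   /* relative to the root size, not an absolute px value */
}
```

    Note that an inline `style="font-size: 12px"` injected by a CMS editor will still override rules like these, which is the failure mode described above.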

    Technical SEO Issues | | effectdigital
    2

  • Thanks so much for your input and advice! Your insight is appreciated and I hadn't thought about it as technically as you did.

    Other Research Tools | | LBoxerger
    0

  • The normal way to handle internationalization is to have separate geo/language subfolders, and potentially redirect users based on IP address, or prompt them to switch to the appropriate language or country if they want. For example, a US-based publisher with separate UK content might do this:

    domain.com/news
    domain.com/uk/news

    Is there a reason you want to keep the URL the same while serving multiple languages? Google's overview of the methods, and why they're good or bad, is a good starting point: https://support.google.com/webmasters/answer/182192?hl=en
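    If you do go with subfolders, hreflang annotations can tell Google which version targets which locale. A minimal sketch (the domain and paths below are placeholders, not the asker's actual site):

```html
<!-- In the <head> of both domain.com/news and domain.com/uk/news -->
<link rel="alternate" hreflang="en-us" href="https://domain.com/news" />
<link rel="alternate" hreflang="en-gb" href="https://domain.com/uk/news" />
<!-- x-default: the version shown when no language/region matches -->
<link rel="alternate" hreflang="x-default" href="https://domain.com/news" />
```

    Each version should list all alternates, including itself, and the annotations must be reciprocal across the pages.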

    Intermediate & Advanced SEO | | KaneJamison
    0

  • I would do a little troubleshooting to see what some causes could be:

    Check DevTools in Google Chrome to ensure you don't have "Disable cache" checked.
    Check your robots.txt file to ensure that you aren't blocking Google (look for something like "User-agent: Googlebot" followed by "Disallow: /").
    Look at Search Console for manual actions: go to "Security & Manual Actions" > "Manual actions".

    Let us know what you find and we can go from there.
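    For reference, the robots.txt pattern to look out for would be something like this (a hypothetical example, not the asker's actual file):

```text
# This pair of lines blocks Googlebot from crawling anything on the site:
User-agent: Googlebot
Disallow: /

# By contrast, an empty Disallow value allows full crawling:
User-agent: *
Disallow:
```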

    Search Engine Trends | | DarinPirkey
    0

  • Hi, We have answered your question via the email you submitted to help@moz.com Best, Eli

    Link Explorer | | eli.myers
    1

  • Hi brandonegroup - Have you thought about posting this question in the Magento forums? Christy

    Intermediate & Advanced SEO | | Christy-Correll
    0

  • I think it would be something like:

    <code>RewriteEngine On
    RewriteCond %{HTTPS} off
    RewriteRule .* https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]
    RewriteCond %{HTTP_HOST} !^www\. [NC]
    RewriteRule .* https://www.%{HTTP_HOST}%{REQUEST_URI} [L,R=301]</code>

    This would go in your .htaccess file. You should be sure to edit the file via FTP so that if you break it, you can restore the old version and fix the site in seconds (keep a backup of the old .htaccess file). If you are on a Windows-based server, you'd need the equivalent rule for your web.config file, as IIS doesn't use .htaccess. For most sites, the .htaccess file sits in the root of your FTP (the same place your primary index.php file lives, and where robots.txt and sitemap.xml also usually reside).
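    For the IIS case mentioned above, the equivalent web.config rules might look roughly like this (a sketch assuming the IIS URL Rewrite module is installed; rule names are illustrative):

```xml
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <!-- 301 redirect HTTP to HTTPS -->
        <rule name="Force HTTPS" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTPS}" pattern="off" ignoreCase="true" />
          </conditions>
          <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>
        <!-- 301 redirect non-www to www -->
        <rule name="Force www" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^www\." negate="true" />
          </conditions>
          <action type="Redirect" url="https://www.{HTTP_HOST}/{R:1}" redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```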

    Intermediate & Advanced SEO | | effectdigital
    0

  • This is a great answer. From the sounds of it, OP has one (or a mixture) of the following:

    A scraper site gone wrong, creating malformed links to your site
    Some kind of shady negative SEO attack trying to create garbage URLs on your site
    A data pollution attack trying to mess up your analytics
    A hack attack

    If a site has been hacked, it can sometimes take proper dev work to pull it out at the roots. A hacked site is a liability to Google, and Google doesn't like to rank hacked sites or content. I would suggest checking whether the site has been hacked with some urgency, and I back everything Gaston has said.

    White Hat / Black Hat SEO | | effectdigital
    1

  • Hi, Firstly, yes, that robots.txt is valid and would work for your purpose. There's a great tool (https://technicalseo.com/tools/robots-txt/) that allows you to put in your proposed robots.txt file contents, the URL you want to test and even the robot you want to test against and it lets you know the result.

    Technical SEO Issues | | Xiano
    0

  • Hi Wagada, I've seen this come up before at Google's forum: https://www.en.advertisercommunity.com/t5/Basics-for-Business-Owners/quot-Owner-quot-Q-amp-A-Responses-not-being-displayed/td-p/1675268 Read that one, and then I suggest you post a thread there on this topic and see if there is currently a known bug surrounding this.

    Local Listings | | MiriamEllis
    1

  • Just remember to evaluate performance. With the mega menu links gone, some of those deeper pages may now receive less SEO authority, which could sting your long-tail rankings. But the benefit should be less bleed from the higher-level pages, which target higher-tier search terms. Different approaches work better or worse for different sites, so don't be afraid to roll back if it doesn't work as you anticipated!

    On-Page / Site Optimization | | effectdigital
    2

  • What do you mean by "primary domain"? Are you talking about Google Search Console or something else? I assume you are not using the other domains other than to redirect? Let me know if the above is true. You can move to yet another domain, but as EffectDigital said, it is advised against because everything has to be reindexed again. It's possible to do, and things will be fine in the future, but it is going to hurt for some time.

    Local Strategy | | katemorris
    1

  • So, you basically can't 'force' Google to do anything, but there may be better ways to encourage them to remove these URLs. The only way to force Google to remove a URL is the URL removal tool in Google Search Console, but this only removes a page temporarily, and it's a pain to do en-masse submissions. As such, it's not my recommendation.

    One thing to keep in mind: you have loads of pages with no-index directives on them, but Google is also blocked from crawling those pages via robots.txt. If Google can't crawl the URLs, how can it find the no-index directives you have given? Robots.txt should be used for this, but your chronological deployment is off; it's too early. You should put it in place at the very, very end, once Google has 'gotten the message' and de-indexed most of the URLs (makes sense, yes?)

    My steps would be:

    1. No-index all these URLs, either with the HTML or the X-Robots (HTTP header) deployment (there are multiple meta robots deployments if editing the page code is going to be difficult). Also deploy noarchive in the same way to stop Google caching the URLs, and nosnippet to remove the snippets from Google's results for these pages, which will make them less valuable to Google in terms of ranking them.
    2. For the URLs that you don't want indexed, make the page or screen obviously render content that says the page is not available right now. This one might be tricky, as you can't do it just for Googlebot; that would be considered cloaking under some circumstances.
    3. On the pages which you have no-indexed, serve status code 404 to Google only (if it's just a status code, it's not considered cloaking). So for user agent Googlebot, make the HTTP response on those URLs a 404 (temporarily unavailable but coming back). Remember to leave the actual, physical contents of the page the same for both Googlebot and users, though.
    4. If that doesn't work, swap out the 404 (sent only to Googlebot) for a 410 (status code: gone, not coming back) to be more aggressive. Note that it will then be harder to get Google to re-index these URLs later. Not impossible, but harder (so don't open with this).
    5. Once most URLs have been de-indexed and de-cached by Google, put the robots.txt rule(s) back on to stop Google crawling these URLs again.
    6. Reverse all changes once you want the pages to rank (correct the page's contents, remove the nosnippet, noarchive and noindex directives, correct the status code, lift the robots.txt rules, etc.)

    Most of this hinges on Google agreeing with and following 'directives'. These aren't hard orders, but the status code alterations in particular should be considered much harder signals. Hope that helps!
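    As an illustration of the header-based deployment in step 1, here is an Apache sketch (assuming mod_headers is enabled; the URL pattern is a placeholder for whatever matches the affected pages):

```apache
# Serve noindex/noarchive/nosnippet via an HTTP header for matching URLs,
# without needing to edit the page HTML itself (Apache 2.4+ <If> syntax).
<If "%{REQUEST_URI} =~ m#^/old-section/#">
  Header set X-Robots-Tag "noindex, noarchive, nosnippet"
</If>
```

    The same X-Robots-Tag header can be set from any server or application layer; it is equivalent to the meta robots tag in the page HTML.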

    Local Website Optimization | | effectdigital
    1

  • My pleasure! And you might like to know, I have an article coming out on the Moz Blog on this topic later this month. Stay tuned

    Local Website Optimization | | MiriamEllis
    1