Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Other Research Tools

Find insights and conversations specific to the Research Tools within Moz Pro.


  • Hi James, Thanks for reaching out about your experience with AWS. You're definitely not the only person to have blocked AWS, and since we crawl from dynamic IPs, there isn't a stable address for you to whitelist. I'll definitely bring this up again with the product team, but at the moment I don't think we have any plans to move away from AWS. As regards the tone and content of your message, please review our community guidelines at https://moz.com/help/guides/moz-procedures/community-guidelines and help us keep things TAGFEE.

    | LisaHunt
    0

  • Hi there! Thanks for reaching out, and sorry for the trouble! I'm afraid there's not much you can do on your site to prevent this issue - it's down to our crawler. Unfortunately, our crawler isn't currently compatible with Server Name Indication (SNI), which appears to be in use on your site. In general, SNI is a totally acceptable security configuration; our crawler simply isn't equipped to handle it.

    The good news is that our team has been working hard on a new crawler that will support this configuration, and it's currently in its Beta phase. If you're interested in being a part of the beta testing, the link to the sign-up page is here: http://goo.gl/forms/LCvL9Ix8JDHfbAvr1. Once you fill out that form, you will automatically be added to the Beta during the next round. We will be rolling the new crawler out to all of our users once it's out of its Beta phase, but as it's still being actively worked on, I don't have an exact timeframe for that yet.

    In the meantime, I apologize for the inconvenience. If you have more questions, or if there's anything else we can do to help, feel free to shoot us a note at help@moz.com and we'll do our best to sort things out for you.

    | tawnycase
    0

  • Hi there! Tawny from Moz's help team here. The best way to stop our crawler reporting duplicate content for pages you aren't concerned about and don't intend to change is to block our crawler from those pages using the site's robots.txt file. For example, it looks like most of the pages reported as duplicates include URL parameters, so you should be able to add a disallow directive for each parameter to block our crawler from accessing them. It would look something like this:

        User-agent: Rogerbot
        Disallow: /*?type

    and so on, until you have blocked all of the parameters that may be causing these duplicate content errors. (Note that disallow paths need to begin with a slash; the * wildcard matches any path before the parameter.) You can also use the wildcard user-agent * to block all crawlers from those pages, if you prefer. Here is a great resource about the robots.txt file that might be helpful: https://moz.com/learn/seo/robotstxt. I'd recommend checking your robots.txt file in this handy Robots Checker Tool once you make changes, to avoid any nasty surprises.

    Let us know if we can help with anything else! Just drop us a line at help@moz.com and we'll do our best to get things straightened out for ya.

    | tawnycase
    0
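
    One way to sanity-check robots.txt rules like the ones described above, without waiting for the next crawl, is Python's built-in urllib.robotparser. The rules and URLs below are illustrative, not from any real site; note that robotparser matches rule paths as literal prefixes and does not understand the * wildcard, so test the exact URLs you care about:

    ```python
    from urllib import robotparser

    # Hypothetical rules blocking Rogerbot from a path and a parameterized URL.
    rules = """\
    User-agent: Rogerbot
    Disallow: /search
    Disallow: /products?type=
    """.splitlines()

    rp = robotparser.RobotFileParser()
    rp.parse(rules)

    # Prefix matching: any URL starting with a disallowed path is blocked.
    print(rp.can_fetch("Rogerbot", "https://example.com/search"))           # blocked (False)
    print(rp.can_fetch("Rogerbot", "https://example.com/products?type=a"))  # blocked (False)
    print(rp.can_fetch("Rogerbot", "https://example.com/about"))            # allowed (True)
    ```

    For wildcard-aware directives like `Disallow: /*?type`, a dedicated robots.txt checker is the safer test, since the standard-library parser ignores wildcards.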

  • Hi Jesper, Jo here from the Moz support team. As WebBoost has noted, our crawler isn't compatible with SNI, but the good news is that our team has been working hard on a new crawler that can handle SNI, and it's in Beta right now. If you're interested in being a part of the beta testing, the link to the signup page is here: http://goo.gl/forms/LCvL9Ix8JDHfbAvr1. Once you fill out that form, you will automatically be added to the Beta during the next round. We will be rolling the new crawler out to all of our users once it's out of its Beta phase, but as it's still being worked on, I don't have an exact timeframe for that yet. You're welcome to reach out to us at help@moz.com if you get stuck :] Cheers! Jo

    | jocameron
    0

  • Another excellent question. We're looking for the word "mobile" in the code in a number of languages to help determine whether a page is mobile-friendly or not.

    | tawnycase
    0
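
    The heuristic described above can be sketched as a simple pattern check. Everything here is an illustrative assumption - the word list and function are hypothetical, not Moz's actual implementation:

    ```python
    # Hypothetical word list; the real set of languages checked is not public.
    MOBILE_WORDS = ["mobile", "móvil", "mobiel", "móvel"]

    def mentions_mobile(html: str) -> bool:
        """Return True if the markup mentions the word 'mobile' in any of
        the listed languages (case-insensitive substring match)."""
        lowered = html.lower()
        return any(word in lowered for word in MOBILE_WORDS)

    print(mentions_mobile('<link rel="alternate" media="mobile" href="/m">'))  # True
    print(mentions_mobile('<p>Desktop only</p>'))                              # False
    ```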

  • Smart move. Does that answer your question? Let me know if I can be of any more help.

    | BlueprintMarketing
    0

  • I am receiving crawl errors for a Squarespace website. Can someone please help? Thanks.

    | sunelwal
    1

  • -- Update: the new Moz Pro site crawler is now live, which means we can now crawl your SNI site. If you have any queries, please check this post on fixing 803 errors, or reach out to help@moz.com --

    Hey there! Patrick linked to a really helpful Q&A article, but the other big cause of an 803 error would be SNI (Server Name Indication). Unfortunately, I don't see the actual domain included in your question, so it's hard to be sure, but that's usually the prime suspect. Our current site crawler does not support SNI, and if that is in use on your site, we will have trouble crawling it. The good news is that we have begun developing a new crawler that supports SNI, which is currently in beta. If you're interested in being a part of the beta testing, you can head here to sign up: http://goo.gl/forms/LCvL9Ix8JDHfbAvr1

    I hope that helps! Feel free to reach out to us at help@moz.com if you want to discuss these errors in greater detail on a non-public forum.

    | moz_support
    0
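
    If you want to check for yourself whether a site relies on SNI, one illustrative approach (using only Python's standard ssl module; the function name and parameters below are assumptions for the sketch) is to attempt a TLS handshake with and without the server_hostname extension and compare the results:

    ```python
    import socket
    import ssl

    def peer_cert(host: str, use_sni: bool = True, port: int = 443, timeout: float = 5.0):
        """Perform a TLS handshake and return the server's certificate info.

        If the handshake fails, or a different/default certificate comes back,
        only when use_sni is False, the site most likely relies on SNI.
        """
        ctx = ssl.create_default_context()
        if not use_sni:
            # Without SNI we may receive a default certificate that fails
            # hostname checks, so relax verification for this probe only.
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
        with socket.create_connection((host, port), timeout=timeout) as sock:
            server_hostname = host if use_sni else None
            with ctx.wrap_socket(sock, server_hostname=server_hostname) as tls:
                return tls.getpeercert()

    # Usage (requires network access), e.g.:
    # peer_cert("example.com")                 # handshake with SNI
    # peer_cert("example.com", use_sni=False)  # handshake without SNI
    ```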

  • Hi there, Lisa from the Moz help team here! I'm sorry to hear about the strangeness you're seeing with this. Unfortunately, you haven't given me much to go on to figure it out for you. Could you please email me at help@moz.com with the details of the URL you were looking at? It would also be helpful to know whether you were using On-Page Grader or whether this was within a campaign.

    | LisaHunt
    0

  • Hi there, These look like they might be broken mailto links. If you shoot us an email at help@moz.com, we'll take a look in your campaign and see whether we can pin down the root cause for you.

    | LisaHunt
    0

  • I have also received this warning since upgrading to HTTPS. Webmaster Tools shows no issues apart from a few 404 errors, and there's no indication of a problem when browsing directly.

    | jbk365
    0

  • Hey there, Sam from Moz's Help Team here! I'm afraid there isn't currently a way to revert back to the previous method. I'm really sorry about that. We implemented the pagination in the hopes that it would make the experience overall a little easier for our customers and a little more organised. If you do want to see all results at once though, you can always export the CSV of suggestions from Keyword Explorer instead! Let me know if I can help with anything else!

    | samantha.chapman
    0

  • Hey there! Tawny from Moz's Help Team here. I'm afraid I have to be the bearer of bad news - we don't have an API that would allow you to export all your crawl issues from all your campaigns at once. The only way to export that data is to get the Crawl CSV from each campaign individually. Sorry about that! If you've got more questions we can help with, feel free to shoot us a note over at help@moz.com and we'll do what we can to sort things out for ya!

    | tawnycase
    0

  • Thank you again! I think it's a great idea - I actually wanted a paragraph on top, but others wanted to try it this way first; they thought a really clean page would be most friendly. Maybe we're so used to knowing what the place is about that we forgot others need some orientation with text! I've been researching the sudden drop in rankings, and I don't think we did any of the things that could cause it. I'm hoping it's the Google flux thing and will go away. Guess we'll see. In any case, I am very grateful that you commented - it was VERY helpful. Amy

    | amybethmegjo
    0

  • Yes, I am looking at my Tracked Keywords, which is where I need this information. When will this likely be fixed? It is making it hard to use the reports at all right now, and flicking between other reports (which I may not have full access to) is not really a solution. Keyword difficulty is only helpful in the context of keyword volume; one without the other does not enable a sensible targeting decision to be made.

    | MrFrisbee
    0

  • That would be great, however I AM the SEO consultant. What tool would you recommend to identify potential bad backlinks?

    | chill986
    0

  • Hi, It's simple: you installed an SSL certificate but did not properly redirect the pages, so both the http and https versions are accessible, creating duplicates. You lost 5 DA points just last week during the latest update. Add this code to your .htaccess file and wait until the end of the month for the next update:

        RewriteEngine On
        RewriteCond %{HTTPS} off
        RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

    Hope it helps!

    | Clotaire-Damy
    0
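
    The rewrite rules above answer any http:// request with a 301 to the same host and path over https://. A small sketch of the expected mapping, handy for spelling out what to verify after deploying the rule (the function name is illustrative, not part of any tool):

    ```python
    from typing import Optional
    from urllib.parse import urlsplit, urlunsplit

    def expected_redirect(url: str) -> Optional[str]:
        """Return the https:// URL that the rewrite rule should 301 an
        http:// request to, or None if no redirect is expected."""
        parts = urlsplit(url)
        if parts.scheme != "http":
            return None
        # Same host, path, and query string; only the scheme changes.
        return urlunsplit(("https",) + tuple(parts[1:]))

    print(expected_redirect("http://example.com/page?x=1"))  # https://example.com/page?x=1
    print(expected_redirect("https://example.com/"))         # None
    ```

    You can then confirm the live behavior by requesting a few http:// URLs and checking that each returns a 301 whose Location header matches this mapping.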