Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Latest Questions

Have an SEO question? Search our Q&A forum for an answer; if you don't find one, use your Moz Pro subscription to ask our incredible community of SEOs for help!


  • Thanks everyone for the responses, although it still hasn't been figured out. To recreate the issue (which still exists), use BrightLocal's local search results checker: choose "dermatologist" in zip code 10006, which is the location of the practice: https://www.brightlocal.com/local-search-results-checker/ Then open up the Maps results to see all of the dermatologists listed. My client Wall Street Dermatology isn't appearing at all under "dermatologist", which suggests some kind of suppression that we're trying to get to the bottom of.

    Local Listings | | sponnu0123
    0

  • The main reason it's not good is that Google crawls from different data centers around the world, so one day they may think the site is up, then the next they may think it's gone and down. Typically you use a user-agent 'lance' to pierce these kinds of setups. Screaming Frog, for example, lets you pre-select from a variety of user-agents (including 'Googlebot' and Chrome), but you can also write your own user-agent. Write a long one that looks like an encryption key, tell your client the user-agent you have defined, and let them create an exemption for it within their spam-defense system. Insert the user-agent (which no one else has or uses) into Screaming Frog and use it to let the crawler pierce the defense grid. Typically you would want to exempt 'Googlebot' (as a user-agent) from these defense systems, but that comes with a risk: anyone with basic scripting knowledge, or who knows how to install Chrome extensions, can alter the user-agent of their script (or web browser; it's under the user's control) with ease, and it is widely known that many sites make an exception for 'Googlebot', so it becomes a common vulnerability. For example, lots of publishers create URLs which Google can access and index, yet if you are a bog-standard user they ask you to turn off ad-blockers or pay a fee. Download the Chrome User-Agent extension, set your user-agent to 'Googlebot' and sail right through. Not ideal from a defense perspective. For this reason I have often wished (and I am really hoping someone from Google might be reading) that in Search Console you could give Google a custom user-agent string. You could then exempt that, safe in the knowledge that no one else knows it, and Google would use your own custom string to identify themselves when accessing your site and content. Then everyone could be safe, indexable and happy. We're not there yet.
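
A minimal sketch of the idea, if it helps (the user-agent string and URL below are made-up placeholders, not anything from the original thread; the same secret string would be pasted into Screaming Frog's custom user-agent setting):

```python
# Hedged sketch: fetch a page using a long, secret custom user-agent that the
# client's spam-defense system has been told to exempt.
import requests

# Must exactly match the exemption rule configured in the client's firewall/WAF.
SECRET_UA = "MyAgencyCrawler/1.0 (key=3f9c1b2e7a0d4c8e9b6f5a2d1c0e8b7a)"

response = requests.get(
    "https://www.example.com/some-page",
    headers={"User-Agent": SECRET_UA},
    timeout=10,
)
print(response.status_code, len(response.text))
```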

    Technical SEO Issues | | effectdigital
    0

  • Yes, it is the main architectural layout that Google does not recommend. Google's official guidance is here: https://support.google.com/webmasters/answer/182192?hl=en ... scroll down until you find a table. The closest architecture to the one you described is the parameter-based one, which Google explicitly does not recommend. If Google wouldn't recommend that, then without even those shallow parameter-based signifiers there's little hope that anything would go well. I have seen a lot of sites that try to serve different-language content from the same URLs, and they very rarely do well or perform at all in modern times. Read Google's advice and pick one of their pre-defined, recommended options. Whilst translation plugins can be cheap and useful, they're usually awful for SEO.
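
As a rough illustration of the subdirectory approach from Google's table (my own sketch, not from the original answer; the domain, locales and path are assumptions):

```python
# Hedged sketch: generate per-locale URLs and hreflang annotations for a
# subdirectory-based setup, i.e. each language lives at its own URL.
BASE = "https://www.example.com"
LOCALES = ["en", "de", "fr"]

def hreflang_tags(path):
    tags = []
    for locale in LOCALES:
        url = f"{BASE}/{locale}{path}"
        tags.append(f'<link rel="alternate" hreflang="{locale}" href="{url}" />')
    # x-default pointing at the English version as a fallback
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{BASE}/en{path}" />')
    return tags

for tag in hreflang_tags("/pricing/"):
    print(tag)
```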

    Intermediate & Advanced SEO | | effectdigital
    0

  • If you want to see it in the screenshot you gave me, I would imagine you would need a specific Google Analytics property for the TLD. If that's not possible, you can also check out this article which I found (it's not by me, but when I searched your query this is what came out): https://blog.achille.name/web-analyitics-en/google-analytics-filter-extracting-tlds-keywords-adwords/#.XTreYRJKiL8 Aside from that, I'm not sure what else could work if you want to see how many views per TLD. If you want a full dashboard on your TLDs, the method above might work, although it is dated.
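
Another rough option, purely as a sketch (it assumes you can export hostname-level pageview data to CSV; the column names and file path are my own placeholders):

```python
# Hedged sketch: roll exported pageviews up by TLD. Assumes a CSV with
# "hostname" and "pageviews" columns.
import csv
from collections import Counter

views_by_tld = Counter()

with open("hostname_pageviews.csv", newline="") as f:
    for row in csv.DictReader(f):
        # Naive TLD extraction: last label of the hostname (e.g. "com", "de").
        tld = row["hostname"].rstrip(".").rsplit(".", 1)[-1]
        views_by_tld[tld] += int(row["pageviews"])

for tld, views in views_by_tld.most_common():
    print(f".{tld}: {views} pageviews")
```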

    Intermediate & Advanced SEO | | CJolicoeur
    0

  • I personally favour a) Nerdybet.com/gambling-sites-directory/stateX. I find it more descriptive and I can add keywords to the URLs, e.g. Nerdybet.com/gambling-sites-directory/stateX/CityName

    On-Page / Site Optimization | | jasongmcmahon
    1

  • You'll find what you need here: https://moz.com/community/q/local-keyword-search-volume As they point out, you should use Google Ads Keyword Planner. Best of luck.

    Other Research Tools | | llevy
    0

  • I agree with you, in that most people wouldn't want to read 30 articles. All articles are related to consumer loans, but they vary when it comes to the "sub-subject" if I can use that term. I think I'll have to refine these silos to a more granular level. Been thinking of only putting the best moneypages together in one block, and then pick new tier layers according to importance and visitor stats. Thanks for the input.

    Technical SEO Issues | | llevy
    0

  • This is right. The 302 keeps the SEO authority on the URL, though if you take the piss with it (leave 302s up for months on end) then they will go bad and degrade (meaning that even when you remove the 302, it won't restore SEO authority to the old pages). Since the first 302 retains the SEO authority on the first URL, there's no SEO authority for the second 302 to mess up or cause problems with. It's bad UX but little else in reality. If OP is planning to redirect permanently and leave the 302 up forever, then obviously that will kill the SEO authority of the old page (dead), as it will be retained for a while on the old URL before 'dying'. OP may wish to consider whether the old page will be coming back or not and how long that might take. If it's going to be gone for many months (closing on a year) or multiple years, or is never coming back, alter the 302s to a single 301. If the page will be back in a few days, it's fine as it is; just don't forget to remove the 302s later.
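
A quick way to audit this kind of thing, as a hedged sketch (the URL is a placeholder):

```python
# Hedged sketch: check what status code a redirect returns and where it points,
# e.g. to spot a temporary 302 that has been left in place too long.
import requests

resp = requests.get("https://www.example.com/old-page", allow_redirects=False, timeout=10)
if resp.is_redirect or resp.is_permanent_redirect:
    print(f"{resp.status_code} -> {resp.headers.get('Location')}")
else:
    print(f"No redirect (status {resp.status_code})")
```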

    Intermediate & Advanced SEO | | effectdigital
    1

  • Not a problem! It's great that Moz's crawler picked up on this issue, as it could have caused some problems over time if it were allowed to get out of control.

    Moz Tools | | effectdigital
    0

  • If your traffic is redirecting to your new domain from the indexed old domain, that's fine. However, if you're worried about duplicate content because the old site has the same content as the new site, that can be a problem if the same page is being indexed on two different domains. You can tell Google that your domain has changed in the old Search Console. 1. Make sure both domains are set up in Google Search Console. 2. Set up a permanent redirect in the old site's .htaccess file, redirecting every page to the new site. 3. Search "Google change of address" in Google; the first result should be from Google Support. Click on that and it will give you a link to the old Search Console. 4. In the old Search Console, click on the gear on the far right; you will see "Change of address". 5. Be patient. It can take months before those old URLs are gone from Google.
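
Before filing the change of address, it's worth spot-checking that the old URLs really do 301 to their equivalents on the new domain. A minimal sketch (domains and paths are placeholders):

```python
# Hedged sketch: verify that pages on the old domain return a 301 pointing at
# the matching path on the new domain.
import requests

OLD = "https://www.old-domain.com"
NEW = "https://www.new-domain.com"
PATHS = ["/", "/about/", "/services/"]

for path in PATHS:
    resp = requests.get(OLD + path, allow_redirects=False, timeout=10)
    target = resp.headers.get("Location", "")
    ok = resp.status_code == 301 and target.startswith(NEW)
    print(f"{OLD + path}: {resp.status_code} -> {target or '(none)'} {'OK' if ok else 'CHECK'}")
```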

    Local Website Optimization | | CJolicoeur
    0

  • Personally, it's something that I would nip in the bud with 301 redirects. But if you are going to do that, make sure execution is flawless or you'll end up with problems.

    Technical SEO Issues | | effectdigital
    0

  • They can sometimes be harmful, yeah. Disavow the domain in Google's disavow tool. Remember to download the existing disavow file and add your new entries to it, otherwise you might undo some previous work: the file you upload doesn't get 'added' to what you have submitted previously, whatever you upload IS the complete file (be wary). Other than that, I'd just listen for any traffic from the domain and 301 redirect it somewhere else. The problem you'll get is that if you 301 it back to them, their redirect will pass back to you and you'll get an inter-domain redirect loop, and I don't know what the consequences of that are. You could just 301 redirect traffic and negative equity from that site to someone you don't like, no? OK, maybe a bit too volatile and thermonuclear. In all seriousness, the best thing to do is disavow and code your server to refuse to serve anything to requests coming from that domain (be they crawlers or users). Just shut it down and disavow it, that's what I'd do. Redirect wars are seldom beneficial.
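
A small sketch of the merge step, since it's easy to get wrong (file names and the domain are placeholders; the disavow file format is plain text, one `domain:` entry or URL per line):

```python
# Hedged sketch: merge new entries into the previously submitted disavow file
# rather than uploading only the new ones, because the uploaded file replaces
# whatever was submitted before.
EXISTING_FILE = "disavow_current.txt"
NEW_ENTRIES = ["domain:spammy-site.example"]

with open(EXISTING_FILE) as f:
    lines = [line.strip() for line in f if line.strip()]

for entry in NEW_ENTRIES:
    if entry not in lines:
        lines.append(entry)

with open("disavow_updated.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

print(f"Wrote {len(lines)} lines to disavow_updated.txt")
```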

    Intermediate & Advanced SEO | | effectdigital
    1

  • In addition to Jose's response: good old blogger outreach. Come up with a game plan focused on what subject matter, keywords and DAs you're after, then either you approach the bloggers yourself, or you pay a third party to procure the links for you. The latter is the most common approach, as link building can be very time-consuming. As with most things SEO, Moz has an article on this matter: https://moz.com/blog/blogger-outreach-for-your-clients. Also, if you have a Moz Pro account and campaigns set up and running, there is the Opportunities feature, accessible via the campaign dashboard.

    Intermediate & Advanced SEO | | jasongmcmahon
    1

  • If they are exactly the same listings in exactly the same order, then yes, you probably don't need both of those URLs. I'd go back to the architecture and try to work out why so many duplicate URLs were created, what the logic behind that is, and fix it from the foundation. Messing around with tags that Google ignores half the time is seldom the answer. It 'seems' simple, but in reality doesn't usually properly fix the main issues. Canonical tags, for example, do not consolidate backlink authority properly. 301s are an option, but then it's like: why have I created a whole shadow section that just 301s to another section? By that point you begin to realise the ridiculousness of the structure and think about fixing it properly.

    Technical SEO Issues | | effectdigital
    0

  • In general, Google cares only about cloaking in the sense of treating their crawler differently to human visitors - it's not a problem to treat them differently to other crawlers. So: if you are tracking the "2 pages visited" using cookies (which I assume you must be, since there is no other reliable way to know the 2nd request is from the same user without cookies), then you can treat Googlebot exactly the same as human users - every request is stateless (without cookies) and so Googlebot will be able to crawl. You can then treat non-Googlebot scrapers more strictly, and rate limit / throttle / deny them as you wish. I think that if real human users get at least one "free" visit, then you are probably OK - but you may want to consider not showing the recaptcha to real human users coming from Google (though you could find yourself in an arms race with the scrapers pretending to be human visitors from Google). In general, I would expect that if it's a recaptcha ("prove you are human") step rather than a paywall / registration wall, you will likely be OK in the situation where: Googlebot is never shown the recaptcha; other scrapers are aggressively blocked; human visitors get at least one page without a recaptcha wall; and human visitors can visit more pages after completing a recaptcha (but without paying / registering). Hope that all helps. Good luck!
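
A minimal sketch of that policy, under my own assumptions (the function names and thresholds are illustrative; the Googlebot check follows the documented reverse-then-forward DNS verification, and the page count would come from the site's own cookie/session handling):

```python
# Hedged sketch: verified Googlebot is never challenged, real visitors get one
# free page (tracked elsewhere via a cookie), everything else gets a recaptcha.
import socket

def is_verified_googlebot(user_agent, ip):
    if "Googlebot" not in user_agent:
        return False
    try:
        host = socket.gethostbyaddr(ip)[0]
        if not (host.endswith(".googlebot.com") or host.endswith(".google.com")):
            return False
        # Forward-confirm: the hostname must resolve back to the same IP.
        return socket.gethostbyname(host) == ip
    except OSError:
        return False

def should_show_recaptcha(user_agent, ip, pages_viewed):
    if is_verified_googlebot(user_agent, ip):
        return False          # never challenge the (verified) crawler
    return pages_viewed >= 1   # humans get one free page, then the recaptcha
```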

    Intermediate & Advanced SEO | | willcritchlow
    1

  • Other stuff (since I was able to reproduce exactly with a Romanian proxy): https://d.pr/i/sSkF9X.png (screenshot). So above you can see some links boxed in green, which have properly updated URLs (HTTPS only, no WWW prefix), whereas the entries in red contain links which still carry the WWW prefix (incorrect, not updated). I can see that the GMB (Google My Business) listing is still linking to a very old version of the URL (HTTP WWW, so wrong protocol and prefix); updating that might also be a positive signal to Google which could help. I notice that the redirect (sometimes) doesn't go to OP's homepage, it goes to a child variant of the homepage which contains parameters, assumedly for tracking purposes (e.g. "https://probike.ro/?SID=nn565sjakv33nk6h2haenvbr7k"). The thing is, it's not (always) going 'straight' to the 'clean' version of OP's homepage (sometimes it does, sometimes not), and Google can sometimes be slightly averse to indexing and listing parameter-based child URLs (unless they significantly alter content in a truly useful way, which this does not). Check out this video which shows it working perfectly as it should do, in Firefox: https://d.pr/v/v3lIiS (video). Looks fine, right? But when I try in Chrome: https://d.pr/v/IABstn (video) ... just so you know, I have sometimes had the redirect work fine in Chrome and at other times I have seen the failure in Firefox, so it's not browser-specific. I think it actually has more to do with session data or cookies, as I can usually reproduce the issue when I clear all browsing data, but every time I try to repeat it after that it's less likely to happen (in series). If Googlebot is following the 301 to some weird parameter URL instead of the true homepage, that could be why Google is taking SO long to update this.
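
One way to see this outside a browser (stateless, like a crawler) is to trace the redirect chain hop by hop. A hedged sketch, using the old-format URL discussed above as the starting point:

```python
# Hedged sketch: print every hop of the redirect chain to see whether the old
# HTTP/WWW URL ends up on the clean homepage or on a parameter-laden variant.
import requests

resp = requests.get("http://www.probike.ro/", allow_redirects=True, timeout=10)
for hop in resp.history:
    print(f"{hop.status_code}  {hop.url}  ->  {hop.headers.get('Location')}")
print(f"Final: {resp.status_code}  {resp.url}")
```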

    Technical SEO Issues | | effectdigital
    0

  • How does the spam score work on a blog-type website? Some of my pages and blog posts have a higher spam score than the main domain.

    Intermediate & Advanced SEO | | Sjani11
    1

  • Also, OP shouldn't forget to use the change of address tool, as they likely have the subdomain and main domain listed as separate properties in GSC.

    Intermediate & Advanced SEO | | effectdigital
    1