Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • You generally don't need to take any action on these types of links (you don't need to remove or disavow). Google can see they are just scraped duplicates of a real article, and will ignore them. But let's say they were genuinely harmful links (maybe paid links, or irrelevant links placed sneakily by you - i.e. a link to iPhones from a page about dogs); in that case, when you remove links it's always a good stop-gap to also disavow. Google might not re-crawl the URLs carrying the bad links right away, but in theory it will pick up on the disavow file more quickly.
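    For reference, if you ever do need to disavow: the disavow file is just a plain text file uploaded through Google's disavow tool, with one domain or URL per line. The domains below are made-up examples:

    ```
    # paid links we asked to have removed, disavowed as a stop-gap
    domain:spammy-link-network.example
    # one-off irrelevant placement
    https://dog-blog.example/best-iphones/
    ```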

    | evolvingSEO
    1

  • Hey Hassan, I can't see what you're seeing in GSC, but it looks like your logo is showing up in Google's actual search results. In my experience, GSC is still a little buggy, so if it's working fine in the wild, you're probably safe! Best, Kristina

    | KristinaKledzik
    0

  • It depends on the relationship between the two domains. Either way, as long as the two domains are similar/relevant to each other, I would definitely suggest leveraging your old domain (a DA of 20 isn't terrible) in one of two ways: 1. a 301 redirect, as you mentioned, or 2. keep the old domain active and link to and from it to build up the DA of both sites.
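    If you take option 1, here's a minimal sketch of what the redirect can look like on an Apache server (assuming .htaccess / mod_rewrite is available; the domain names are placeholders):

    ```apache
    # .htaccess on the old domain: permanently redirect every URL
    # to the same path on the new domain, preserving link equity
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^(www\.)?old-domain\.example$ [NC]
    RewriteRule ^(.*)$ https://new-domain.example/$1 [R=301,L]
    ```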

    | RyanMeighan
    0

  • My pleasure, Maureen! I'm so glad you asked about Moz Local. We have a whole new-and-improved Moz Local now, and yes, we are definitely supporting the UK. See more here: https://moz.com/help/moz-local Please write to help@moz.com if you have any questions about what we can do for your clients in the UK, and I'm so glad you found help in the forum.

    | MiriamEllis
    1

  • In response to your question, it really depends on how long his current page has been active and how long it has been indexed by Google. If there are links pointing to his current bio, it will stay active in the SERPs longer. Over time, the original page will stop showing in the SERPs and will be replaced with the new page containing his middle initial. It is always better to have more information than not; it is just like long-tail keywords. If you type his full first, middle and last name into search, he will most likely rank for all three queries as long as his domain has relevant authority. I hope this helps!

    | Colemckeon
    0

  • Thanks a lot, this is so helpful!

    | dannybaldwin
    0

  • Really interested to see that others have been receiving this too - we have had this flagged on a couple of sites / accounts over the past month or two.
Basically, Google Search Console's schema error view is 'richer' than that of Google's stand-alone schema tool, which has been left behind a bit in terms of changing standards. Quite often you can put the pages highlighted by GSC (Google Search Console) into Google's schema tool and they will show as having warnings only (no errors), yet GSC says there are errors (very confusing for a lot of people).
Let's look at an example: https://d.pr/i/xEqlJj.png (screenshot step 1) https://d.pr/i/tK9jVB.png (screenshot step 2) https://d.pr/i/dVriHh.png (screenshot step 3) https://d.pr/i/X60nRi.png (screenshot step 4) ... basically the schema tool separates issues into two categories, errors and warnings. But Google Search Console's view of schema errors is now richer and more advanced than that (so adhere to GSC specs, not schema tool specs, if they ever contradict each other!).
What GSC is basically saying is this: "Offers, review and aggregateRating are recommended only, and usually cause a warning rather than an error if omitted. However, we are now taking a more complex view. If any one of these fields / properties is omitted, that's okay, but one of the three MUST now be present - or it will change from a warning to an error. So to be clear, if one or two of these is missing, it's not a big deal - but if all three are missing, to us at Google the product no longer constitutes a valid product."
So what are the implications of having schema which generates erroneous, invalid products in Google's eyes? This was the key statement I found from Google. They have this document on the Merchant Center (all about Google Shopping paid activity): https://support.google.com/merchants/answer/6069143?hl=en-GB They say: "Valid structured markup allows us to read your product data and enable two features: (1) Automatic item updates: Automatic item updates reduce the risk of account suspension and temporary item disapproval due to price and availability mismatches. (2) Google Sheets Merchant Center add-on: The Merchant Center add-on in Google Sheets can crawl your website and uses structured data to populate and update many attributes in your feed. Learn more about using Google Sheets to submit your product data. Prevent temporary disapprovals due to mismatched price and availability information with automatic item updates. This tool allows Merchant Center to update your items based on the structured data on your website instead of using feed-based product data that may be out of date."
So basically, without 'valid' schema markup, your Google Shopping (paid) results are much more likely to get rejected, as Google's organic crawler passes data to Google Shopping through schema (and presumably they will only do this if the schema is marked as non-erroneous). Since you don't use Google Shopping (PLA - Product Listing Ads) - well, you haven't said anything about this - that 'primary risk' is mostly mitigated.
It's likely that without valid product schema, your products will not appear as 'product' results within Google's normal, organic results. As you know, occasionally product results make it into Google's normal results. I'm not sure if this can be achieved without paying Google for a PLA (Product Listing Ad) for the hypothetical product in question.
If webmasters can occasionally achieve proper product listings in Google's SERPs without PLA, e.g. like this: https://d.pr/i/XmXq6b.png (screenshot) ... then be assured that, if your products have schema errors, you're much less likely to get them listed in such a way for free. In the screenshot I just gave, they are clearly labelled as sponsored (meaning that they were paid for). As such, I'm not sure how much of an issue this would be.
For product URLs which rank in Google's SERPs but do not render 'as' products: https://d.pr/i/aW0sfD.png (screenshot) ... I don't think that such results would be impacted as heavily. You'll see that even with the plain-text / link results, sometimes you get schema embedded, like those aggregate product review ratings. Obviously, if the schema had errors, the richness of the SERP may be impacted (the little stars might disappear or something).
Personally I think that this is going to be a tough one that we're all going to have to come together and solve collectively. Google are basically saying: if a product has no individual review they can read, no aggregate star rating from a collection of reviews, and no offer (a product must have at least one of these three things), then to Google it doesn't count as a product any more. That's how it is now; there's no arguing or getting away from it (though personally I think it's pretty steep, and they may even back-track on this one at some point, due to it being relatively infeasible for most companies to adopt for all their thousands of products).
You could take the line of re-assigning all your products as services, but IMO that's a very bad idea. I think Google will cotton on to such 'clever' tricks pretty quickly and undo them all. A product is a product and a service is a service (everyone knows that). Plus, if your items are listed as services they're no longer products, and may not be eligible for some types of SERP deployment as a result.
The real question for me is: why is Google doing this? I think it's because marketers and SEOs have known for a long time that any type of SERP injection (universal search results, e.g. video results, news results, product results injected into Google's 'normal' results) is more attractive to users, and because people 'just trust' Google, those results get a lot of clicks.
As such, PLA (Google Shopping) has been relatively saturated for some time now, and maybe Google feel that the quality of their product-based results has dropped in some way. It would make sense to pick 2-3 things that really define the contents of a trustworthy site which is being more transparent with its user-base, and then to re-define 'what a product is' around those things.
In this way, Google will be able to reduce the number of PLA results, reduce the amount of 'noise' they are generating, and keep the extrusions (the nice product boxes in Google's SERPs) for the sites that they feel really deserve them. You might say: if this could result in their PLA revenue decreasing, why do it? Seems crazy. Not really though, as Google make all their revenue from the ads that they show. If it becomes widely known that Google's product-related search results suck, people will move away from Google (in fact, they have often quoted Amazon as being their leading competitor, not another search engine directly).
People don't want to search for website links any more. They want to search for 'things': bits of info that pop out (like how you can use Google as a calculator or dictionary now, if you type your queries correctly). They want to search for products, items, things that are useful to them. IMO this is just another step towards that goal.
Thank you for posting this question, as it's helped me get some of my own thoughts down on this matter.
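    To make that concrete, here's a minimal sketch of Product markup that should count as 'valid' under GSC's rule (at least one of offers / review / aggregateRating present); the product name and all figures are invented for illustration:

    ```html
    <script type="application/ld+json">
    {
      "@context": "https://schema.org/",
      "@type": "Product",
      "name": "Example Mountain Bike Tyre",
      "description": "Invented example product used to illustrate the markup.",
      "offers": {
        "@type": "Offer",
        "price": "29.99",
        "priceCurrency": "GBP",
        "availability": "https://schema.org/InStock"
      },
      "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "27"
      }
    }
    </script>
    ```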

    | effectdigital
    1

  • Cool, thanks - it's nice to see some real insights on this.

    | Andreea-M
    0

  • Impressions don't really mean much at all, as Google often experiments with ranking sites for new keywords, then decides they are not relevant and takes the rankings away again. What we really need to see is 1yr+ of traffic (Google Analytics) and clicks (Search Console). Even that may not be enough to define exactly what's going on.
The site is mainly tagged with the FA language, which is Persian. AFAIK (as far as I know) most Persian people live in Iran, which used to be called Persia (and which is still named Persia by some people, though on the international stage the nation is referred to as Iran).
Right now there's a lot going on in the news between the USA (where Google is based) and Iran. It does make me wonder - could that be part of the issue? Obviously it would be impossible to gain clarification but... One thing I know is that Google is foremost an American company, and that right now the USA has "engaged in a campaign of maximum financial pressure on the Iranian regime and intends to enforce aggressively these sanctions that have come back into effect" - source
Who knows what's going on behind the scenes. Right now, Google is really clamping down on 'soft' medical practices within their SERPs, which we know from all the YMYL / Medic updates. I know that Google only has a limited presence in Iran (as you can see, they won't even give Google-Iran a TLD; they use parameters in the URL structure to sort of generate a relevant page). This could in part be due to internet censorship in Iran. We know that even the app market, Google Play, is extremely locked down in Iran.
Without taking sides or making any judgements at the international level (something we wouldn't do), it does seem that Google has difficulties operating in Iran in the same way that they operate in the West. The USA is clearly sending signals to Iran right now on the international stage (which are also being returned); as such, it's not hard to see that an Iranian site (especially one with potential Medic / YMYL issues) might fail to rank on an American search engine.
Your site seems to use the "Netmihan Communication Company Ltd" ISP, which would confirm that the site is based in Iran (rather than just being built for an Iranian audience by people who may be external to Iran). I have the city down as Rasht.
Taking no sides here, it's possible that your site has become a casualty of international conflict (at least on the communication and economic level) and additionally of the YMYL / Medic updates, which may have stung you regardless of your location.
Hope this is helpful to you, and hope you have a great day.

    | effectdigital
    0

  • Just so you know, a meta noindex can be applied through the HTML, but also through the HTTP header (X-Robots-Tag), which might make it easier to implement on such a heavily generated website.
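    For example, on an Apache server (assuming mod_headers is enabled; the PDF pattern is purely illustrative) the header version of noindex looks something like this:

    ```apache
    # send the noindex directive as an HTTP header instead of a meta tag,
    # e.g. for generated PDF files where there is no HTML to edit
    <FilesMatch "\.pdf$">
        Header set X-Robots-Tag "noindex"
    </FilesMatch>
    ```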

    | effectdigital
    1

  • The main reason it's not good is that Google crawls from different data-centers around the world. So one day they may think the site is up; the next, they may think the site is gone and down.
Typically you use a user-agent 'lance' to pierce these kinds of setups. In Screaming Frog, for example, you can pre-select from a variety of user-agents (including 'Googlebot' and Chrome), but you can also author or write your own user-agent.
Write a long one that looks like an encryption key. Tell your client the user-agent you have defined and let them create an exemption for it within their spam-defense system (a sketch of what that might look like is at the end of this answer). Insert the user-agent (which no one else has or uses) into Screaming Frog, and use it to allow the crawler to pierce the defense grid.
Typically you would want to exempt 'Googlebot' (as a user-agent) from these defense systems, but it comes with a risk. Anyone with basic scripting knowledge, or who knows how to install Chrome extensions, can alter the user-agent of their script (or web browser - it's under the user's control) with ease, and it is widely known that many sites make an exception for 'Googlebot' - thus it becomes a common vulnerability. For example, lots of publishers create URLs which Google can access and index, yet if you are a bog-standard user they ask you to turn off ad-blockers or pay a fee. Download a user-agent switcher extension for Chrome, set your user-agent to "Googlebot" and sail right through. Not ideal from a defense perspective.
For this reason I have often wished (and I am really hoping someone from Google might be reading) that in Search Console you could give Google a custom user-agent string. You could then exempt that, safe in the knowledge that no one else knows it, and Google would use your own custom string to identify themselves when accessing your site and content. Then everyone could be safe, indexable and happy. We're not there yet.
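    What the exemption looks like depends entirely on the client's defense system, but purely as a sketch: if the block were implemented as a simple Apache mod_rewrite rule, the secret user-agent (an invented token here - generate your own) could be let through like this:

    ```apache
    RewriteEngine On
    # never block requests sending the agreed secret user-agent string
    RewriteCond %{HTTP_USER_AGENT} !=audit-crawler-9f4c2e71b8d04a5f93ab6c0d
    # ...the client's existing bot-matching conditions would sit here...
    RewriteCond %{HTTP_USER_AGENT} (crawler|spider|scraper) [NC]
    RewriteRule .* - [F]
    ```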

    | effectdigital
    0

  • I agree with you, in that most people wouldn't want to read 30 articles. All articles are related to consumer loans, but they vary when it comes to the "sub-subject", if I can use that term. I think I'll have to refine these silos to a more granular level. I've been thinking of only putting the best money pages together in one block, and then picking new tier layers according to importance and visitor stats. Thanks for the input.

    | llevy
    0

  • Other stuff (since I was able to reproduce exactly with a Romanian proxy): https://d.pr/i/sSkF9X.png (screenshot)
Above you can see some links boxed in green, which have properly updated URLs (HTTPS only, no WWW prefix), whereas the entries boxed in red contain links which still carry the WWW prefix (incorrect, not updated).
I can see that the GMB (Google My Business) listing is still linking to a very old version of the URL (HTTP WWW, so wrong protocol and prefix) - updating that might also be a positive signal to Google which could help.
I notice that the redirect (sometimes) doesn't go to OP's homepage; it goes to a child variant of the homepage which contains parameters, presumably for tracking purposes (e.g.: "https://probike.ro/?SID=nn565sjakv33nk6h2haenvbr7k"). The thing is, it's not (always) going straight to the 'clean' version of OP's homepage (sometimes it does, sometimes not), and Google can sometimes be slightly averse to indexing and listing parameter-based child URLs (unless they significantly alter content in a truly useful way, which this does not).
Check out this video, which shows it working perfectly, as it should do, in Firefox: https://d.pr/v/v3lIiS (video) Looks fine, right? But when I try in Chrome: https://d.pr/v/IABstn (video) ... just so you know, I have sometimes had the redirect work fine in Chrome, and at other times I have seen the failure in Firefox, so it's not browser-specific. I think it actually has more to do with session data or cookies, as I can usually reproduce the issue when I clear all browsing data, but every time I try to repeat it after that it's less likely to happen (in series).
If Googlebot is following the 301 to some weird parameter URL instead of the true homepage, that could be why Google is taking SO long to update this.
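    A quick way to watch the redirect chain hop by hop is a small script; here's a sketch using Python's requests library (fetching with no cookies, which seems to be the state in which the SID variant appears):

    ```python
    import requests

    # fetch the old-style URL with a cookie-less request and print every hop,
    # to see whether the 301 lands on the clean homepage or a ?SID= child URL
    resp = requests.get("http://www.probike.ro/", allow_redirects=True, timeout=10)
    for hop in resp.history:
        print(hop.status_code, hop.url, "->", hop.headers.get("Location"))
    print(resp.status_code, resp.url)  # final destination
    ```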

    | effectdigital
    0

  • Personally, it's something that I would nip in the bud with 301 redirects. But if you are going to do that, make sure execution is flawless or you'll end up with problems.

    | effectdigital
    0

  • If they are exactly the same listings in exactly the same order, then yes, you probably don't need both of those URLs. I'd go back to the architecture, try to work out why so many duplicate URLs were created and what the logic behind that is, and fix it from the foundation. Messing around with tags that Google ignores half the time is seldom the answer. It 'seems' simple, but in reality it doesn't usually properly fix the main issues. Canonical tags, for example, do not consolidate backlink authority properly. 301s are an option, but then it's like: why have I created a whole shadow section that just 301s to another section? By that point you begin to realise the ridiculousness of the structure and think about fixing it properly.
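    For completeness, if you do lean on the canonical tag while the architecture gets fixed, the duplicate listing URL would carry something like this in its head (the URLs here are placeholders):

    ```html
    <!-- on the duplicate listing URL: point Google at the preferred version -->
    <link rel="canonical" href="https://www.example.com/category/widgets/" />
    ```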

    | effectdigital
    0