Welcome to the Q&A Forum

Browse the forum for helpful insights and fresh discussions about all things SEO.

Category: Technical SEO Issues

Discuss site health, structure, and other technical SEO issues.


  • Hi, thanks for your question. You would do this in Pardot, so I recommend reaching out to their support team for details. You may want to ask them how to edit the landing page template(s) you are using so that title tags are dynamically inserted into your landing pages. Hope that helps! Christy
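
    To give a concrete picture of what to ask Pardot about, a layout template's head section might look something like this (a sketch, assuming Pardot's %%title%% and %%description%% variable tags are available in your template; confirm the exact tags with their support team):

    ```html
    <head>
      <!-- Assumed Pardot variable tags; %%title%% would pull in
           each landing page's own title dynamically -->
      <title>%%title%% | Your Company</title>
      <meta name="description" content="%%description%%" />
    </head>
    ```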

    | Christy-Correll
    0

  • Thank you so much for your response! Yes. Could you please email me at eliotostiguy@gmail.com? I will be able to give you the URL via email.

    | H.M.N.
    0

  • The head office happens to be the e-commerce store, and then there are actual physical stores that sell the same products in person. So we do want visibility for 'HQ' as the main 'entity'. Yes, if anyone has a problem they contact the shop or the HQ/e-commerce store. With that in mind, I still need clarification on which schema to use.

    | MickEdwards
    0

  • Glad to be of service!

    | effectdigital
    0

  • Hi Bek, thanks for your reply. Below is the message I get. I've checked my source code, where my H1 reads "Browse 164 live paralegal jobs", so as you can see I have the keyword 'paralegal jobs' once only.

    'Why it's an issue: Although using targeted keywords in H1 tags on your page does not directly correlate to high rankings, it does appear to provide some slight value. It's also considered a best practice for accessibility and helps potential visitors determine your page's content, so we recommend it. Over-using keywords, however, can be perceived as keyword stuffing (a form of search engine spam) and can negatively impact rankings, so use keywords in H1 tags two or fewer times. To adhere to best practices in Google News and Bing News, headlines should contain the relevant keyword target and be treated with the same importance as title tags. See Four Graphics to Help Illustrate On Page Optimization. How to fix it: Use your targeted keywords at the beginning of your H1 headers once or twice (but not more) on the page. Optimal Format: keywords in my headers. Sample: # The Moz Blog'

    Let me know your thoughts please, Bek. I'm quite confused about this error message and really need to get it sorted, because I get the same error message for other pages on my site, which is www.purelegaljobs.com.

    Thanks a lot,
    Serg
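
    For reference, what the checker is asking for amounts to a heading like this in the page HTML (a sketch reusing the heading text above):

    ```html
    <!-- One H1 containing the target keyword a single time,
         within the tool's recommended "two or fewer" uses -->
    <h1>Browse 164 live paralegal jobs</h1>
    ```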

    | Serg155
    0

  • Without any indicators that Google does think the links are spammy, I wouldn't worry about this too much. If you start to notice performance issues which you can isolate to these footer links, then I'd nofollow them right away.

    Usually site-wide links are only an issue between different domains, and even then, only if it's not a multi-domain site. A multi-domain site is usually where you have exactly the same site with linguistic differences, spread across multiple domains (so instead of having site.com/fr/ and site.com/en/, you have site.fr and site.co.uk). As long as the templates are highly, highly similar and Google begins linking the 'brand entity' across those sites, there shouldn't be a problem.

    Lots of site-wide links placed in footers across the web (cross-domain) are paid-for links to manipulate SEO rankings. Those are bad. If your links are 'editorial' in nature (e.g. the site owner or editor decided they were required for user benefit), then I wouldn't be so concerned. There's always the chance Google's algorithm could get it wrong, and you could eventually have a problem.

    What you need to decide is: would you rather have some small performance issues now (by removing the links or nofollowing them) and prevent any further 'possible' action in the future? Or would you rather take a small risk and keep your results solid? No one knows 100% how Google's algorithm(s) work (not even Googlers). As such, there are elements of chance at play here, and only you can decide what you are happy with:

    A) Undo or nofollow the links now, for a high chance of mild devaluation and some affected results now, but it will almost 100% stop any site-wide linking penalty (which could wipe out all results) from occurring. The damage of that would be devastating, but the chance of it occurring in the first place is low.

    B) Leave the links as they are. Experience no mild devaluations or performance issues at all, for now. But possibly, in the future, you get struck with a penalty and lose everything. The chances of that seem very low, but if it does happen... ouch.

    Sometimes both your choices are less than ideal, but you still have to choose! If it were me, I think (with the information you have supplied thus far) I'd leave it alone for now, but watch performance like a hawk.
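
    If you do decide to nofollow them, it's a single attribute on each footer anchor (a minimal sketch; the URL and anchor text here are hypothetical):

    ```html
    <!-- rel="nofollow" asks search engines not to pass ranking
         signals through this footer link -->
    <a href="https://example.com/" rel="nofollow">Partner Site</a>
    ```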

    | effectdigital
    0

  • Hi, Gutenberg is simply a back-office editor. It has no impact on SEO. How you organise your web pages is up to you, whether you use the classic editor (for which there is still a plugin) or the new editor. Neither affects SEO as long as the on-page content and technical aspects are correct: relevant title tags, on-page content, images... all the usual stuff! Regards, Nigel
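
    In HTML terms, the "usual stuff" boils down to something like this (a generic sketch, nothing Gutenberg-specific; all names and text are made up):

    ```html
    <head>
      <!-- A relevant, descriptive title tag -->
      <title>Blue Widgets for Small Gardens | Example Co</title>
    </head>
    <body>
      <h1>Blue Widgets for Small Gardens</h1>
      <p>On-page content that genuinely covers the topic...</p>
      <!-- Images with descriptive alt text -->
      <img src="blue-widget.jpg" alt="A blue widget in a small garden" />
    </body>
    ```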

    | Nigel_Carr
    0

  • Hi Andrew, this sounds like it could be a question of EAT (expertise, authority and trustworthiness), something that Google places a very high value on. In a nutshell, Google wants to rank domains that are proven to be experts in their field, and it places much higher value on EAT than it does on other classic SEO tactics like keyword placement.

    Here's a hypothetical: if you, a large media site that focuses on a variety of topics, published an article about how a vegetarian diet is beneficial for your health that was written by a journalist and not a doctor, it would have a relatively low level of EAT. However, if my smaller niche site, which only focuses on health-related issues and publishes content from doctors, nurses, trainers, etc., republished that article, it would have a higher level of EAT.

    There are several ways to improve your EAT. If you have qualified people writing content about their areas of expertise, you should let the world know: tweet about them, include their names in articles, and give them an "about us" or "about the author" write-up on your site. You can read your fill about EAT in the Search Quality Evaluator Guidelines if you need more details: https://static.googleusercontent.com/media/www.google.co.uk/en/uk/insidesearch/howsearchworks/assets/searchqualityevaluatorguidelines.pdf
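
    As a concrete illustration, a byline plus an "about the author" block might look like this (a sketch; the names and class names are hypothetical, not something the guidelines prescribe):

    ```html
    <article>
      <h1>Is a Vegetarian Diet Good for Your Health?</h1>
      <!-- Hypothetical byline surfacing the author's credentials -->
      <p class="byline">By Dr. Jane Smith, MD, Registered Dietitian</p>
      <p>...article content...</p>

      <!-- "About the author" write-up at the end of the article -->
      <section class="about-the-author">
        <h2>About the Author</h2>
        <p>Dr. Jane Smith is a practising physician with 12 years of
           experience in clinical nutrition.</p>
      </section>
    </article>
    ```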

    | Jason-Reid
    0

  • Hi Surge, I created the image with a pro version of Screaming Frog SEO Spider (https://www.screamingfrog.co.uk/seo-spider/); it's a local tool that you have to configure on a server for bigger sites. However, given the size of your site, I prefer to use https://www.deepcrawl.com , https://oncrawl.com , and https://botafy.com . I am currently 50% done with DeepCrawl, which will provide more details than Screaming Frog can. I can send you some very large private files; is there a way to do that? Or do you want them posted here? Sincerely, Tom

    | BlueprintMarketing
    1

  • Dan, do you recommend using AMP for the 'in depth' article spots?

    | ericnsims
    2

  • If you had two different source codes served via user-agent (web user vs Googlebot), then you'd be more at risk of this. I can't categorically state that there is no risk in what you are doing, as Google operates multiple mathematical algorithms to determine when 'cloaked' content is being used, and guess what? Sometimes they go wrong.

    That being said, I don't believe your risk of garnering a penalty is particularly high with this type of thing. These are the guidelines: https://support.google.com/webmasters/answer/66355?hl=en

    You're in a really gray area because you aren't serving different URLs, but you 'could' be serving different content (albeit only slightly). I say 'could' rather than 'are' because it entirely depends on whether Google (on any particular crawl) decides to enable rendered crawling or not.

    If Google uses rendered crawling and takes the content from its headless-browser page render (which it can do, but doesn't always choose to, as it's a more intensive crawling technique), then your content is actually the same for users and search engines. If, however, it just does a base-source scrape (which it also does frequently) and takes the content from the source code (which doesn't contain the visual cut-off), then you are serving different content to users and search engines.

    Because you've got right down into a granular area where the rules may or may not apply conditionally, I wouldn't think the risk is very high. If you ever get any problems, your main roadblock will be explaining the detail of the problem on Google's Webmaster Forums. Support can be very hit and miss.
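
    For context, the kind of setup under discussion looks something like this (a minimal sketch; the element ID, cut-off length, and truncation logic are hypothetical):

    ```html
    <!-- The full text is present in the HTML source, so a base-source
         scrape sees all of it... -->
    <div id="desc">
      The complete description, every word of it, in the source code.
    </div>

    <script>
      // ...but a script visually truncates it on load, so a rendered
      // crawl (and a human visitor) sees the shortened version instead.
      var el = document.getElementById('desc');
      el.textContent = el.textContent.trim().slice(0, 40) + '...';
    </script>
    ```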

    | effectdigital
    0

  • Hi Vijay23, why would you say that you have trouble indexing backlinks? Are you able to see which backlinks Google is seeing? As far as my knowledge goes, there is no method for quickly telling Google that you have a new backlink. Remember that Google takes its time to analyze every website and its links. My recommendation is to wait at least 6-8 weeks after the backlink is placed before expecting an impact. Also keep in mind that it's really difficult to isolate the impact of just one backlink. Hope it helps. Best of luck. GR

    | GastonRiera
    0

  • Yes, analyze the links pointing to that domain and verify there aren't many spam links. Also, a link reclamation campaign will most likely be needed for brand mentions, which can be very time-consuming.

    | WebMarkets
    0

  • This can involve many factors:

    - Is the new website on a dedicated IP and SSL?
    - Did all of the URLs stay the same?
    - Did any new link farms or spam links get added?
    - Was all of the content transferred?
    - Is all of the metadata and internal linking the same as well?
    - Did the client happen to change addresses?

    | WebMarkets
    0

  • Aha, I see! That makes some sense. If the products are 'branded' and therefore the name never changes in any language, you have two options.

    Let's imagine you are selling a branded air conditioning unit with the made-up name of GreenAir (maybe it's more economical and uses less electricity, hence the name, borrowed from the 'green movement').

    You could just leave it duplicate:

    EN: GreenAir | GreenWave Solutions
    FR: GreenAir | GreenWave Solutions

    Or you could add more contextual info, which would be better:

    EN: GreenAir Environmental Air Conditioning Unit | GreenWave
    FR: GreenAir Unité de Climatisation Environnementale | GreenWave

    I know, I know, my French sucks (actually that's from Google Translate). But still, you can see that you could add more in there. The hurdle for you will be: what is required in terms of cost to deploy at that level of complexity? From a straight-up SEO POV, I stand by my preference. But once mass translation work is factored in, plus targeted, dev-based implementation... you may feel otherwise!
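
    In the page templates themselves, the second option just means localising the descriptive part of each title tag (a trivial sketch; the markup placement is assumed):

    ```html
    <!-- EN page template -->
    <title>GreenAir Environmental Air Conditioning Unit | GreenWave</title>

    <!-- FR page template -->
    <title>GreenAir Unité de Climatisation Environnementale | GreenWave</title>
    ```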

    | effectdigital
    0

  • Hi Robert. I will get the code checked and most probably set that redirect rule indeed. Many thanks for the advice!

    | GhillC
    0

  • That almost looks like... your client doesn't actually have WordPress installed on their sub-domain at all. It looks like they set up a 'something.wordpress.com' site, which WordPress actually hosts, and somehow overlaid their own sub-domain onto it (using DNS / name-server shenanigans). If that is true, then, since WordPress hosts the blog, there's not much you can do.

    If it is a local WordPress install that does exist on your client's actual website, instead of being 'framed' in (or something shady like that), then I haven't seen this error before and it seems really odd. It smacks of someone trying to cut corners with their hosting environment, trying to 'be clever' instead of shelling out for a proper WP install. Clearly there are limitations...

    OK, there's only one other alternative really. This is also technical, though, and I don't know if it would be any easier for your dev guys, but: you can send noindex directives to Google without altering the site / web-page code, as long as you are willing to play around with the (server-level) HTTP headers.

    There's something called X-Robots which might be useful to you. You need to read this post here (from Google). Start reading from (Ctrl+F for): "Using the X-Robots-Tag HTTP header". As far as I know, most meta-robots indexation directives can also be fired through the HTTP header using X-Robots.

    It's kinda crazy, but it might be your only option.
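
    For illustration, the X-Robots-Tag approach looks something like this (a minimal sketch, assuming an Apache server with mod_headers enabled; the file pattern is hypothetical):

    ```apache
    # Send a noindex directive in the HTTP response header,
    # with no changes to the page's HTML required
    <Files "private-page.html">
      Header set X-Robots-Tag "noindex, nofollow"
    </Files>
    ```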

    | effectdigital
    0

  • In general, I don't think that this is a great idea. Although Google does meter out crawl allowance, Google also wants a realistic view of the pages it is crawling. Your attempt at easing the burden on Google's crawl bots may be seen as an attempt to 'fake' good page-speed metrics, for example (by letting Google load the web page much faster than end users do). This could cause some issues with your rankings if uncovered by a 'dumb' algorithm (which won't factor in your good intentions).

    Your efforts may also be unnecessary. Although Google 'can' fire and crawl JavaScript-generated elements, it doesn't always do so, and it doesn't do it for everyone. If you read my (main) response to this question, you'll get a much better idea of what I'm talking about here. As such, the majority of the time you may be taking on 'potential' risk for no reward.

    Would it be possible to code things slightly differently? Currently you state that this is your approach: "This means that we are actively adding javascript code which will load the Intercom javascript on each page, and render the button afterwards." Could you not add the button through HTML / CSS, and bind a smaller script to the button which then loads the "Intercom javascript"? I am assuming here that the "Intercom javascript" is the large script which is slowing the page(s) down. Why not load that script only on request (seems logical, but I admit I am no dev, sorry)? It just seems as though more things are being initiated and loaded up-front than are really required.

    Google wants to know which technologies are deployed on your page if it chooses to look, and it also doesn't want people faking higher page-speed loading scores. If you really want to stop Google wasting time on that script, your basic options would be:

    - Code the site to refuse to serve the script to the "googlebot" user agent
    - Block the script in robots.txt so that it is never crawled (directive only)

    The first option is a little thermonuclear and may mean you get accused of cloaking (unlikely), or at the least of 'faking' higher page-speed scores (more likely). The second option is only a directive, which Google can disregard, so the risks are lower. The downside is that Google will pick up on the blocked resource and may not elevate your page-loading speed. Even if it does, it may say: "since we can't view this script or know what it does, we don't know what the implication for end users is, so we'll dampen the rankings a little as a risk-assessment factor."

    Myself, I would look for an implementation that doesn't slow the site down so much (for users or search bots). I get that it may be tricky; obviously, re-coding the JS from Intercom would probably break the chat entirely. Maybe, though, you could think about when that script has to be loaded. Is it really needed on page load, all the time, for everyone? Or do people only need that functionality when they choose to interact? How can you slot the loading of the code into that narrow trench and get the best of both worlds? Sorry it's not a super simple answer; hope it helps.
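
    To illustrate the 'load on request' idea (a minimal sketch; the button ID and widget URL are hypothetical, and Intercom's real embed snippet will differ):

    ```html
    <!-- Render the chat button with plain HTML/CSS; no heavy script yet -->
    <button id="chat-launcher">Chat with us</button>

    <script>
      // Only fetch the heavy chat widget when someone actually asks for it
      document.getElementById('chat-launcher').addEventListener('click', function () {
        var s = document.createElement('script');
        s.src = 'https://widget.example.com/chat.js'; // hypothetical URL
        s.async = true;
        document.head.appendChild(s);
      }, { once: true });
    </script>
    ```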

    | effectdigital
    0