Is my content being fully read by Google?
-
Hi mozzers,
I wanted to ask you a quick question regarding Google's crawlability of webpages. We just launched a series of content pieces but I believe there's an issue.
Based on what I am seeing when I inspect the URL, it looks like Google is only able to see a few titles and internal links. For instance, when I inspect one of the URLs in GSC, this is the screenshot I am seeing. When I perform a "cache:" search I barely see any content:
versus one of our blog posts.
- Would you agree with me there's a problem here?
- Is this related to the heavy use of JS? If so, why wasn't I able to detect this with any of the crawling tools?
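One quick way to test the JS question yourself is to compare what the browser shows against the raw HTML the server returns, before any JavaScript runs. A minimal sketch is below; the URL and phrase are placeholders, not your actual pages, so substitute text you can see in the rendered page:

```python
# If a phrase that is visible in the browser is missing from the raw HTML,
# the content is most likely injected client-side by JavaScript, which
# Google must render in a second pass before it can index it.
from urllib.request import Request, urlopen

def contains_phrase(html: str, phrase: str) -> bool:
    # Case-insensitive substring check against the un-rendered markup.
    return phrase.lower() in html.lower()

def text_in_raw_html(url: str, phrase: str) -> bool:
    # Fetch the raw HTML without executing any JavaScript.
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    html = urlopen(req, timeout=10).read().decode("utf-8", errors="replace")
    return contains_phrase(html, phrase)

if __name__ == "__main__":
    url = "https://www.example.com/content-piece"   # placeholder URL
    phrase = "a sentence from the article body"     # placeholder phrase
    if text_in_raw_html(url, phrase):
        print("found in raw HTML: content is server-rendered")
    else:
        print("missing from raw HTML: likely rendered by JS")
```

If the phrase is missing from the raw source but visible on the page, that matches the symptom you describe in the cache view.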
Thanks!
-
If it's only been one day, I wouldn't worry about it. If the problem is still there after a week, let us know.
-
Hi Jeroen,
It's been over two weeks since we published those pages.
-
When you put the full URL in search and hit the cache button, what does it show? A 1:1 copy of the current page, or an outdated version?
-
I see the most recent version of the page, but only some headings, breadcrumb links, and some images; none of the body content is there.
-
Is your page valid per W3C standards? https://validator.w3.org/
If it's a spaghetti of errors, Googlebot might have trouble with it.
-
There are just six errors, but one of them is fatal. See screenshot.
-
A few errors is well within the margin of what Google normally encounters. I've seen completely broken pages that wouldn't even render properly yet still ranked. I'm not sure; I was just guessing, but perhaps you need to give it a bit of time. I don't really know what's going on; new domains tend to be dampened for a while so Google can keep spammy domains out.
-
Thanks for your help!
I also did a site: search combined with exact quoted snippets of content from several different pages, and our pages are showing up in the SERP, so maybe there's nothing wrong since that content is indexed. I guess we just need to wait, as you said.