Sufficient Words in Content error, despite having more than 300 words
-
My client has just moved to a new website, and I'm receiving the "Sufficient Words in Content" error on all of the site's pages, even though those pages contain far more than 300 words. For example:
- https://www.assuta.co.il/category/assuta_sperm_bank/
- https://www.assuta.co.il/category/international_bank_sperm_donor/
I also see warnings for "Exact Keyword Used in Document at Least Once", although the exact keywords do appear on the pages.
The question is: why can't the Moz crawler see the pages' content?
-
Hey Michal!
I'm really sorry you're running into this issue! Unfortunately, it looks to be cropping up because of the non-Latin characters used on the page. Our crawler has a very difficult time interpreting non-UTF-8 characters, and it often reports counts and matches poorly on pages composed of them. I'm terribly sorry for the inconvenience! It's definitely something we're looking to address down the road, but I'm afraid we don't have the resources to improve that functionality at the moment.
-
But the page is UTF-8. Take a look at the source code: view-source:https://www.assuta.co.il/category/assuta_sperm_bank/
Moz did a pretty good job with the old website, before it was changed, but now all my grades are 50, even though the site is UTF-8 (on the .NET Framework, if that matters).
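For what it's worth, here's a minimal Python sketch of how one can confirm a response body really is valid UTF-8. The markup below is made-up sample bytes standing in for the real page; it is not taken from the site itself:

```python
# Hypothetical sample bytes standing in for the page's response body.
html = '<meta charset="utf-8"><p>בנק הזרע</p>'.encode("utf-8")

# Decoding succeeds only if the bytes are valid UTF-8; a page served in a
# legacy encoding such as windows-1255 would instead raise
# UnicodeDecodeError here or come out as mojibake.
text = html.decode("utf-8")
print("בנק" in text)  # True
```

The Hebrew text on our pages survives this round-trip cleanly, which is what I'd expect from a valid UTF-8 page.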
Can you re-check that, please?
Michal
-
Well, shoot. Apparently I'm not quite as familiar with UTF-8 characters and that distinction as I thought. Totally my mistake.
The line for our tools falls closer to English character sets. Even seemingly small modifiers like accents can throw off our ability to accurately count words or detect matches on a page. So when it comes to Hebrew characters, we simply don't have a good way to handle them.
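As a rough illustration of the failure mode (this is not Moz's actual code, just a sketch with made-up Hebrew sample text): a tokenizer that only recognizes ASCII letters finds zero "words" in a Hebrew sentence, while a Unicode-aware pattern counts them correctly:

```python
import re

# Made-up Hebrew sample text (8 words).
text = "בנק הזרע של אסותא מציע שירותי תרומת זרע"

# A naive tokenizer that only recognizes ASCII letters sees no words at all:
ascii_words = re.findall(r"[A-Za-z]+", text)

# Python's \w is Unicode-aware by default, so Hebrew letters count as
# word characters and the eight words are found:
unicode_words = re.findall(r"\w+", text)

print(len(ascii_words), len(unicode_words))  # 0 8
```

A crawler built around the first pattern would report the page as having too few words, which matches the behavior Michal is seeing.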