Angela - have you had a chance to check your settings and/or header? Any luck?
Posts made by KristinaKledzik
-
RE: Xml sitemaps giving 404 errors
-
RE: Multiple Instances of the Same Article
Hmm, interestingly, when I followed your link, I only saw the canonical version of the article. Is this what you're seeing now?
Also, in response to your earlier question, yes, you can disallow parameters with robots.txt. If these canonical issues continue, that may be the best next step.
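A minimal sketch of what that robots.txt rule could look like (the parameter name `sessionid` here is just a placeholder; swap in whichever parameter is creating the duplicate URLs):

```
User-agent: *
Disallow: /*?sessionid=
```

The `*` wildcard blocks any URL path followed by that query parameter, so crawlers skip the parameterized duplicates while the canonical URLs stay crawlable.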
-
RE: Mass Removal Request from Google Index
Hi Ioannis,
What about the first suggestion? Can you create a page linking to all of the pages that you'd like to remove, then have Google crawl that page?
Best,
Kristina
-
RE: Mass Removal Request from Google Index
Hi Ioannis,
You're in quite a bind here, without a good URL structure! I don't think there's any one perfect option, but I think all of these will work:
- Create a page on your site that links to every article you would like to delete, keeping those articles 404/410ed. Then, use the Fetch as Google tool, and ask Google to crawl the page plus all of its links. This will get Google to quickly crawl all of those pages, see that they're gone, and remove them from their index. Keep in mind that if you just use a 404, Google may keep the page around for a bit to make sure you didn't just mess up. As Eric said, a 410 is more of a sure thing.
- Create an XML sitemap of those deleted articles, and have Google crawl it. Yes, this will create errors in GSC, but errors in GSC mean that they're concerned you've made a mistake, not that they're necessarily penalizing you. Just mark those guys as fixed and take the sitemap down once Google's crawled it.
- 410 these pages, remove all internal links to them (use a tool like Screaming Frog to make sure you didn't miss any links!), and remove them from your sitemap. That'll distance you from that old, crappy content, and Google will slowly realize that it's been removed as it checks in on its old pages. This is probably the least satisfying option, but it's an option that'll get the job done eventually.
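For the sitemap option, a bare-bones XML sitemap of the removed URLs might look like this (the URLs are placeholders):

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://www.example.com/old-article-1</loc></url>
  <url><loc>http://www.example.com/old-article-2</loc></url>
</urlset>
```

Submit it in GSC, wait for Google to crawl it and register the 410s, then take it down.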
Hope this helps! Let us know what you decide to do.
Best,
Kristina
-
RE: JavaScript encoded links on an AngularJS framework...bad idea for Google?
Hm, I'd be a little concerned if GSC can see it. Maybe GSC can see that JS turns it into a link, but can't figure out what that link is?
Anyway, sounds like your hands are kind of tied until you can get those nofollows! Definitely make a note in your analytics platform when you get them implemented - it'll be interesting to see what effect they have on your rankings.
Good luck!
Kristina
-
RE: JavaScript encoded links on an AngularJS framework...bad idea for Google?
Hi Kavit,
The short answer is no. Google can render some JS - possibly even AngularJS - so never assume that something rendered in JS is invisible to Google. You should assume that Google can see all links visitors can, and really push for a nofollow tag.
I usually check what Google can render by loading Google's cache of the page (go to Google.com and type in "cache:" in front of the exact URL of one of your pages). Look at the text-only version of the cache, and see if Google puts a link there. If they do, it's safe to assume that they can see that link. Another option is to use GSC to Fetch as Google; Google claims this is exactly what they're seeing.
If both the cache and GSC show that Google can't see a link, Google's probably not crawling it. But, Google's always getting better, and could suddenly see the links any day now. If these links are really a concern to you, I'd strongly suggest that you push your dev team to add nofollow tags to these outgoing links.
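For reference, the nofollow goes on each outgoing link itself, something like this (URL and anchor text are placeholders):

```
<a href="http://www.example.com/external-page" rel="nofollow">Anchor text</a>
```

If your dev team is rendering these links with JS, the `rel="nofollow"` attribute just needs to end up on the `<a>` element in the rendered output.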
Best,
Kristina
-
RE: Any problem with launching a redesigned site early without a few product categories?
Sounds like you've got a solid plan! Make sure to let your boss and coworkers know that you will almost definitely see a temporary decrease in organic traffic through this process. It doesn't mean that you've broken anything, just that Google needs to adjust.
Good luck!
-
RE: Any problem with launching a redesigned site early without a few product categories?
I'd recommend somehow keeping the current pages and continuing to link to them, or quickly creating crappy new versions of the pages on your site, just as placeholders. They can even have the proper H1, but the rest can be "coming soon."
In general, if you want to remove content, 404s are fine; it's only a problem if you want to bring that content back later. It'll essentially be new content if you 404ed it, and you'll lose all of that good SEO mojo you'd built for that content over time.
If you're reducing the site down by that many pages but don't want to lose rankings, here's what I'd do:
- Identify all pages with a significant amount of organic traffic and/or inbound links.
- 301 redirect each of those pages to a page on your new site with similar content.
This can be pretty time consuming, but it'll save you a lot of organic traffic!
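If the site runs on Apache, those 301s could be sketched in an .htaccess file like this (the paths are hypothetical; map each old URL to its closest new equivalent):

```
Redirect 301 /old-category/old-product /new-category/new-product
Redirect 301 /old-buying-guide /resources/new-buying-guide
```

Each rule permanently redirects the old path, passing most of its link equity to the new page.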
Good luck,
Kristina
-
RE: Hundreds of 404 errors are showing up for pages that never existed
There have been a few people at Moz with similar problems with GSC. People always throw a few ideas around: maybe Google is creating URLs to try to find pages that it can't find through crawling links alone? Maybe another site was trying to hack your site by creating URLs they hoped would trigger certain content on your site? (A laughable idea now, but I remember my college professor showing us a site that put cost parameters in the URL during checkout.)
However they got there, though, Eric and Chris gave you some good ways to make sure that you're not still in trouble (if you ever were).
Hope this helps!
-
RE: Different URL structure Desktop VS Mobile Regarding SEO when building a new separate mobile site
You can definitely build your new mobile site with a different structure than your old OScommerce site; just make sure that your desktop site has an alternate tag pointing to your new site. I'd also recommend adding canonicals to your old mobile URLs pointing to the new versions; that'll let you keep the old mobile site alive while keeping it out of Google's index.
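As a sketch (URLs are placeholders), the two tags would look like this:

```
<!-- On the desktop page -->
<link rel="alternate" media="only screen and (max-width: 640px)" href="http://m.example.com/new-page" />

<!-- On the old mobile URL -->
<link rel="canonical" href="http://m.example.com/new-page" />
```

The alternate tag tells Google where the mobile equivalent lives, and the canonical consolidates the old mobile URL into its new version.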
Hope this helps!
Kristina
-
RE: How to check if the page is indexable for SEs?
I understand the difference between what you're doing and what Google shows; I guess I'm just not sure when I'd want to know that something could technically be indexed, but isn't?
I guess I'm not your target market!
Good luck with your tool.
-
RE: How to check if the page is indexable for SEs?
Ah, gotcha. Personally, I use Google itself to find out if something is indexable: if it's my own site, I can use Fetch as Google, and the robots.txt tester; if it's another site, you can search for "site:[URL]" to see if Google's indexed it.
I think this tool could be really good if you keep it as an icon and it glows or something if you've accidentally deindexed the page? Then it's helping you proactively.

Hope this helps!
Kristina
-
RE: How to check if the page is indexable for SEs?
You're probably already doing this, but make sure that all of your tests are using the Googlebot user agent! That could cause different results, especially with the robots.txt check.
A sense check: what is your plugin going to offer over Google Search Console's Fetch as Google and robots.txt Tester?
-
RE: E-Commerce Mobile Pagination Dilemma
Hi Sarah,
First of all, Martijn is right: none of your solutions counts as cloaking. Google understands the complications of pagination.
Your solution is probably the simplest, as long as that extra content doesn't slow page load too much. You can also paginate in a way that's even between mobile and desktop, so multiple mobile pages can canonical to the same desktop page. For example, if the desktop version of review pages lists 10 reviews per page, and the mobile version lists 5, then mobile pages 1 and 2 would refer to desktop page 1; mobile pages 3 and 4 would refer to desktop page 2; etc.
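The page mapping above can be sketched as a quick calculation (a hypothetical helper, assuming 10 reviews per desktop page and 5 per mobile page, as in the example):

```python
def desktop_page_for(mobile_page, reviews_per_desktop=10, reviews_per_mobile=5):
    """Return the desktop review page a mobile review page should canonical to."""
    # First review shown on this mobile page (1-indexed)
    first_review = (mobile_page - 1) * reviews_per_mobile + 1
    # Desktop page containing that review
    return (first_review - 1) // reviews_per_desktop + 1

# Mobile pages 1 and 2 map to desktop page 1; pages 3 and 4 to desktop page 2.
print([desktop_page_for(p) for p in (1, 2, 3, 4)])  # [1, 1, 2, 2]
```

The same arithmetic works for any even ratio of desktop to mobile page sizes; you'd use the result to set each mobile page's canonical URL.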
Either way, you're on the right track!
Kristina
-
RE: Structured Data on mobile and desktop version of a page
Hi Jochen,
SUPER interesting find, thanks for pointing this out!
To me, this looks like Google understands that these two pages are the same page, except for different devices, and is using information on the desktop page to make their search results more robust for mobile.
You can see the connection by looking up Google's cache of your mobile page. The best way to do this is to search in Google for "cache:[URL]". If you search for "cache:http://m.avogel.ch/de/ihre-gesundheit/erkaeltung/alles_ueber_erkaeltungen.php", Google will send you to the desktop version of the page.
Here's my theory: Google has one index for both desktop and smartphone users, so it combines data and gives the user the best result possible. Google's doing more and more to try to improve its search results even without SEO intervention, so I'm not too surprised about this, but can't seem to find this in any SEO articles out there.
In answer to your question: I recommend that you continue to keep your mobile and desktop sites similar enough that Google is pulling from both. In the past, some SEOs would build sites differently for mobile users, but I've never seen any UX studies that show that's a better approach. Given that Google strongly recommends that you use responsive web design, it's certainly not Google's recommended approach.
I hope this helps! I'm not sure if you posted because you were worried about something, but this seems like good news to me!
Kristina
-
RE: Have an eBook. What is best practice for SEO?
Hi Laura,
It sounds like your ebook is assisting in the SEO of your website, since individual chapters are ranking. You can see how much of a page (or PDF) Google can read by searching for cache:[URL]. Here's Google's cache of chapter 8, which you shared.
You're on the right track, though: turning these pages into HTML will make them easier for Google to crawl, and you'll probably get more traffic out of it. Here's one way you could handle this:
- Keep http://re-timer.com/the-product/how-to-sleep-better/ as it is to encourage sign up
- Create a page for each chapter of the book, with the same content. Make sure to canonical the PDF chapters to their HTML counterparts.
- Link to those chapters somewhere else on the site, so it doesn't discourage people who land on the PDF download page.
That way, you get the benefits of the content individually, but keep the landing page.
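One note on the canonicals: since a PDF can't carry a `<link>` tag in its markup, the canonical for each PDF chapter would need to go in an HTTP header. On Apache, that might look like this (the filename and URL are hypothetical):

```
<Files "chapter-8.pdf">
  Header add Link "<http://re-timer.com/how-to-sleep-better/chapter-8/>; rel=\"canonical\""
</Files>
```

Google accepts rel="canonical" in HTTP headers for non-HTML files, so this consolidates the PDF's ranking signals into the HTML chapter page.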

Hope this helps!
Kristina
-
RE: Is healthygallbladder.com a spammy site?
Good move! This recently happened to my site as well: someone created an account and then pointed thousands of links from over 100 domains at that page. Best to disavow.
-
RE: Added sub-folder to GWT no data?
GSC is very finicky. I'd make sure:
- You're using the www subdomain, if that's how traffic typically gets to your site
- Google has domain.com.au/us/ indexed (check by searching for site:domain.com.au/us/ to see all of the pages Google has in its index)
- This subfolder gets traffic (check for organic traffic in Google Analytics)
Hope this helps!
Kristina
-
RE: Added sub-folder to GWT no data?
Peter brings up a good check: are domain.com.au/us/ pages in Google's index? If they're blocked, you won't get anything.