Posts made by R0bin_L0rd
-
RE: Where are these "phantom visitors" and are they dangerous?
Yep, I agree with Martijn Scheijbeler. I'm marking this question as "Answered" but not closing it, so you can continue the discussion if needed.
-
RE: Does a subdomain hurt/help a domain?
Hi there, in a similar vein to Kevin's comment - Google has said that subdomain vs. subdirectory isn't really a distinction it draws, and that whether it considers the different parts to be the same website has more to do with how much linking there is between them. That said, we don't have a good idea of what the threshold is or what historic data Google uses to make that call.
I would say this comes down to how people are using the subdomain. Do researchers land on it from Google, or is it something they only use once they're already on your site? If enough researchers are coming to this subdomain from organic search, it's probably worth fixing it up. Entirely aside from the question of whether the subdomain is dragging your site down, this could be a way to really strengthen your site: taking advantage of a very popular resource to boost Google's respect for your domain and help you rank for other things (not to mention giving your users what they want).
If, on the other hand, users don't land on this subdomain from organic search (you can check in Google Analytics by looking at the source and medium of sessions landing on those pages), it could be worth noindexing the subdomain. If, as you say, it's riddled with errors, you don't want Google to see it or waste time on it, and users don't get there from search, then noindexing is a way of saying "we don't want you to pay attention to this".
We can't know for sure how Google will treat the main domain based on this subdomain but we can plan a strategy based on use, which is often what Google tries to push webmasters towards.
-
RE: My website is struggling to receive traffic I think I have a serious error
Hmm, I don't think that addresses what we're trying to change. I think you'll need to discuss with them the status code they're using as part of the redirect - unless I've missed something, nothing above specifies the status code.
-
RE: My website is struggling to receive traffic I think I have a serious error
If you take the URL examples you gave above but replace https with http, you should hit a 302 (Ayima's Redirect Path plugin for Google Chrome will show you the chain).
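If you want to check a batch of pages yourself, here's a quick sketch (Python with the requests library; the URL is a placeholder to swap for your own pages):

```python
import requests

# Fetch the http:// version of a page (placeholder URL) and print each
# hop in the redirect chain with its status code. A 302 on the first
# hop is the issue described above.
response = requests.get("http://www.example.com/some-page", allow_redirects=True)
for hop in response.history:
    print(hop.status_code, hop.url)
print(response.status_code, response.url)
```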
Out of interest - how did the SEO agency make your site crash?
-
RE: My website is struggling to receive traffic I think I have a serious error
Wow, this is pretty thorough, good effort!
Chantelle, you mentioned that you did an http -> https migration. Looking at some of the pages over http, they seem to redirect to https using a 302. I would use a 301 instead: a 302 means "this move is temporary", so Google treats it as a weaker signal. Essentially, it doesn't tell Google clearly enough that the new pages are replacing the old ones. Google seems to have some http pages still indexed, though I don't know whether it simply hasn't picked up all the redirects yet. Once you have all the 301s sorted, I would submit an http sitemap to Google to prompt it to recrawl the old http pages and realise they are redirected.
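How you change the status code depends entirely on your setup, so treat this as a hedged sketch rather than a drop-in fix - on an Apache server with mod_rewrite, a permanent http -> https redirect might look something like this (nginx, IIS and most CMSs have their own equivalents):

```apache
RewriteEngine On
# If the request didn't come in over HTTPS, send it to the https://
# version of the same URL. The R=301 flag is what makes it permanent;
# without it, RewriteRule issues a 302 by default.
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [L,R=301]
```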
Chantelle, when did the drop happen? And how sharply defined was it? From your initial message it sounds like you went from a lot of traffic to not much at all in the space of days, but was it more gradual than that? Was it before the https migration? When did the agency make the changes?
The rogue canonicals effectdigital found sound like they might be confusing things; that could line up with the coverage issues you saw, Chantelle.
effectdigital - those links you found: does it look like they've been there, in that state, for a while, or were they recently added?
-
RE: My website is struggling to receive traffic I think I have a serious error
Hi Chantelle, this sounds like it's been quite a concerning situation. There's a lot of information here, and it looks like Effect Digital may have given some responses and followed up over email.
This question is currently marked as "unanswered", and I don't want to mark it as answered unless you actually have an answer. Would you be able to share an anonymised solution here to benefit other Moz users? If you haven't got a solution, post the latest development and we'll see what else we can do.

-
RE: What do you do with product pages that are no longer used ? Delete/redirect to category/404 etc
No worries, glad to help. Good luck!
-
RE: Can you rank for copyrighted/trademarked words that became generic terms?
Hi there, interesting question!
So in terms of whether you're allowed to try to rank for brand-name/trademarked keywords, the answer is yes, absolutely. Google decides which sites it thinks are most relevant for a search, and you have no obligation to shy away from competing for it.
In terms of whether it's possible for you to rank for those keywords, that's actually related to the point above. Google decides what should rank based on the best user experience. If Google has really strong evidence that whenever someone searches a particular term they're looking for a specific brand, it'll be very hard for you to break in. However, as you've mentioned, there comes a point when a term becomes generic enough that users aren't necessarily searching for the brand; that's when pages using it as a generic term have more and more of a chance. You can check fairly quickly by just Googling the terms and seeing what comes up. For example, when I search "spinning", the fourth text result is "Boom Cycle" - so it doesn't have to be the brand called "Spinning" to rank for that term. If, on the other hand, you Google "Apple", it's pretty clear Google thinks only one topic is relevant.
If it's a term you think your users will be searching for, create some content for it. If it's a stretch to think you'll rank, create something good but not terribly time-consuming and go from there. If it looks like the only content showing up is about the brand, consider creating a post about the differences between it and what you offer, as a way to look a bit more relevant to Google.
Hope that helps
-
RE: What do you do with product pages that are no longer used ? Delete/redirect to category/404 etc
Hi Gemma, interesting question! I'd consider a few things:
- While the product pages rank well for long-tail keywords, are they driving much organic traffic or, more importantly, organic revenue from people landing on the page?
- If it's possible to reuse the pages for new products - what are the downsides?
- What would the best user experience be for out-of-stock products? How similar are the new ones to the old ones?
In terms of question 1, if these product pages are numerous enough to be a source of concern, I'd want to know whether you're getting any benefit from them being indexed. If not, removing them from the index could be a simple solution and would help avoid things like searchers landing on an out-of-stock product. E-commerce clients of mine have often found that organic conversion rates for sessions landing directly on product pages tend to be worse, because they rely on the visitor wanting pretty much exactly that product, whereas category pages can show off more of the range.
In terms of question 2, if reusing the existing product pages is an option, why have you shied away from doing that until now? If the new products are similar enough to the old ones, users coming to the page are more likely to get what they want (rather than being redirected to the category page, or hitting an out-of-stock or 404 page). Also, if each product is keyword-researched and the products are similar enough, presumably new products will be competing with old ones for similar long-tail keywords anyway?
If neither 1 nor 2 works, I'd focus on what I'd want as a user. It can be frustrating to land on a 404 page, whether from search or on the website itself, but it can also be frustrating and confusing to be redirected straight to a category page or a similar product. Maybe the user would want to see the out-of-stock page with the option of being taken to similar products? Again, for me it comes down to how far you think each of these unique products could fulfil similar criteria for the visitor.
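If you do go the redirect route for a product with a close substitute, the rule itself is simple - here's a hypothetical Apache example (the paths are made up, and other servers have their own equivalents):

```apache
# Permanently redirect a retired product URL to its nearest replacement
Redirect 301 /products/old-widget /products/new-widget
```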
Hope that helps! As you may have picked up from my response, I don't think there's one universally right answer, but there is likely a best answer for your site. Happy to discuss further.
-
RE: Nuisance visitors to non active page. What's going on?
Hi there, so just to confirm - you have a page that is redirected to your homepage (I'm assuming not with a JavaScript redirect) and your Google Analytics is reporting that page is getting traffic from 12-reasons-for-seo.com?
This sounds like Measurement Protocol spam. I'll explain why I think that, based on the way GA works, but first, to reassure you: if that's the case, all it's messing with is your analytics. Google doesn't use Google Analytics data to inform search rankings - in terms of "bounces", Google just uses data about whether a searcher on a results page clicks one result, comes back to the results page, and then clicks on another.
- Google Analytics records page views based on receiving a message from some JavaScript code that runs on (ideally) every page of your website
- If the page is redirected before it loads (i.e. a server-side redirect rather than a JavaScript one), that code won't have time to run, so Google Analytics won't record a pageview on that page, even if someone tried to access it
- Measurement Protocol is a way of manually sending hits to Google Analytics; it exists to record all kinds of things that are difficult to capture as pageviews and events on your site (e.g. someone bought a product over the phone, or viewed the same page in your app)
- Anyone can get your GA ID by looking at your page code and once they have that, they can send fake pageviews to your Google Analytics.
If it's true that all of these pageviews are landing on a page that doesn't even load before it's redirected (that's important - if it's a JavaScript redirect, your GA code might have time to run), then the best solution is probably to find what all of these hits have in common (it sounds like you've already found a couple of things) and create a Google Analytics filter on your reporting view which excludes this traffic as specifically as possible, to reduce the risk of accidentally dropping real traffic.
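To illustrate just how low the bar is for spammers, this is roughly all it takes to send a fake pageview, assuming a Universal Analytics property and Measurement Protocol v1 (every value below is made up for illustration):

```python
import requests

# Anyone who has read your GA property ID out of your page source can
# send this hit - no access to your website or GA account required.
requests.post(
    "https://www.google-analytics.com/collect",
    data={
        "v": "1",                  # Measurement Protocol version
        "tid": "UA-12345678-1",    # your (publicly visible) property ID
        "cid": "555",              # arbitrary client ID
        "t": "pageview",           # hit type
        "dp": "/redirected-page",  # any page path the spammer likes
        "dr": "http://12-reasons-for-seo.com",  # spoofed referrer
    },
)
```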
-
RE: Content update on 24hr schedule
When you say 1300 landing pages are coming online every night, that doesn't mean 1300 new pages are being created, does it? Based on the rest of your comment, I'm taking it to mean that 1300 pages which were already live and accessible to Google are being updated, with the content changing where appropriate.
In terms of the specific situation I describe above, that should be fine - there shouldn't be a problem with having a system for keeping your site up to date. However, each of the below things, if true, would be a problem:
- You are adding 1300 new pages to your site every night. This would be a huge increase for most sites, particularly if it happened every night, but as I say above, I don't think this is the case.
- You are actually scraping key information to include on your site. You mention an API, so it may be that users are submitting this content to your site for you to use, but if you are scraping descriptions from some sites and reviews from others, that is what would be viewed as spammy, and it seems like the biggest point of risk I've seen in this thread.
-
RE: Content update on 24hr schedule
Hi, I think you've already got a couple of good answers here, but just to throw in my thoughts: to me this would all come down to how much value you're getting for the volume of content you're creating.
It sounds to me like you have 1.3k product landing pages, and you're producing 80 articles a month, plus maybe you're indexing the review pages too?
I think frequency here becomes secondary to how much each of these things is adding. If you are indexing the review pages for specific products, those pages could just be diluting your site equity. Unless they're performing a valuable function, I'd consider canonicalising them to the product pages. As the others have said, having product pages that regularly update with new reviews shouldn't be a problem, but with all the content you're adding to the site, you could be relying on Google picking up these changes far more quickly than it actually does.
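For reference, canonicalising a review page to its product page is just one tag in the review page's head - the URLs here are made up:

```html
<!-- On /products/widget/reviews/ (hypothetical URL), telling Google the
     product page is the version that should be indexed: -->
<link rel="canonical" href="https://www.example.com/products/widget" />
```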
If you're adding a large number of articles every month, are those articles cannibalising other pages, or each other? The way I'd gauge whether it's too much is by whether the pages are getting traffic, whether the pages ranking for your target keywords keep flip-flopping, and whether you're starting to have issues with Google indexing all of your pages. As with the review pages: if the articles are providing value to your readers, earning you links or bringing in a decent amount of traffic, then grand; if they aren't generating much, I'd consider producing fewer, or removing/redirecting poorly performing articles after a while to preserve site equity and help focus Google's crawl.
On posting frequency, I agree with Gaston that it's about what's right for your readers. If a lot of article-worthy content comes out at the same time, I'd post about it then and there. If it's just content you're coming up with and timing doesn't matter, spreading it throughout the month makes sense in terms of staying fresh, getting the articles indexed and, honestly, not having to rush deadlines or delay releases.
-
RE: Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
Yeah, that's definitely tricky. I'm assuming you haven't taken out any load balancing that was previously in place between the desktop and m. sites, meaning your server is now struggling a lot more? The PageSpeed Insights tool can give you good info, but if possible I'd have a look at that user experience index to get an idea of how other users are experiencing the site.
Your server logs could be a next port of call. Do you have any other subdomains performing differently in Search Console?
In terms of getting Google to crawl more, unfortunately at this point my instinct would be to keep optimising the site to make it as crawl-friendly as possible and wait for Google to pick the rate back up. It does look like the original spike in time spent downloading has subsided a bit, but it's still higher than it was. Without doing the maths: given that pages crawled and kilobytes downloaded have both dropped, the slowdown may have persisted, and the drop in those graphs could simply be Google easing back. I'd keep working on making the site as efficient and consistent as possible and try to get that download-time line tracking lower as an immediate tactic.
-
RE: Crawl Stats Decline After Site Launch (Pages Crawled Per Day, KB Downloaded Per Day)
Hi there, thanks for posting!
Sounds like an interesting one. Some questions come to mind which I'd just like to run through, to make sure we're not missing anything:
- Why do you have Crawl-delay set for all user agents (see the snippet after this list)? Officially it's not something Google supports, but whatever prompted someone to add it could be related to the cause of this
- Have you changed any settings in search console? There is a slider for how often you want Google to crawl a site
- Have you had the Search Console notification that you're now on the mobile-first index?
- When you redirected the mobile site, was it all one-to-one redirects? Is there any possibility you've introduced redirect chains?
- After the redesign - are the pages now significantly bigger (in terms of amount of data needed to fully load the page)? Are there any very large assets that are now on every page?
- When you say responsive, is it resizing based on viewport? How much duplication has been added to the page? Is there a bunch of content that is there for mobile but not loaded unless viewed from mobile (and vice versa)?
- When you moved the images, were they the exact same image files, or might they now be the full-size versions?
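For clarity, this is the kind of robots.txt line I mean in the first question above (a made-up example):

```
# Hypothetical robots.txt excerpt
User-agent: *
Crawl-delay: 10
```

Googlebot officially ignores Crawl-delay, but its presence usually means someone was worried about server load at some point, which is worth understanding.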
This is just first blush so I could be off the mark, but those graphs suggest to me that Google is having to work harder to crawl your pages and, as a result, is throttling the amount of time it spends on your site. If the redesign or switch to responsive made the pages significantly "heavier" - whether that's additional JavaScript, bigger images, more content etc. - that could cause this effect. If you've got any site speed benchmarking in place, you could check whether things have changed. Google also uses page speed as a ranking factor, so that could explain the traffic drop.
The other thing to bear in mind is that combining the mobile and desktop sites was essentially a migration, particularly if you were on the mobile-first index. It may be that the traffic dip is less related to the crawl rate, but I understand why we'd make the connection there.
-
RE: International SEO - UK & US
Hi Harrison, thanks for posting - good question!
I think the common understanding is that Google doesn't necessarily treat a .co.uk domain as less relevant in the US per se, but things like the domain can hurt clickthrough from US searchers, which can end up having much the same ranking impact anyway (we're making assumptions here - it would be hard to divorce the two).
I'd usually be all for making decisions that are as future-proof as possible, and you probably wouldn't want to go through a US migration later, so I'd lean towards getting one of those neutral ".com" domains you mention, which you can use now and in the future.
That being said, you'd be starting from absolutely nothing in terms of trust signals on a new domain. So if this business venture relies on getting quicker feedback about how you'd do if you optimised for the US, I think it's also legitimate to take the subfolder approach in the short term - it's just a bit less common, and it may force you into a folder migration later if you find the domain is limiting you.
What do you think?
-
RE: Open graph tags
Hi there! Great to hear you're baking this in from the start.
Open Graph is good to include; Twitter also has its own card markup, and schema.org structured data could be a real help if you haven't already considered it. Have you read this post by Cyrus Shepard? It has a pretty comprehensive list of the different kinds of tagging and some good examples.
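As a starting point, a minimal set might look like this (all values are placeholders to swap for your own):

```html
<!-- Open Graph -->
<meta property="og:title" content="Your page title" />
<meta property="og:type" content="article" />
<meta property="og:url" content="https://www.example.com/page" />
<meta property="og:image" content="https://www.example.com/image.jpg" />
<meta property="og:description" content="A short description of the page." />

<!-- Twitter's own card markup -->
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:title" content="Your page title" />
<meta name="twitter:description" content="A short description of the page." />
<meta name="twitter:image" content="https://www.example.com/image.jpg" />
```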
Hope that helps!
-
RE: Same URL for languages sub-directories
Hi Rachel, I think István makes a good point about the translated URLs, but just as a quick follow-up to your original question: it shouldn't cause technical problems to have the page names the same while they're in different directories, because the full URL paths are still different, as long as you have hreflang properly set up.
Regarding your question about canonical tags - I would not canonicalise these language variants to other language variants, even if you do decide to keep the page names the same. Hreflang says "these two pages are different language variants of the same thing", whereas a canonical tag says "this page is just the same as this other page". The canonical tag has no language component, so it could conflict with your hreflang and cause errors with things like return tags. At the very least it could confuse Google as to which page should rank in which country: how can the /de/ page rank in Germany if we're telling Google it isn't the canonical version, and the page at the root is?
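To make that concrete, with made-up URLs: on each language variant you'd want a self-referencing canonical alongside the hreflang set, something like this on the /en/ page:

```html
<link rel="canonical" href="https://www.example.com/en/page/" />
<link rel="alternate" hreflang="en" href="https://www.example.com/en/page/" />
<link rel="alternate" hreflang="de" href="https://www.example.com/de/page/" />
<link rel="alternate" hreflang="x-default" href="https://www.example.com/en/page/" />
```

Each variant lists all the variants (including itself), and the /de/ page would carry the same hreflang set but with its canonical pointing at itself.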
Hope that helps!
-
RE: No index tag robots.txt
For the sake of balance, it's probably worth mentioning that I'm with David in that I've seen a robots.txt noindex work. It was used relatively recently by a large publisher who had to take down an article that Google was holding on to. That's irrelevant nuance in this situation, but I think David deserves more credit than he got here.
In terms of this specific fix, I agree with Nigel: remove the Disallow and add a noindex (and prompt Google to crawl the pages, with a sitemap if they don't seem to be shifting). You can re-add the Disallow afterwards if you think it's necessary, but once all of the appropriate pages carry a noindex tag they should stay out of the index, and if they're heavily linked to within the site, disallowing them could mean a loss of link equity (equity stops dead at links pointing to disallowed pages). So if you can achieve this with just a noindex, you might want to leave it at that.
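For reference, the tag itself goes in the head of each page you want dropped:

```html
<!-- Keep this page out of Google's index. Crucially, robots.txt must
     NOT disallow the page, or Google will never crawl it and see the tag. -->
<meta name="robots" content="noindex" />
```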
-
RE: Fetch as Google showing Tablet View, not Desktop View
Hi, I agree with Joseph that what you're seeing is probably due to the style breakpoints you have on that page.
**However** - while fetch and render gives us some insight into how Google might interpret a page, it is not literally what Google is "seeing". Google reads the page code and, while there is some evidence it will deprioritise content hidden with methods like display:none, I really don't think your site will suffer because your breakpoint is different from the one fetch and render uses. Changing the breakpoint could conceivably involve quite a lot of front-end dev work, or working with your site templates in ways that risk causing new issues. I understand your concern, but I honestly wouldn't put this at the top of your priority list.
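To illustrate what I mean about breakpoints (a made-up example): a rule like the one below hides an element at narrower viewports, but the markup is still right there in the HTML Google fetches - it's only hidden visually at that width.

```css
/* Hypothetical breakpoint: below 1200px, hide the desktop navigation */
@media (max-width: 1199px) {
  .desktop-nav {
    display: none;
  }
}
```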
Hope that helps!