Hi there.
It looks like you need to read this: https://moz.com/beginners-guide-to-link-building
And a couple more articles from here: http://bfy.tw/1cUh
Alright, I see what you're talking about. I tried a similar search on Russian Google - it gave me the same suggestion.
https://www.google.nl/intl/en-NL/policies/technologies/cookies/
As far as I understand, Google uses your browser locale and settings to offer a different language. So, if you say all your settings are in Dutch, then I'd look into your normal usage - do you mostly search in English? Do you mostly browse English websites, etc.?
Additionally, I've noticed in the past that even if you tell the Chrome browser not to offer to translate a page, it still does. There's even a meme about that (can't find it now).
Right. Chained redirects = bad.
However, in that same Matt Cutts video, he does say that the overall number doesn't matter, which is what I was talking about in the first part of my previous answer.
Now, let's crunch some numbers to show that the number of non-chained redirects doesn't matter.
Assume we're in a perfect world, so all manufacturer-given numbers are actually right and all "operations per second" really are operations per second.
Let's say a standard hosting server runs at 2 GHz = 2 * 10^9 computations per second.
Since all .htaccess work/computations happen strictly server-side (bots/browsers just send a request to the server, and the response tells them whether the page is redirected), the only thing that can slow down a request is server response time.
Pattern-match computations are generally considered cheap, low-power operations.
So, let's say your .htaccess has 1,000,000 redirect rules, and the server keeps them in memory to run match computations when bots make requests. At roughly one computation per rule, that's 10^6 computations per request, so the 2 GHz server would need about 2 * 10^9 / 10^6 = 2,000 requests per second to just START struggling.
So, do you have 2,000 requests per second to your website and a million redirect rules?
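Here's that same back-of-envelope as a tiny TypeScript sketch - all figures are the rough assumptions above, not real benchmarks:

```typescript
// Back-of-envelope: how many requests per second before redirect-rule
// matching alone could saturate the CPU? All figures are rough assumptions.
const clockSpeedHz = 2e9;        // assumed 2 GHz server
const opsPerRuleMatch = 1;       // assume ~1 computation per rule comparison
const redirectRules = 1_000_000; // hypothetical huge .htaccess

const opsPerRequest = redirectRules * opsPerRuleMatch;
const maxRequestsPerSecond = clockSpeedHz / opsPerRequest;

console.log(maxRequestsPerSecond); // 2000
```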
P.S. All numbers above are very rough approximations.
P.P.S. If you really want to see whether your server is (or would be) struggling, log into your web host manager, go to server status and info, and see how much of your server's power is typically in use. Usually that number sits below 6-7% about 90% of the time.
Hope this clarifies some things.
Hi.
It's not happening to me. Are you the only one it's happening to? Have you tried doing it from different machines?
Hello, my friend.
Well, when people say "don't have too many redirects", they don't mean the total count. For example, if you have old page
a.php redirected to b.php,
and old page c.php redirected to d.php,
and so on - there's no problem at all. What they do mean is not to have consecutive redirects - e.g.:
a.php redirects to b.php, which redirects to c.php, which redirects to d.php, instead of a.php redirecting straight to d.php.
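If you want to sanity-check a URL yourself, here's a rough TypeScript sketch (assumes Node 18+ for the built-in fetch; the URL is just an example) that counts how many hops a redirect takes:

```typescript
// Follow redirects manually and count the hops.
// More than one hop means a chain worth flattening.
async function countRedirectHops(url: string, maxHops = 10): Promise<number> {
  let hops = 0;
  let current = url;
  while (hops < maxHops) {
    const res = await fetch(current, { redirect: "manual" });
    const location = res.headers.get("location");
    if (res.status < 300 || res.status >= 400 || !location) break;
    current = new URL(location, current).toString(); // resolve relative Location headers
    hops++;
  }
  return hops;
}

// Example: countRedirectHops("https://example.com/a.php").then(console.log);
```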
Hope this helps.
Hi there.
According to this article about MOZ Local - https://moz.com/blog/announcing-local-reporting
Listing Reach is our indirect representation of how far the data aggregators have spread your listings across the local ecosystem, based on the number of results returned for exact-match searches of your NAP.
So, as far as I understand, it's literally just an estimate based on the number of searches. So if there was a large decrease in overall searches, or a decrease in searches for your industry, and your name didn't show up in the results, you'd see a change in "Listings Reach".
So, it's not necessarily a problem with MOZ Local. I'd just wait a couple of weeks and see if there are more jumps.
Theory states that duplicated content reduces certain keywords’ position in Google.
Wrong. Google might omit duplicate results or ban sites that abuse duplication, but it doesn't lower rankings based on the number of duplicates or anything like that. Otherwise Wikipedia, or any aggregating website like car-dealer listings, would be nowhere to be found.
It also says that a web who copy content will be penalized.
Semi-wrong. A site will be penalized only if it's being spammy and overdoing it.
Watch this video of Matt Cutts on duplicate content - https://www.youtube.com/watch?v=mQZY7EmjbMA
So, my understanding is that there's no 100% reliable way to take down scrapers, because some of them are actually "good" scrapers. Like Facebook - the biggest scraper in the world!
So, to beat them in rankings, just make sure you're an authority in your industry, have an awesome backlink profile, and have all aspects of SEO properly implemented. And yes, sometimes those penalization tools can help.
Hello, Ana.
As the Terminator said (read with the proper accent): "Old, not obsolete!" And certainly not unused at present: Google does use 301s and canonicals to understand the informational architecture of a website.
About duplicate content on hreflang pages - so, are those pages in different languages?
Yes, I'm in the US. That's why at the moment I'm like, "What are you guys talking about?!"
Hi.
Not sure what you're talking about, since my Search Console (GWT) is the same as last week.
To see queries there, go to Search Traffic -> Search Analytics.
And yes, Google Analytics does show you the same data. However, its data on queries is very inaccurate; I'd never use it.
About tools - Ahrefs has that, under Positions Explorer -> Organic Keywords.
Hi, John.
Ok, there's a Q&A video of Matt Cutts answering a question about the "originality" of content - i.e., what happens when a bigger website copies content from a smaller author's website. (Can't find the link to it; maybe other MOZers will help out here.) Matt said that yes, it's possible. So, as far as I understand, Google can reassign original attribution, especially if your website was offline for a long time.
At the same time, here is Matt Cutts' video about duplicate content as a penalizing factor - https://www.youtube.com/watch?v=mQZY7EmjbMA
According to that video, unless you're a very spammy scraper, you're going to be fine in terms of duplicates.
About the slow gain in rankings - having lots of referring domains is no guarantee of fast or good rankings. It surely helps a lot, but it's not the only thing. Have you optimized content, technical SEO, etc.? As for tools for penalties - use Google Webmaster Tools' Manual Actions section. If there's nothing there, you haven't been penalized by Google.
About recommendations - well, as I said, update/optimize content if needed and get your technical SEO in order. Since you said the rankings are growing and it's been only a month since you launched the website, you're doing pretty well. It always takes time, my friend.
Hope this helps.
Hi.
Well, changing things around doesn't always mean making them better (I'm not saying that's your case). It looks like you need to hire an SEO company/consultant, or at least have your website's SEO audited. I don't think anybody can tell you exactly what to do to improve YOUR rankings for YOUR website under YOUR circumstances.
Overall (as you've said yourself), to improve your rankings you've got to optimize content, technical SEO (redirects, payload optimization, metas, etc.), and your backlink profile. If you've done all this (and that's what I get from your question) and instead of climbing, your rankings drop, maybe the work that's been done needs to be redone...
Or there simply hasn't been enough time for Google to recrawl and reassess your website's content/structure after all those changes and any penalties.
Uhm, "drop in SEO"?
How do we measure that?
I hope you meant "drop in rankings". Well, a listing in a directory doesn't affect rankings, that's for sure, and it's surely not the most important thing in the SEO world to worry about. If you're worried about your rankings dropping because of removal from a single directory, you've got much bigger problems, my friend.
Hello, Olga.
If you want to simply remove it (or at least tell Google's bots that it's removed), use a "410 Gone" status, not a 404. For user experience, I'd redirect those subdomains to the domain's index page or, if you have time and resources, create a landing page saying something like "Sorry for the inconvenience, the content isn't here anymore - use search to find what you need", add a search bar for the main domain's content, and voila!
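If it helps, here's a minimal sketch of the 410 part, assuming a Node/Express server (the message text is just an example):

```typescript
import express from "express";

const app = express();

// Tell bots the old content is gone for good: 410, not 404.
app.use((req, res) => {
  res
    .status(410)
    .send("Sorry for the inconvenience - this content has been removed. " +
          "Use the search on our main site to find what you need.");
});

app.listen(3000);
```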
Hope this helps.
Hi there.
Quote from GWT:
Note: When looking at the links to your site in Search Console, you may want to verify both the www and the non-www version of your domain in your Search Console account. To Google, these are entirely different sites. Take a look at the data for both sites.
I'd do it for both versions, just to be safe, even if you have proper redirects from one to the other. If you don't have proper redirects - gotta fix that right meow!
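For the redirect part, here's a minimal sketch, assuming a Node/Express server in front (flip the logic if you prefer non-www):

```typescript
import express from "express";

const app = express();

// 301 any non-www request over to the www version.
app.use((req, res, next) => {
  const host = req.headers.host ?? "";
  if (!host.startsWith("www.")) {
    return res.redirect(301, `https://www.${host}${req.originalUrl}`);
  }
  next();
});

app.listen(3000);
```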
Hope this helps.
As I said on your other post, it's most likely Magento's weirdness. Make sure you understand how that CMS works and so on.
Also, see where those pages are being linked from, because that's usually how crawlers find them. When you find where they're linked from, either delete or fix the link.
Cheers!
First, I got confused, then confused, and confused again.
Did I mention I got confused?
This is a Magento site, so I know some of these things get created automatically... but what on earth is going on here?
Magento is the keyword there. Any CMS does some super crazy stuff - some more than others.
Or it could be a temporary MOZ bug. I'd recommend checking those URLs manually with "Preserve log" enabled in Chrome DevTools or something similar. That way you can tell whether it's actually happening or it's just the crawler's fault. If you don't see anything abnormal while checking those URLs manually, don't waste much time on it, especially since those pages don't seem to exist or aren't main pages of the website.
As MOZ says, it's medium priority; there are other things to worry about.
Hope this helps.
The results I'm interested in are about whether that survey has helped your website - so, impact on traffic volume, conversion rate, and bounce rate.
We don't seem to have much of an issue with detecting exit intent. I think we do that as well as anyone does (and no-one seems to do it really well)
Exactly. So, if you say you don't have much of an issue with detecting exit intent, but at the same time say that what you detect is incorrect, then how can you say you don't have an issue detecting it?
About the cookies - look into the .unload() / onunload events. Most likely it's going to be much more complicated than that, but that's where I'd start looking.
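For example, here's roughly where I'd start, using the related beforeunload event (the cookie name is made up):

```typescript
// Set a short-lived flag cookie when the visitor is about to leave,
// so the next pageview knows the exit survey was already triggered.
window.addEventListener("beforeunload", () => {
  document.cookie = "exit_survey_seen=1; max-age=3600; path=/";
});
```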
5mil+ links?! Wow!
What's their spam score? I'm surprised they are not blocked or something 
To answer your question - what does common sense tell you? The job of Google and its bots is pretty much based on common sense. So: a duplicate-content website, a ridiculous number of links, no referral traffic - all these are obvious signals to run, Forrest, run!