While the page is visible in a browser, the HTTP header returns a 404 status (you can check this using web-sniffer.org). Check your web server's configuration to see why the 404 is returned.
Dirk
Hi Becky,
You are correct - normally, if a tag is fired the visit won't be counted as a bounce (unless you set the event's nonInteraction field to true - check https://support.google.com/analytics/answer/1033068#NonInteractionEvents)
Dirk
If you check your page with external tools you'll see that the general status of the page is 200. However, several elements on the page generate 4xx errors (your logo returns a 408 error, and so does the shopping cart) - for more details you can check http://www.webpagetest.org/result/151019_29_14E6/1/details/.
Remember that the Moz bot is quite sensitive to errors - while browsers, Googlebot & Screaming Frog will tolerate errors on a page, the Moz bot stops in case of doubt.
You might want to check the 4xx errors and correct them - normally the Moz bot should be able to crawl your site once these errors are fixed. More info on 406 errors can be found here. If you have access to your log files, you can check in detail which elements cause problems when the Moz bot visits your site.
Dirk
I would leave it like this, especially if these pages generate long-tail search traffic. Having semi-duplicate pages isn't necessarily going to hurt you (check also https://blog.kissmetrics.com/myths-about-duplicate-content/). Check also this article: https://moz.com/blog/have-we-been-wrong-about-panda-all-along - and finally Google (https://support.google.com/webmasters/answer/66359?hl=en):
"Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, and you don't follow the advice listed above, we do a good job of choosing a version of the content to show in our search results."
If your site has enough pages with rich content, and these "thin" pages have value as landing pages for your visitors, don't start messing with it.
Dirk
It can be useful - it depends on what you want to know. If you don't implement either of them, the time on site won't be correct, as no time on site is calculated for bounced visits.
Personally, I prefer to know whether people scroll to the end of the page (so I can assume they have read the article) rather than firing an event after an arbitrary time. In both cases the time measurement on your site becomes more accurate, and both methods will reduce the bounce rate.
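As an illustration, the scroll-to-end check could look like this (the function name and shape are my own, not from any particular library):

```javascript
// Illustrative helper: has the reader scrolled to the bottom of the page?
// That is the moment a "read to the end" event would be fired.
function reachedEnd(scrollTop, viewportHeight, documentHeight) {
  return scrollTop + viewportHeight >= documentHeight;
}

console.log(reachedEnd(0, 800, 3000));    // false - page just opened
console.log(reachedEnd(2200, 800, 3000)); // true  - bottom reached, fire the event
```

On a real page you would call this from a (throttled) scroll listener with window.pageYOffset, window.innerHeight and the document height, and fire the event only once.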
I think it's certainly useful for e-commerce - but then I would rather use enhanced e-commerce tracking.
I don't really understand what you mean by "I thought that if you took into account the time spent on page, and set these parameters in analytics, that it wouldn't in fact be counted as a bounce?" - could you explain?
Dirk
To quote Google: "Duplicate content on a site is not grounds for action on that site unless it appears that the intent of the duplicate content is to be deceptive and manipulate search engine results. If your site suffers from duplicate content issues, and you don't follow the advice listed above, we do a good job of choosing a version of the content to show in our search results."
Having a backlink from the copied page to your site can help Google determine that your page is the original source - however, it's no guarantee (if his site has a better link profile, Google could well prefer the copied site). It would be better if the copying site put a canonical URL pointing to your page.
Dirk
Time on page has the same issue. Suppose somebody visits your site, spends 10 minutes reading an article and then goes to another site: it will be counted as a bounced visit - but even worse, the 10 minutes spent on your site will not be measured in Analytics (check http://cutroni.com/blog/2012/02/29/understanding-google-analytics-time-calculations/)
This is one of the advantages of advanced content tracking - it gives a better measurement of what people are actually doing on your site. For me the lower bounce rate isn't the big win - the more accurate time on site and the ability to check interaction (do they scroll to the end?) are what bring the benefit.
If you don't want to use Tag Manager, you can also do this with the normal tracking code: http://cutroni.com/blog/2014/02/12/advanced-content-tracking-with-universal-analytics/ (Cutroni is the Analytics Advocate at Google)
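The time calculation described in the Cutroni article can be sketched like this (illustrative only, not the actual Analytics implementation): each page's time is the gap to the next hit, so the session's last page - and therefore every bounced visit - gets zero.

```javascript
// Sketch: Analytics derives time on page from the gap between successive
// hits; the last hit of a session has no "next" hit, so it contributes 0.
function timeOnPages(hits) {
  // hits: [{ page: '/a', t: secondsSinceSessionStart }, ...] in order
  return hits.map(function (hit, i) {
    var next = hits[i + 1];
    return { page: hit.page, seconds: next ? next.t - hit.t : 0 };
  });
}

// Ten minutes reading a single page, then leaving: measured time is 0.
console.log(timeOnPages([{ page: '/article', t: 0 }]));

// An extra interaction hit (e.g. a scroll event) captures the reading time.
console.log(timeOnPages([{ page: '/article', t: 0 }, { page: '(event)', t: 600 }]));
```

This is why firing a scroll event both lowers the bounce rate and makes time on site more accurate: it gives Analytics the second hit it needs for the subtraction.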
Dirk
hreflang tags should be reciprocal: if page A links to page B, page B must link back to page A; otherwise the annotations may not be interpreted correctly (source: http://googlewebmastercentral.blogspot.co.uk/2014/07/troubleshooting-hreflang-annotations-in.html)
Example: suppose http://www.example.com/es/myspanishpage.htm contains the Spanish content and http://www.example.com/en/myenglishpage.htm the English version. Both URLs should carry the hreflang tags:
<link rel="alternate" hreflang="es" href="http://www.example.com/es/myspanishpage.htm" />
<link rel="alternate" hreflang="en" href="http://www.example.com/en/myenglishpage.htm" />
The error you get indicates that you tagged only one version & forgot to tag the other one.
You can check here whether your implementation is correct: http://flang.dejanseo.com.au/
Hope this helps.
Dirk
It's possible that your site hasn't been crawled yet (since you changed the robots.txt). You can see in your campaign dashboard (upper right corner) when the next crawl is scheduled.
Do you see any specific error codes on your dashboard?
Dirk
Difficult to tell how long it will take before it's completely removed (Google is never very clear on timing). If it's urgent, you could use the URL removal tool: https://support.google.com/webmasters/answer/1663419?hl=en
Dirk
Hi
Well, given the troubles Moz has had building the regular index over the past months, I doubt they will add this feature any time soon.
Don't forget that DA & Trust are internal Moz metrics which try to predict how well your site will rank in the SERPs. Like all tools, they are just an estimate of the real thing.
Dirk
You can't force Moz to crawl all backlinks, or submit backlinks. While the Moz bot crawls a huge number of sites/pages (check this page for the stats), even these impressive figures represent only a fraction of the total web and of what Googlebot is crawling/indexing. So it's quite possible that some backlinks have not (yet) been discovered by Moz. Check also the FAQ on how backlinks are discovered. There was just an update, so you'll have to check again after Nov. 15th.
As each supplier uses a different method to crawl the web, it's a good idea to check multiple sources for your backlink profile (Raven Tools, Ahrefs, ...)
Dirk
In addition to Ria's answer - make them noindex/follow.
If these pages (2, 3, etc.) had any value to be included in the SERPs, you could consider using rel="next"/rel="prev", indicating that these pages belong together and should be considered as one page. The way I understand your question, noindex/follow is probably the better solution.
Dirk
No, they will not be indexed. Check http://googlewebmastercentral.blogspot.be/2007/03/using-robots-meta-tag.html:
If content values conflict, we will use the most restrictive. So, if the page has these meta tags:
<meta name="robots" content="noindex">
<meta name="robots" content="index">
We will obey the NOINDEX value.
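The "most restrictive wins" rule boils down to something like this (a hypothetical helper of my own, just to show the logic - not anything Google publishes):

```javascript
// Hypothetical helper: given the content values of all robots meta tags on
// a page, the most restrictive directive wins - noindex beats index and
// nofollow beats follow.
function resolveRobots(tags) {
  var directives = tags.join(',').toLowerCase().split(',').map(function (d) {
    return d.trim();
  });
  return {
    index: directives.indexOf('noindex') === -1,
    follow: directives.indexOf('nofollow') === -1
  };
}

// A page carrying both an "index, follow" and a "noindex" robots meta tag:
console.log(resolveRobots(['index, follow', 'noindex'])); // { index: false, follow: true }
```

So whatever other tags are present, one noindex is enough to keep the page out of the index.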
Dirk
Hi
I guess you have already checked that the page is accessible to crawlers and has sufficient internal links to be discovered by the bots (if not, try crawling the site with Screaming Frog).
If it's only one page that isn't picked up, you could try a "Fetch as Google" and "Submit to index" to force the page to be indexed.
It could be that Google considers this page less important and crawls it at a lower frequency than the other pages (too deep in the site, too few links, not enough content, ...)
Dirk
Hi John,
URL: goo.gl/gPC9FY - we rank high for all keywords containing the words "malvorlage" or "ausmalbild" + keyword - like "malvorlage stern"
Titles - no changes to titles recently. Most images were labeled with "Malvorlage", and as we noticed that "Ausmalbild" was becoming more popular, the new images we added used "Ausmalbild" in the title rather than "Malvorlage".
Layout - that was one of the first things I considered, given that image search in Germany is still on the old presentation (preview of the site rather than the slider). We checked this, and it seems it's still the old version that is used.
Dirk
Hi Egol,
I know exactly what you mean - we had this on our Spanish site after a less than successful (to put it mildly) site migration: our images remained in the search results, but the site behind them was no longer ours (that was also the moment we discovered all our images were spread all over the web...). Multiple complaints (and months) later, the site started gaining traffic again.
That doesn't seem to be the case here - I manually checked key images, and the site behind them is always ours. I also used "search by image" to see how many times they were copied - and although copies exist, they are mainly smaller sizes on non-competing sites (like schools).
Positions remain good to excellent. Even more surprising: for some keywords the position improved at more or less the same time as the drop in CTR.
I can't understand why the click rate decreased that dramatically.
Dirk
Hi Rob
Forgot to mention that - Analytics was the first thing I checked, and it's working fine. That's why I checked Search Console.
Dirk
Hi Antonio,
Not sure which language you prefer, but you can find some sample code here: https://developers.google.com/webmaster-tools/v3/samples - I tried the Python example, which was quite well documented inside the code; I guess it's the same for the other languages. If I have some time I could give it a try, but it won't be before the end of next week (and it would be based on Python).
Dirk
One of our sites in Germany had a very sudden drop in traffic (starting Oct. 7th). The site gets most of its organic traffic from image search. Checking in Search Console revealed that
(we double checked - searching the keyword in "anonymous mode" still showed our results for main keywords in top image positions (first 2 rows)).
As an example (see attached screen copy): a keyword had an average click rate of 1%, then dropped to 0.06% while the position remained stable.
Germany is still using the "old" version of image search (unlike the rest of the world), which gives the site preview rather than just the image slider when you click on a result. Our first thought was that this had changed - but it seems it didn't.
Any ideas what might cause this dramatic drop in click rate?
There have been no major technical modifications on the site for the last 2 months.
thanks,
Dirk