I am seeing this too: Google flips the order of our brand name and our USP in our homepage title.
"USP - Brand Name"
now becomes
"Brand Name: USP"
We do not currently have any sanitation rules in place to maintain the nocrawl param, but that is a good point. 301-redirecting will be difficult for us, but I will definitely add the nocrawl param to the rel canonical of those internal SERPs.
Thank you, Igol. I will definitely look into your first suggestion.
Thank you, Cyrus.
This is what it looks like:
User-agent: *
Disallow: /nocrawl=1
The weird thing is that when I test one of the sample URLs (given by Google as "problematic" in the GWMT message, and containing the nocrawl param) on the GWMT "Blocked URLs" page, by entering the contents of our robots.txt together with the sample URL, Google says crawling of that URL is disallowed for Googlebot.
At the top of the same page, it says "Never" under the heading "Fetched when" (translated from Swedish). But when I "Fetch as Google" our robots.txt, Googlebot has no problem fetching it. So I guess the "Never" information is due to a GWMT bug?
I also tested our robots.txt against the service you recommended, http://www.frobee.com/robots-txt-check. It says all robots have access to the sample URL above, but I gather the tool is not wildcard-savvy.
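For what it is worth, another way I sanity-check it locally is Python's built-in robots.txt parser. As far as I know it follows the original robots.txt standard and does not implement Googlebot's wildcard extensions, so it can disagree with GWMT; the sample URL below is made up, not our real domain:

from urllib import robotparser

# Our current robots.txt, pasted in as a string.
robots_txt = """User-agent: *
Disallow: /nocrawl=1
"""

rp = robotparser.RobotFileParser()
rp.parse(robots_txt.splitlines())

# Hypothetical internal SERP URL carrying the nocrawl param.
sample_url = "https://www.example.com/search?color=red&size=42&nocrawl=1"
# True means the parser would allow Googlebot to fetch it; False means disallowed.
print(rp.can_fetch("Googlebot", sample_url))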
I will not disclose our domain in this context; please tell me if it is OK to send you a PW.
About the noindex stuff: basically, the nocrawl param is added to internal links pointing to internal search result pages filtered by more than two params. Although we allow crawling of the less complicated internal SERPs, we disallow indexing of most of them with a meta noindex.
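In case it is useful, this is roughly how I spot-check that a filtered SERP actually carries the noindex (Python with requests and BeautifulSoup; the URL below is just a placeholder, not our real site):

import requests
from bs4 import BeautifulSoup

# Placeholder URL for a filtered internal SERP.
url = "https://www.example.com/search?color=red&size=42&brand=acme"
soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

# Look for a <meta name="robots" ...> tag regardless of attribute case.
tag = soup.find("meta", attrs={"name": lambda v: v and v.lower() == "robots"})
print(tag.get("content") if tag else "no robots meta tag found")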
Igal, thank you for replying.
But Googlebot has supported pattern matching in robots.txt Disallow rules for a long time now.
I'll send you a PW, Des.
Hi Mozzers!
We are trying to get Googlebot to steer away from our internal search results pages by adding a parameter "nocrawl=1" to facet/filter links and then disallowing, via robots.txt, all URLs containing that parameter.
We implemented this in late August, and since then the GWMT message "Googlebot found an extremely high number of URLs on your site" stopped coming.
But today we received yet another one. The weird thing is that Google lists many of our now robots.txt-disallowed URLs as examples of URLs that may cause us problems.
What could be the reason?
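For reference, this is how I understand Google's documented Disallow matching: the rule is matched as a prefix against the path plus query string, with "*" as a wildcard and an optional trailing "$" as an end anchor. The rule strings and URL below are only illustrative (real Googlebot also handles percent-encoding and rule precedence), but they show how a plain prefix rule and a wildcarded one can behave differently:

import re
from urllib.parse import urlparse

def rule_matches(rule, url):
    # Prefix match against path + query, '*' as wildcard, optional '$' end anchor.
    parsed = urlparse(url)
    target = parsed.path + ("?" + parsed.query if parsed.query else "")
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.match(pattern, target) is not None

# Made-up example URL with the nocrawl param in the query string.
url = "https://www.example.com/search?color=red&nocrawl=1"
print(rule_matches("/nocrawl=1", url))   # plain prefix rule
print(rule_matches("/*nocrawl=1", url))  # wildcarded rule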
Best regards,
Martin
It looks good now.
Thanks!
Hi,
I just received an email alert from SEOmoz telling me my "Weekly Keyword Ranking & On-page Optimization Report" for the period 11/06/12 - 11/13/12 is ready.
It is just a copy of the previous report, though; all rankings and ranking changes are the same.
What is up with that?
Best regards,
Martin
Thank you, Chiaryn. What I think is relevant when it comes to links on a page:
1. Total links. If the number is high, I would look into whether all those links really are necessary. Can we cut down on links that do not point to landing pages, in order to give landing pages as much link juice as possible and at the same time increase the chances that those pages get crawled? We have this kind of problem on our e-commerce homepage at the moment: important product pages do not get crawled and/or get too little link juice due to far too many links.
2. Total followed links. Are there links that should be nofollowed in order to establish an optimal crawl path?
3. Total external links. Too few, or are we unnecessarily throwing link juice away off the site?
4. Total followed external links. A healthy number, or does it look spammy?
Best regards
Martin
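P.S. A rough sketch of how one might pull those four counts for a page (Python with requests and BeautifulSoup; the homepage URL is a placeholder):

import requests
from urllib.parse import urljoin, urlparse
from bs4 import BeautifulSoup

page_url = "https://www.example.com/"          # placeholder homepage URL
site_host = urlparse(page_url).netloc

soup = BeautifulSoup(requests.get(page_url, timeout=10).text, "html.parser")

total = followed = external = external_followed = 0
for a in soup.find_all("a", href=True):
    href = urljoin(page_url, a["href"])
    rels = [r.lower() for r in (a.get("rel") or [])]
    is_followed = "nofollow" not in rels
    is_external = urlparse(href).netloc not in ("", site_host)
    total += 1
    followed += is_followed
    external += is_external
    external_followed += is_external and is_followed

print("total:", total, "followed:", followed,
      "external:", external, "followed external:", external_followed)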
When I run the On-Page Analysis on our homepage, the report says the page has 238 **"Internal followed links"**.
Why are nofollowed internal links not counted as well? Nofollowed links have been leaking link juice for quite some time now.
Martin
We do use nofollow on some internal links, not for PR sculpting reasons but for bot crawling reasons. There is no point in telling Googlebot to crawl our buy-button links.
Thanks for pointing me to the video, Elias.
If I understand Matt correctly, when a page has several links pointing to the same destination URL, links 2 through n will dilute PR.
Yes. But do the other two links to our contact page dilute the link juice that gets passed on to other links on the page?
Would you not think those two duplicate links would be counted as "nofollow" links by Google and thereby dilute link juice?
We have way too many links on our homepage. The PageRank Link Juice Calculator (www.ecreativeim.com/pagerank-link-juice-calculator.php) puts the count at 300.
But not all of them are unique; that is, some links point to the same URL.
So my question: does the "100 links/page recommendation" refer to all anchors on the page or only to unique link target URLs?
I know "100" is just a standard recommendation.