I am seeing this too: Google flips the order of our brand name and our USP in our homepage title.
"USP - Brand Name"
now becomes
"Brand Name: USP"
We do not currently have any sanitization rules in place to maintain the nocrawl param, but that is a good point. 301-redirecting will be difficult for us, but I will definitely add the nocrawl param to the rel canonical of those internal SERPs.
Thank you, Igal. I will definitely look into your first suggestion.
Thank you, Cyrus.
This is what it looks like:
User-agent: *
Disallow: /nocrawl=1
The weird thing is this: when I test one of the sample URLs (given by Google as "problematic" in the GWMT message, and containing the nocrawl param) on the GWMT "Blocked URLs" page, by entering the contents of our robots.txt together with the sample URL, Google says crawling of the URL is disallowed for Googlebot.
At the top of the same page it says "Never" under the heading "Fetched when" (translated from Swedish). But when I "Fetch as Google" our robots.txt, Googlebot has no problem fetching it. So I guess the "Never" information is due to a GWMT bug?
I also tested our robots.txt against your recommended service http://www.frobee.com/robots-txt-check. It says all robots have access to the sample URL above, but I gather the tool is not wildcard-savvy.
I will not disclose our domain in this context; please tell me if it is OK to send you a PW.
About the noindex stuff: basically, the nocrawl param is added to internal links pointing to internal search result pages filtered by more than two params. Although we allow crawling of the less complicated internal SERPs, we disallow indexing of most of them with a "meta noindex".
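For readers following along, the "meta noindex" mentioned above is the standard robots meta tag placed in the page head. This is a generic illustration, not our actual markup:

```html
<!-- Keep the page out of the index; "follow" still lets
     crawlers follow the links on the page. -->
<meta name="robots" content="noindex, follow">
```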
Igal, thank you for replying.
But robots.txt disallowing URLs by matching patterns has been supported by Googlebot for a long time now.
I'll send you a PW, Des.
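For anyone who wants to sanity-check a rule locally, here is a minimal Python sketch of Googlebot-style pattern matching as I understand it ("*" matches any run of characters, a trailing "$" anchors the end of the URL). This is my own approximation, not Google's actual implementation:

```python
import re

def robots_pattern_matches(pattern: str, url_path: str) -> bool:
    """Check a URL path+query against a robots.txt Disallow pattern,
    using Googlebot-style wildcards: '*' matches any run of
    characters, a trailing '$' anchors the end of the URL."""
    anchored = pattern.endswith("$")
    if anchored:
        pattern = pattern[:-1]
    # Escape regex metacharacters, then turn each '*' into '.*'
    regex = ".*".join(re.escape(part) for part in pattern.split("*"))
    regex = "^" + regex + ("$" if anchored else "")
    return re.match(regex, url_path) is not None

# A literal rule only matches URLs that *start* with the string:
print(robots_pattern_matches("/nocrawl=1", "/search?q=shoes&nocrawl=1"))   # False
# A wildcard rule matches the parameter anywhere in the URL:
print(robots_pattern_matches("/*nocrawl=1", "/search?q=shoes&nocrawl=1"))  # True
```

Note the difference between the literal and the wildcard form of the rule: only the wildcard version catches a parameter that appears mid-URL.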
Hi Mozzers!
We are trying to get Googlebot to steer away from our internal search results pages by adding a parameter, "nocrawl=1", to facet/filter links and then disallowing all URLs containing that parameter in robots.txt.
We implemented this in late August, and since then the GWMT message "Googlebot found an extremely high number of URLs on your site" stopped coming.
But today we received yet another one. The weird thing is that Google lists many of our now robots.txt-disallowed URLs as examples of URLs that may cause us problems.
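As a concrete illustration of the tagging step, appending the parameter to a facet/filter URL could look like this minimal Python sketch (the helper name and example.com are made up, not our production code):

```python
from urllib.parse import urlparse, parse_qsl, urlencode, urlunparse

def with_nocrawl(url: str) -> str:
    """Append nocrawl=1 to a URL's query string (illustrative helper)."""
    parts = urlparse(url)
    query = parse_qsl(parts.query, keep_blank_values=True) + [("nocrawl", "1")]
    return urlunparse(parts._replace(query=urlencode(query)))

print(with_nocrawl("https://example.com/search?q=shoes&color=red"))
# -> https://example.com/search?q=shoes&color=red&nocrawl=1
```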
What could be the reason?
Best regards,
Martin
It looks good now.
Thanks!
Hi,
I just received an email alert from SEOmoz telling me my "Weekly Keyword Ranking & On-page Optimization Report" for the period 11/06/12 - 11/13/12 is ready.
It is just a copy of the previous report, though; all rankings and ranking changes are the same.
What is up with that?
Best regards,
Martin
Thank you, Chiaryn. What I think is relevant when it comes to links on a page:
1. Total links. If the number is high, I would look into whether all those links really are necessary. Can we cut down on links that do not point to landing pages, in order to give landing pages as much link juice as possible and at the same time increase the chances that those pages get crawled? We have these kinds of problems on our e-commerce homepage at the moment: important product pages do not get crawled and/or get too little link juice due to way too many links.
2. Total followed links. Are there links that should be nofollowed in order to establish an optimal crawl path?
3. Total external links. Too few, or are we unnecessarily throwing link juice away off the site?
4. Total followed external links. A healthy number, or does it look spammy?
Best regards
Martin
Today, we have way too many links on our homepage. About 30 of them are add-to-basket links (regular HTML links) pointing to a separate application. This application 302-redirects the client back to the referring page.
I have two questions:
1. Does the current implementation of our buttons dilute PageRank? Bear in mind the 302 redirect.
2. If the answer to the first question is yes, would transforming the buttons into form buttons change anything for the better? We would still 302 back to the referring page. I know Googlebot follows GET forms and even POST forms, but does Googlebot pass on PageRank to the form URL?
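To make question 2 concrete, the idea would be to replace each plain link with something like the following. This is an illustrative sketch only; the action URL and field name are invented:

```html
<!-- Instead of: <a href="/basket/add?sku=12345">Add to basket</a> -->
<form method="post" action="/basket/add">
  <input type="hidden" name="sku" value="12345">
  <button type="submit">Add to basket</button>
</form>
```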