The Moz Q&A Forum

    Googlebot does not obey robots.txt disallow

    Technical SEO Issues
    • TalkInThePark

      Hi Mozzers!

      We are trying to get Googlebot to steer away from our internal search results pages by adding a parameter "nocrawl=1" to facet/filter links and then disallowing all URLs containing that parameter in robots.txt.
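
      For illustration, a minimal sketch of the setup described above (paths and markup assumed, not taken from the actual site): the facet/filter links carry the parameter, and a wildcard robots.txt rule blocks any URL containing it.

      <!-- An assumed facet link carrying the nocrawl parameter -->
      <a href="/search?category=shoes&color=red&nocrawl=1">Red shoes</a>

      # The matching robots.txt rule (wildcard form; verify with the GWMT tester)
      User-agent: *
      Disallow: /*nocrawl=1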

      We implemented this in late August, and since then the GWMT message "Googlebot found an extremely high number of URLs on your site" stopped coming.

      But today we received yet another one. The weird thing is that Google gives many of our now robots.txt-disallowed URLs as examples of URLs that may cause us problems.

      What could be the reason?

      Best regards,

      Martin

      • dmccarthy

        What's the domain?

        • TalkInThePark @dmccarthy

          I'll send you a PM, Des.

          • Igal_Zeifman

            Hi,

            I'm not sure if this is the best way to go about it.

            Robots.txt is commonly used for folder-level disallow rules; I'm not sure how it will respond to parameters.

            Having said that, there are several things you can do here:

            1. You can use WMT to zero in on this parameter and prevent it from being crawled.
                To do so, choose Configuration >> URL Parameters, answer "Yes" to the question about content change, and
                select the third option ("Only URLs with value..."). Of course, you'll need to choose "1" as the right value.

            2. If this still doesn't solve your issue, you might want to try using .htaccess + regex to prevent access by user agent; see the sketch below this list.
                You can find user-agent information here: Googlebot user agent list
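
            As a rough illustration of option 2, a minimal .htaccess sketch (Apache mod_rewrite; only the parameter name comes from this thread, the rest is assumed) that serves a 403 to Googlebot for any URL carrying nocrawl=1:

            # Assumed sketch: deny Googlebot access to URLs containing nocrawl=1
            RewriteEngine On
            RewriteCond %{HTTP_USER_AGENT} Googlebot [NC]
            RewriteCond %{QUERY_STRING} (^|&)nocrawl=1(&|$) [NC]
            RewriteRule ^ - [F]

            Keep in mind that a server-level block also stops Googlebot from seeing any meta tags on those pages, so it is a blunter tool than robots.txt or meta noindex.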

            Also, you may want to check my blog post about some lesser-known Googlebot facts (shameless self-promotion).

            Best

            Igal

            • TalkInThePark @Igal_Zeifman

              Igal, thank you for replying.

              But disallowing URLs by pattern matching in robots.txt has been supported by Googlebot for a long time now.

              • Igal_Zeifman @TalkInThePark

                Didn't say it wasn't. 🙂

                I'm just not sure how these rules apply to parameters, since they are not part of the "core" URL.

                (For example: what happens if I take a URL from your site, change nocrawl=1 to nocrawl=0, and link to it from mine?
                Do you have any URL sanitation rules in place to overcome that, or will the page be indexed by Googlebot when it crawls my site and moves on to yours?)

                Personally, when dealing with parameters, I find it easier to work with WMT, so I was offering a workaround that is easier, at least for me.

                To tell you the truth, I would use a hard-coded on-page meta noindex/nofollow here (again, since parameters can be so easily manipulated).
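
                As a minimal sketch of that tag (markup assumed), emitted on every internal search results page regardless of which parameters the URL carries:

                <!-- Hard-coded on every internal search results page -->
                <meta name="robots" content="noindex, nofollow">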

                • Cyrus-Shepard

                  It can be tricky blocking parameters with robots.txt. The first thing you want to do is make sure you are actually blocking the URLs. There are a few good robots.txt checkers out there that can help:

                  • http://tool.motoricerca.info/robots-checker.phtml
                  • http://www.frobee.com/robots-txt-check

                  Your file is probably going to look something like:

                  User-agent: *
                  Disallow: /*?nocrawl=1

                  ... but this could vary depending on exactly what you don't want crawled.

                  +1 to Igal's suggestion of handling these via parameter settings in Google Webmaster Tools: http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1235687

                  Finally, if your goal is to keep search results out of the index (it probably should be), then you should also strongly consider using a meta robots NOINDEX tag on all search results pages. You can also slap a nofollow on links pointing to search results, as this might also help Google steer clear of those pages.
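
                  The NOINDEX tag is sketched earlier in this thread; the nofollow suggestion would look something like this (link target assumed for illustration):

                  <!-- An internal link into site search, hinting that crawlers should not follow it -->
                  <a href="/search?color=red&size=m&nocrawl=1" rel="nofollow">Red items, size M</a>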

                  Best of luck!

                  Edit: Here's what John Wu of Google's webmaster team has to say...

                  "We show this warning when we find a high number of URLs on a site -- even before we attempt to crawl them. If you are blocking them with a robots.txt file, that's generally fine. If you really do have a high number of URLs on your site, you can generally ignore this message. If your site is otherwise small and we find a high number of URLs, then this kind of message can help you to fix any issues (or disallow access) before we start to access your server to check gazillions of URLs :-)."

                  • Igal_Zeifman @Cyrus-Shepard

                    Thanks.

                    100% agree with the Meta Noindex suggestion.

                    • TalkInThePark @Cyrus-Shepard

                      Thank you, Cyrus.

                      This is what it looks like:

                      User-agent: *
                      Disallow: /nocrawl=1

                      The weird thing is this: when I test one of the sample URLs (given by Google as "problematic" in the GWMT message, and containing the nocrawl param) on the GWMT "Blocked URLs" page, by entering the contents of our robots.txt together with the sample URL, Google says crawling of the URL is disallowed for Googlebot.

                      At the top of the same page, it says "Never" under the heading "Fetched when" (translated from Swedish). But when I "Fetch as Google" our robots.txt, Googlebot has no problem fetching it. So I guess the "Never" information is due to a GWMT bug?

                      I also tested our robots.txt against your recommended service http://www.frobee.com/robots-txt-check. It says all robots have access to the sample URL above, but I gather the tool is not wildcard-savvy.

                      I will not disclose our domain in this context; please tell me if it is OK to send you a PM.

                      About the noindex stuff: basically, the nocrawl param is added to internal links pointing to internal search result pages filtered by more than two params. Although we allow crawling of less complicated internal SERPs, we disallow indexing of most of them via meta noindex.
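
                      One thing that may be worth checking (an editorial observation, not from the thread): robots.txt Disallow rules are prefix matches, so "Disallow: /nocrawl=1" only blocks URLs whose path literally begins with /nocrawl=1. If the parameter can appear anywhere in the query string, a wildcard form along the lines of Cyrus's example is safer:

                      User-agent: *
                      Disallow: /*nocrawl=1

                      Either way, the GWMT "Blocked URLs" tester is the authoritative check, since third-party tools may not support wildcards.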

                      • TalkInThePark @Igal_Zeifman

                        Thank you, Igal. I will definitely look into your first suggestion.

                        • TalkInThePark @TalkInThePark

                          We do not currently have any sanitation rules in place to maintain the nocrawl param, but that is a good point. Doing 301 redirects will be difficult for us, but I will definitely add the nocrawl param to the rel canonical of those internal SERPs.
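
                          For illustration, a sketch of such a canonical tag (URL assumed), so that a manipulated variant like nocrawl=0 resolves back to the intended form:

                          <!-- On an internal SERP filtered by more than two params -->
                          <link rel="canonical" href="/search?color=red&size=m&nocrawl=1">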

                          • Cyrus-Shepard @TalkInThePark

                            Sorry for the late reply. Feel free to send me a PM. (Not sure I can help, but more than happy to take a look.)
