Robots.txt and redirected backlinks
-
Hey there,
since a client's global website has a very complex structure that led to big duplicate content problems, we decided to disallow crawler access and instead allow access to only a few relevant subdirectories. Indexing has improved since then, but I'm wondering whether we might have cut off link juice. Several backlinks point to the disallowed root directory and are redirected (301) from there to the allowed directory; could this cause any problems?
Example: a backlink points to example.com (disallowed in robots.txt) and is redirected from there to example.com/uk/en (allowed in robots.txt). Would this cut off the link juice?
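For what it's worth, the robots.txt setup described above can be sketched and sanity-checked with Python's standard-library robots.txt parser. The rules and URLs below are assumptions reconstructed from the example, not the client's actual file:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for the setup described above: block everything,
# then allow only the /uk/en/ subdirectory. Python's parser applies rules
# in order of appearance, so the Allow line is listed first.
ROBOTS_TXT = """\
User-agent: *
Allow: /uk/en/
Disallow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

print(parser.can_fetch("*", "https://example.com/"))        # False: root is blocked
print(parser.can_fetch("*", "https://example.com/uk/en/"))  # True: subdirectory allowed
```

Note that real crawlers differ in how they resolve conflicting Allow/Disallow rules (Google uses the most specific matching path rather than rule order), so it's worth testing the live file with each search engine's own robots.txt tester.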
Thanks a lot for your thoughts on this.
Regards,
Jochen
-
Hi Jochen,
It's an interesting situation and, to be honest, I don't know for sure how search engines will deal with that link juice. It comes down to whether a search engine processes the robots.txt rules or the .htaccess redirect first. If the crawler checks robots.txt first (which is my suspicion), it never fetches the blocked URL, so it can't see the redirect and pass the strength along.
I suppose to test this, you could submit the redirecting URL for indexing via Search Console and see whether it reports the redirect or says the URL is blocked.
Interesting question aside, there's no real need to block access to a 301'd page.
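For context, the redirect in question would typically look something like this in an Apache .htaccess file. This is just a hedged sketch: the exact rule depends on the site's server setup, and the /uk/en/ target is taken from the example above:

```apache
# Hypothetical rule (mod_alias): 301 the root URL to the allowed subdirectory.
RedirectMatch 301 ^/$ /uk/en/
```

If the root URL is left crawlable, search engines can follow this redirect and consolidate signals onto the target page.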

Also, apologies if I'm just highlighting the obvious here, but it would be far better to clean up the site structure and remove that duplication rather than just masking it with robots.txt; the user experience is at least as important as the algorithms!
Along the same lines, cleaning up those pages is going to help your crawl budget immensely.
-
A noindexed page can still accumulate and pass link equity, although opinions vary on whether some of that link juice "evaporates" along the way. I'm inclined to agree with Chris, though, that there's probably no need to block or noindex a page that redirects to a page you do want indexed.