Questions
-
Prevent Rogerbot from crawling pagination
Robots.txt rules

If you have an architecture like site.com/blog/post/page/1, then use:

    User-agent: rogerbot
    Disallow: /*/page/

(Robots.txt rules are matched from the start of the URL path, so a plain "Disallow: /page/" would only block URLs that begin with /page/; the leading "*" is needed to catch /blog/post/page/.)

If you have an architecture like site.com/blog/post?p=1, then use:

    User-agent: rogerbot
    Disallow: /*?p=

If you have an architecture like site.com/blog/post?page=1, then use:

    User-agent: rogerbot
    Disallow: /*?page=

That should pretty much stop Rogerbot from crawling paginated content. It would certainly stop Googlebot, but I don't know for sure whether Rogerbot respects the "*" wildcard the way Googlebot does. Give it a try and see what happens.

Don't worry: in a robots.txt file only "*" is respected as a wildcard, so the "?" won't cause any problems and there's no need for an escape character.
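To see which URLs a given Disallow pattern would actually block, here is a minimal sketch of Google-style robots.txt path matching (start-anchored matching per RFC 9309, plus the "*" wildcard and "$" end anchor that Googlebot supports). The function name and the sample URLs are illustrative, taken from the architectures in the question; this is not Rogerbot's actual matcher.

```python
import re

def robots_rule_matches(pattern: str, path: str) -> bool:
    """Return True if a robots.txt path pattern matches the given URL path.

    Matching is anchored at the start of the path; '*' matches any run of
    characters, and a trailing '$' anchors the end of the URL.
    """
    anchored_end = pattern.endswith("$")
    if anchored_end:
        pattern = pattern[:-1]
    # Escape everything except '*', which becomes the regex '.*'
    regex = "".join(".*" if ch == "*" else re.escape(ch) for ch in pattern)
    if anchored_end:
        regex += "$"
    return re.match(regex, path) is not None

# Hypothetical paginated URLs from the question:
print(robots_rule_matches("/*/page/", "/blog/post/page/1"))  # True
print(robots_rule_matches("/*?p=", "/blog/post?p=1"))        # True
# A bare '/page/' rule is prefix-anchored, so it does NOT match here:
print(robots_rule_matches("/page/", "/blog/post/page/1"))    # False
```

Note that Python's built-in urllib.robotparser follows the older 1996 draft and does not understand "*" wildcards, so it cannot be used to verify the query-string rules above.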