Best practice to prevent pages from being indexed?
-
Generally speaking, is it better to use robots.txt or a noindex meta tag to keep duplicate pages out of the index?
-
Isn't the main question why you have duplicate pages in the first place? If they aren't essential, the easiest option is simply to remove them. But in terms of which method is best, here is a great article from Moz: http://moz.com/learn/seo/robotstxt
I would read that and decide, based on your website and situation, which option suits you best.
In my opinion I would suggest a noindex meta tag over robots.txt for this: robots.txt only stops crawlers from fetching a page, so a blocked URL can still end up in the index if other sites link to it, whereas noindex explicitly tells search engines to drop the page. For true duplicates, a rel=canonical link pointing at the preferred version is often better still, since it consolidates ranking signals instead of discarding the page.
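To make the difference concrete, here is a minimal sketch of each approach (the paths and URLs below are placeholders, not from your site):

    # robots.txt - blocks crawling, but the URL can still be indexed if other sites link to it
    User-agent: *
    Disallow: /duplicate-page/

    <!-- noindex meta tag - the page must remain crawlable so engines can see it -->
    <meta name="robots" content="noindex">

    <!-- rel=canonical - consolidates duplicate pages onto the preferred URL -->
    <link rel="canonical" href="https://example.com/preferred-page/">

One mistake to avoid: don't combine a robots.txt Disallow with a noindex tag on the same page, because if crawlers are blocked from fetching the page they will never see the noindex instruction.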