My robots.txt file contains the following Disallow rules:
User-agent: *
Disallow: /?location=
Disallow: /print
Disallow: /form*
Disallow: /?start=
Disallow: /?sort=
Disallow: /?make=
Disallow: /?special=
Disallow: /?service=
Disallow: /?tab=
However, the Moz crawler still picks up all of the pages that include the above parameters in their URLs.
The site in question has roughly 3,300 vehicles on it. As an example, every vehicle has an enquiry form that is dynamically generated and populated with that vehicle's details. In reality it's just one form, so we get penalised for duplicate content.
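To help sanity-check which URLs those Disallow rules actually cover, here is a minimal Python sketch of Google-style robots.txt path matching (rules match as prefixes from the start of the path, with * as a wildcard and $ as an end anchor). The rule list is copied from the file above; the sample URLs are hypothetical:

```python
import re

# Disallow rules copied from the robots.txt above.
RULES = [
    "/?location=", "/print", "/form*", "/?start=", "/?sort=",
    "/?make=", "/?special=", "/?service=", "/?tab=",
]

def rule_to_regex(rule: str) -> re.Pattern:
    """Convert one Disallow rule to a regex, Google-style:
    anchored at the start of the path, * matches any run of
    characters, and a trailing $ anchors the end."""
    pattern = re.escape(rule).replace(r"\*", ".*")
    if pattern.endswith(r"\$"):
        pattern = pattern[:-2] + "$"
    return re.compile("^" + pattern)

def is_blocked(path_and_query: str) -> bool:
    """True if any Disallow rule matches this path (query string included)."""
    return any(rule_to_regex(r).match(path_and_query) for r in RULES)

# Hypothetical URLs for illustration:
print(is_blocked("/?location=leeds"))          # True  - prefix match on /?location=
print(is_blocked("/form-enquiry"))             # True  - /form* wildcard
print(is_blocked("/vehicles?location=leeds"))  # False - rule only matches at the root
```

Note the last case: a rule like `Disallow: /?location=` only matches when the query string sits directly on the root path, so a URL such as `/vehicles?location=leeds` would not be blocked. If the crawled URLs carry the parameters on deeper paths, a pattern like `Disallow: /*?location=` may be what's needed, though it's worth confirming against the actual URLs being reported.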