Hi there,
You've got the right idea, but let me suggest another tactic.
It's true that search functions can generate thousands of URLs that all tend to look like one another. Google suggests keeping search result pages out of the index, as these pages offer very little unique value and create tons of duplicate content.
http://www.seomoz.org/learn-seo/duplicate-content
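For example, a single set of results can often be reached at several addresses (hypothetical URLs, just for illustration):

yourdomain.com/search?q=blue+widgets
yourdomain.com/search?q=blue+widgets&page=1
yourdomain.com/search?q=blue+widgets&sort=price

To a search engine, those are three nearly identical pages competing with one another.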
Here's one way to handle your situation:
1. Put a meta "noindex,follow" tag in the <head> section of your search pages, like this:
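<meta name="robots" content="noindex,follow">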
This tells search engines not to index the page, but still allows them to follow its links and let link juice flow.
2. Hopefully you have a good site architecture with other paths for search engines to discover your content. Once the engines have re-crawled your search pages and honored the noindex tag, you can put a directive in your robots.txt file to block that directory from being crawled. (Don't block it right away: if robots.txt keeps crawlers out of a page, they can never see its noindex tag.)
Something like:
User-agent: *
Disallow: /search/
This blocks anything under the /search/ directory. Disallow rules are simple prefix matches, so every URL that begins with /search/ is covered.
3. Find out if search engines have already indexed a lot of your search pages by performing a site: search in Google, like so:
site:yourdomain.com/search
If you find pages in Google's index that shouldn't be there, you can use the URL removal tool in Google Webmaster Tools to take them out. You can remove the entire search directory with a single request.
http://support.google.com/webmasters/bin/answer.py?hl=en&answer=1663427
This is a powerful and sometimes dangerous tool, so be careful!
4. Finally, if you'd also like to add "nofollow" to your search results pages, that should be fine, but only after you've completed the steps above.
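In tag form, that just means changing the content attribute of the robots meta tag from step one:

<meta name="robots" content="noindex,nofollow">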
Keep in mind, this is only one possible solution. If you have significant link juice flowing through your search results, this strategy may not be the best. But in general, you want to keep search results out of Google's index, so I'm comfortable recommending this strategy for 90% of all cases.
Hope this helps! Best of luck with your SEO.