There are archive tools out there. I have not used them myself, but take a look at http://www.screenshots.com/, https://archive.is, and http://www.competitorscreenshots.com/. You might find what you are looking for, but don't assume every page will have been crawled, for various reasons.
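If you want to check programmatically whether the Wayback Machine captured a given URL, it exposes an availability endpoint at https://archive.org/wayback/available. Here is a minimal sketch in TypeScript (Node 18+ for the built-in `fetch`); the response field names shown are my assumption from that endpoint's documented shape, so verify against the live API:

```typescript
// Query the Wayback Machine availability API for the closest snapshot
// of a URL. Returns the snapshot URL, or null if nothing was captured.
async function latestSnapshot(target: string): Promise<string | null> {
  const resp = await fetch(
    `https://archive.org/wayback/available?url=${encodeURIComponent(target)}`
  );
  const data = await resp.json();
  // Assumed shape: { archived_snapshots: { closest: { available, url, timestamp } } }
  const closest = data?.archived_snapshots?.closest;
  return closest?.available ? closest.url : null;
}

latestSnapshot("example.com").then((url) =>
  console.log(url ?? "No snapshot found")
);
```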
From Wayback:
If you look at our collection of archived sites, you will find some broken pages, missing graphics, and some sites that aren't archived at all. Here are some things that make it difficult to archive a web site:
- Robots.txt -- We respect robot exclusion headers. (See the example robots.txt after this list.)
- JavaScript -- JavaScript elements are often hard to archive, especially if they generate links without the full URL appearing in the page. And if the JavaScript needs to contact the originating server in order to work, it will fail when archived. (A sketch of this link pattern follows the list.)
- Server-side image maps -- Like any functionality on the web, if it needs to contact the originating server in order to work, it will fail when archived.
- Unknown sites -- The archive contains crawls of the Web completed by Alexa Internet. If Alexa doesn't know about your site, it won't be archived. Use the Alexa Toolbar (available at www.alexa.com) so Alexa learns about your page, or visit Alexa's Archive Your Site page at http://pages.alexa.com/help/webmasters/index.html#crawl_site.
- Orphan pages -- If there are no links to your pages, the robots won't find them (the robots don't enter queries in search boxes).
As a general rule of thumb, simple HTML is the easiest to archive.
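To make the Robots.txt point concrete: Wayback's crawls came from Alexa Internet, whose crawler identified itself as ia_archiver, so a site owner could keep a whole site out of the archive with a directive like this hypothetical example:

```
# Block the Alexa/Wayback crawler from the entire site.
User-agent: ia_archiver
Disallow: /
```

A site excluded this way simply never appears in the archive, even though the rest of the web can see it.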
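And to illustrate the JavaScript problem: the trouble is links whose full URL never appears as a literal string in the page source. A minimal browser-side sketch (hypothetical names and paths):

```typescript
// The href is assembled only at runtime, so a crawler reading the raw
// HTML never sees the string "/products/item-42.html" and never queues
// that page for archiving.
const section = "products"; // hypothetical values
const itemId = 42;
const link = document.createElement("a");
link.href = `/${section}/item-${itemId}.html`;
link.textContent = "Item 42";
document.body.appendChild(link);
```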

