There shouldn't be a robots.txt file in the /blog section anyway; it should always be in the root. It was just something to have a look at.
I've had a look just now and I don't see any problems either.
There's nothing in your robots.txt file and nothing in a meta robots tag in the header.
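If you want to double-check the meta robots side yourself rather than eyeballing view-source, a quick sketch like this will do it. The page strings here are made-up examples, not your actual markup:

```python
from html.parser import HTMLParser

class MetaRobotsFinder(HTMLParser):
    """Collect the content of any <meta name="robots"> tags in a page."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", ""))

# A page with no robots directives at all (like yours)
page = "<html><head><title>Post</title></head><body>Hi</body></html>"
finder = MetaRobotsFinder()
finder.feed(page)
print(finder.directives)  # → [] means nothing blocking indexing

# What a blocked page would look like, for comparison
blocked = "<html><head><meta name='robots' content='noindex,nofollow'></head></html>"
finder2 = MetaRobotsFinder()
finder2.feed(blocked)
print(finder2.directives)  # → ['noindex,nofollow']
```

An empty list for every template on the site is what you want to see here.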
There are 42 pages in the site: command results and a similar number in your sitemap.xml, so I presume that's right. There are 6 pages in site:/blog, which again looks right.
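For anyone wanting to repeat that sitemap-vs-index comparison, counting the URLs in a sitemap only takes a few lines. The sitemap content below is a cut-down dummy, not the real file, which would list all 42 URLs:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal sitemap.xml for illustration
SITEMAP = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://example.com/</loc></url>
  <url><loc>http://example.com/blog/</loc></url>
  <url><loc>http://example.com/blog/first-post</loc></url>
</urlset>"""

# The sitemaps.org namespace has to be given explicitly to findall()
NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}
root = ET.fromstring(SITEMAP)
urls = [loc.text for loc in root.findall("sm:url/sm:loc", NS)]
print(len(urls))  # → 3 for this sample; compare against your site: count
```

If the count here roughly matches what the site: command reports, the sitemap and the index are in agreement, which is the case for your site.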
I've tried using SEOmoz's tools on your site, though, and it just tells me that your site doesn't resolve. Edit: I managed to get it to resolve on the third try for a crawl, but the on-page report card checker is still giving me problems.
You're definitely returning a 200 status when I check with any other tool, though, so I'd get in touch with SEOmoz directly and see what's wrong with their tool: help@seomoz.org
Just to confirm: you're not doing anything tricky server-side to prevent scraping, are you?



I'm not convinced they're doing anything dodgy though, just lucky.