Hey Garrett!
While the robots.txt file loads fine in a browser, our crawler receives a timeout when trying to access it. I tried to curl it with our user-agent and hit the same issue.
https://www.screencast.com/t/PDXVg4yf5Y
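If you'd like to reproduce the check yourself, a curl request like the one below is roughly what we ran. The domain is a placeholder (swap in your site), and the full rogerbot user-agent string may include extra detail beyond the token shown here:

```shell
# Request robots.txt while identifying as our crawler.
# "rogerbot" is the user-agent token; your server logs should show it.
# --max-time caps the wait so a hang surfaces as a timeout error.
curl -A "rogerbot" --max-time 30 -I "https://www.example.com/robots.txt"
```

If the request times out here but succeeds without the `-A` flag, that points to user-agent based filtering on the server side.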
You may want to reach out to your hosting admin to see whether they have changed a setting or added any server-level bot security. You may also ask them to whitelist our user-agent, "rogerbot". If you need further assistance with this, you can always shoot us an email at help@moz.com.