Believe it or not, I am not a huge fan of placing robots.txt files on sites unless you specifically want to block content or sections from Google or other search engines. It has always felt redundant to tell a search engine it can crawl your site when it will do so anyway unless you tell it not to.
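For example, the one case where the file actually does something is blocking, which looks like this (the /private/ path here is just an illustrative placeholder):

User-agent: *
Disallow: /private/

That tells all compliant crawlers to stay out of the /private/ section while leaving the rest of the site crawlable.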
Google's JohnMu (John Mueller) confirmed as much in a Google Webmaster Help thread, even recommending that one webmaster remove their robots.txt file "completely."
John said:
I would recommend going even a bit further, and perhaps removing the robots.txt file completely. The general idea behind blocking some of those pages from crawling is to prevent them from being indexed. However, that's not really necessary -- websites can still be crawled, indexed and ranked fine with pages like their terms of service or shipping information indexed (sometimes that's even useful to the user :-)).
I know many SEOs feel it is mandatory to have a robots.txt file, even if all it says is User-agent: * Allow: /. Why bother, when the crawlers will eat up your content anyway?
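For reference, that fully permissive file is just two lines:

User-agent: *
Allow: /

Worth noting: Allow is a widely supported extension rather than part of the original robots.txt convention, where the equivalent form is an empty Disallow: line. Either way, the file grants crawlers nothing they would not have by default.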
Anyway, it is nice to see a Googler confirming this, at least in this case.
Forum discussion at Google Webmaster Help.