A HighRankings Forum thread asks why some people use more than a single robots.txt file to control and instruct search spiders on how to crawl and access their content. That is a good question. Typically, spiders only obey the robots.txt file found at the root level of a host. So technically, if you place a robots.txt file in a subdirectory, the search engines will likely ignore it. The same does not apply to subdomains, since each subdomain has its own root level.
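To illustrate the root-only lookup, here is a minimal sketch using Python's standard urllib.robotparser module. The example.com hostnames are placeholders; the point is simply that a crawler requests /robots.txt from the root of each host, so a file sitting in a subdirectory is never consulted, while each subdomain gets its own root-level file.

```python
from urllib.robotparser import RobotFileParser

# Crawlers request robots.txt from the root of each host.
# (www.example.com and blog.example.com are placeholder hosts.)
www_rules = RobotFileParser()
www_rules.set_url("https://www.example.com/robots.txt")
www_rules.read()

# A subdomain has its own root, so it is governed by its own robots.txt.
blog_rules = RobotFileParser()
blog_rules.set_url("https://blog.example.com/robots.txt")
blog_rules.read()

# Rules are checked against the file fetched from the host's root;
# a file at https://www.example.com/somedir/robots.txt is simply
# never requested, so it has no effect on crawling.
print(www_rules.can_fetch("*", "https://www.example.com/somedir/page.html"))
print(blog_rules.can_fetch("*", "https://blog.example.com/page.html"))
```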
HighRankings administrator Randy said:
robots.txt anywhere but the Root level will be ignored by the spiders. In fact it would surprise me if it's ever even queried. robots.txt is not like .htaccess where you can control things on a per directory level. The only way a subdirectory robots.txt might be valid is the rare case where someone has a domain name parked on a subdirectory of another domain. Or possibly if the subdirectory is really a subdomain, though that one too is questionable in my mind and isn't something I've tested to see if spiders look for a robots.txt for each subdomain.
I love what Ron Carnell added:
FWIW, I almost always back up a file before modifying it. My ex-wife always said I had trust issues? At any rate, I probably have a few copies of robots.txt laying around on more than a few sites. I don't worry about it because, as you pointed out, the only one that counts is in the root.
I believe Google often uses individual robots.txt files per subdomain to control how that content is crawled.
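For illustration only, a hypothetical per-subdomain setup might look like the following, with each host serving its own file from its own root (the hostnames and disallowed paths are made up):

```
# Served at https://www.example.com/robots.txt
User-agent: *
Disallow: /search

# Served at https://blog.example.com/robots.txt
User-agent: *
Disallow: /drafts/
```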
Forum discussion at HighRankings Forum.