Yesterday the Automated Content Access Protocol group released its ACAP Technical Framework (Extension of robots.txt format, PDF).
Danny does an excellent job explaining the implications for search engines in his post, ACAP Launches, Robots.txt 2.0 For Blocking Search Engines?
In short, he says, let's wait and see if the major search engines adopt the new protocol, which calls for more "emphasis on granting permissions and blocking" as well as additional support for "time-based inclusion or exclusion."
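To give a rough sense of the difference, here is a sketch of how an ACAP-flavored robots.txt might compare with today's syntax. The ACAP directive names and the time-based qualifier below are illustrative assumptions based on the framework's description, not quotes from the spec:

    # Standard robots.txt today: blocking only
    User-agent: *
    Disallow: /archive/

    # ACAP-style sketch (hypothetical directive names and qualifier)
    ACAP-crawler: *
    ACAP-allow-crawl: /news/
    ACAP-disallow-crawl: /archive/ time-limit=30-days

The point is the shift in emphasis: instead of only telling crawlers what to keep out of, publishers could spell out what is permitted and for how long.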
Will webmasters adopt the new protocol? Why should they? If the search engines adopt it, then webmasters and SEOs will have a reason to adopt it as well. Until then? Why bother?
Forum discussion at WebmasterWorld.