Beu notes that the following paragraph has been removed from a Google Webmaster Help Center document:
“Consider creating static copies of dynamic pages. Although the Google index includes dynamic pages, they comprise a small portion of our index. If you suspect that your dynamically generated pages (such as URLs containing question marks) are causing problems for our crawler, you might create static copies of these pages. If you create static copies, don’t forget to add your dynamic pages to your robots.txt file to prevent us from treating them as duplicates.”
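For reference, the robots.txt exclusion Google described might have looked something like the sketch below. This is purely illustrative: the /*? wildcard pattern is a Googlebot extension to the original robots.txt standard, and the removed paragraph did not include an example.

    # Block crawling of dynamically generated URLs (those containing a "?")
    # so that only the static copies are crawled and indexed.
    # The "*" wildcard is a Googlebot extension, not part of the
    # original robots.txt standard.
    User-agent: Googlebot
    Disallow: /*?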
Based on this removal, Beu believes Google has "advanced" in how it spiders content. It seems Google is now able to crawl and understand the text on the multitude of dynamic pages, without needing webmasters to create static copies.
We previously discussed this topic in Google Now Crawling Content Behind Forms, where Google admitted to crawling JavaScript.
Forum discussion continues at Sphinn.