A Google Webmaster Help thread discusses potential duplicate content issues between HTML and PDF documents. In this case, the content on the HTML pages is the same as in the PDFs, whether the PDFs are generated by an automated "print as PDF" feature or offered as manual downloads of the content in PDF format.
How does Google handle the duplicate nature of such content available on the web?
JohnMu at Google chimed in saying that in most cases, they will show the HTML version. He recommends blocking the PDFs from being crawled and indexed only if the duplication actually causes a problem for your site. But ultimately, he said, that is your call. Google will likely just keep the HTML version in their index.
John said:
If you have the same content in PDF as in HTML pages, in most cases we'll probably show the HTML versions above (or in place of) the PDF versions. If this is a problem for your specific situation, I'd consider using the robots.txt or x-robots-tag to prevent the PDF files from getting indexed. I imagine for most sites this is not really a problem, so I wouldn't suggest blocking indexing of PDF files without confirming that it's really necessary.

The only situation where I would consider doing something in advance is when the CMS automatically creates PDF-copies of normal HTML pages. Generally speaking, this shouldn't cause any problems, but those PDF versions are likely not compelling enough to merit getting indexed separately (and crawling them will possibly put a load on your server that you could avoid). Ultimately, it's up to you to determine which content you wish to have crawled and indexed :-) -- if you feel that PDF-copies of your content are compelling enough for users who search for your content, feel free to make them available.
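For reference, the two options John mentions work differently: robots.txt prevents crawling, while an X-Robots-Tag noindex header prevents indexing. A rough sketch of each, assuming an Apache server with mod_headers enabled and PDFs served as static files:

# robots.txt - stop compliant crawlers from fetching PDF URLs at all
User-agent: *
Disallow: /*.pdf$

# Apache .htaccess - allow crawling but keep PDFs out of the index
<FilesMatch "\.pdf$">
  Header set X-Robots-Tag "noindex"
</FilesMatch>

The robots.txt block saves the crawl load John refers to, but a blocked URL can still show up in results without a snippet; the X-Robots-Tag header requires the file to be crawled but keeps it out of the index entirely.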
Forum discussion at Google Webmaster Help.