A WebmasterWorld thread has a savvy webmaster/SEO noticing that GoogleBot is no longer crawling and indexing the pre-rendered web page; instead, it is running all the JavaScript and then crawling/indexing the final rendered page.
The webmaster wrote:
I don't have a specific date, but Google has changed its crawling method. I think Google is now ignoring the "static" version (before the javascript runs) and just evaluates the "final" (rendered) version.

For articles, I embed resized images that link directly to full-size versions. When the DOM is ready, javascript replaces those links with links to optimized gallery pages. Around July-August, full version images started dropping from Google Images results.
Funny thing is, some of them are now appearing under a different website that iframes my pages with sandbox (javascript never runs, so Google crawls full-size image links).
I've switched from replacing links to attaching window.open events and Google started to crawl & include full-size versions.
Have you noticed this as well? Is this new? If so, when did it start happening?
And if this is new, it is important to take note of it for your own sites and pages.
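To make the difference concrete, here is a minimal sketch of the two approaches the webmaster describes. The markup, the gallery-link class name and the buildGalleryUrl helper are my own illustrative assumptions, not the poster's actual code, and in practice you would use only one of the two script blocks.

    <!-- Pre-rendered HTML: the thumbnail links directly to the full-size image -->
    <a class="gallery-link" href="/images/photo-full.jpg">
      <img src="/images/photo-small.jpg" alt="Photo">
    </a>

    <script>
    // Hypothetical helper: maps a full-size image URL to a gallery page URL.
    function buildGalleryUrl(imageUrl) {
      return '/gallery?image=' + encodeURIComponent(imageUrl);
    }

    // Approach 1 (what the poster used to do): rewrite the href once the DOM is ready.
    // Because Google now indexes the rendered page, the full-size URL disappears from
    // what Googlebot sees, and the images drop out of Google Images results.
    document.addEventListener('DOMContentLoaded', function () {
      document.querySelectorAll('a.gallery-link').forEach(function (link) {
        link.href = buildGalleryUrl(link.href);
      });
    });

    // Approach 2 (what the poster switched to): leave the href alone and attach a
    // click handler that opens the gallery instead. The rendered DOM still contains
    // the full-size link, so Googlebot keeps crawling and indexing it.
    document.addEventListener('DOMContentLoaded', function () {
      document.querySelectorAll('a.gallery-link').forEach(function (link) {
        link.addEventListener('click', function (event) {
          event.preventDefault();
          window.open(buildGalleryUrl(link.getAttribute('href')));
        });
      });
    });
    </script>

The sandboxed iframe the poster mentions fits the same picture: with a bare sandbox attribute (no allow-scripts), the replacement script never runs, so the page Googlebot renders there still carries the full-size image links.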
Forum discussion at WebmasterWorld.
Update: Google's Pierre Far commented on my Google+ post saying "this is what we talked about back in May." Adding:
And please, everyone, if you want us to index your websites the best possible way, please allow crawling of all CSS and JavaScript. I know historically it was common to recommend blocking crawling CSS and JS files, but keeping these blocks today is actively harming your website's indexing. It's the easiest SEO you can do right now!
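On the robots.txt side, following that advice simply means dropping the old Disallow rules that keep Googlebot away from script and style files; the directory names below are only examples.

    # Old pattern (now actively harmful): hiding scripts and styles from crawlers
    # User-agent: *
    # Disallow: /js/
    # Disallow: /css/

    # Instead, let Googlebot fetch everything it needs to render the page
    User-agent: *
    Allow: /js/
    Allow: /css/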