Now that Google has admitted to crawling JavaScript and forms, SEOs and Webmasters need to be aware of how to manage even more duplicate content issues.
In the past, a good strategy was to build out filter pages (filtering by color, size, price, etc.) using JavaScript pull-down menus. Google would typically stay away from such forms, so you did not necessarily have to worry about Google seeing the same content filtered or sorted by color, price, size and so on.
But now that Google crawls JavaScript and forms, Webmasters need to take extra steps to prevent Google from crawling and indexing such content. Why? Duplicate content.
A WebmasterWorld thread discusses this topic and offers tips on how to handle the problem. Some of the advice includes:
- Include the duplicate content in an external JavaScript file, assign it to variables, and write it into divs via innerHTML (see the first sketch after this list).
- Use XMLHttpRequest (GET) to retrieve the data in XML format and then insert it into the page (see the second sketch below).
- Use an Ajax POST instead of a GET to retrieve the same XML content.
- Use robots.txt to block specific files and/or page naming conventions (see the robots.txt example below).
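
To illustrate the first tip, here is a minimal sketch of the external-JavaScript approach. The file name filters.js, the variable filteredProducts, and the div id product-list are placeholders for illustration, not names from the thread; the point is that the variant markup lives in script variables and only reaches the page through innerHTML, so it never appears in the crawled HTML source.

```javascript
// filters.js - external file holding the filtered/sorted variants
// (hypothetical file, variable, and element names for illustration)
var filteredProducts = {
  red:  '<ul><li>Red Widget - $10</li><li>Red Gadget - $15</li></ul>',
  blue: '<ul><li>Blue Widget - $10</li><li>Blue Gadget - $15</li></ul>'
};

// Called from the page's pull-down menu; writes the chosen variant
// into an empty div so the markup is not part of the static HTML.
function showFilter(color) {
  document.getElementById('product-list').innerHTML = filteredProducts[color];
}
```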
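The two Ajax tips work the same way, except the variant markup stays on the server and is fetched on demand. The sketch below assumes a hypothetical /filter.xml endpoint that returns the listing as `<item>` elements; it is not code from the thread, just one way to do what the tip describes.

```javascript
// Fetch a filtered listing as XML and write it into the page on demand.
// The /filter.xml endpoint and <item> elements are hypothetical examples.
function loadFilter(color) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/filter.xml?color=' + encodeURIComponent(color), true);
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      var items = xhr.responseXML.getElementsByTagName('item');
      var html = '<ul>';
      for (var i = 0; i < items.length; i++) {
        html += '<li>' + items[i].firstChild.nodeValue + '</li>';
      }
      document.getElementById('product-list').innerHTML = html + '</ul>';
    }
  };
  xhr.send(null);
}
```

Switching the GET to a POST (and moving the color parameter into the request body passed to send()) gives the Ajax POST variant from the third tip.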
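And for the robots.txt tip, here is a sketch assuming the filtered views are served from a /filter/ directory or a filter.php script (again, hypothetical paths; use whatever naming convention your filter pages actually follow):

```
# Hypothetical example: keep crawlers out of the scripts and URLs
# that generate the filtered/sorted views
User-agent: *
Disallow: /filter/
Disallow: /filter.php
```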
Forum discussion at WebmasterWorld.