Last year, we looked at the impacts, if any, of negative SEO on websites, a discussion that arose from a Forbes article on the saboteurs of search. One year later, we're still discussing the consequences and asking whether negative SEO is still possible. In a WebmasterWorld thread, we learn that there are still ways to sabotage websites and rankings, from hijacking a competitor's DNS to doing it to yourself by breaking your URL structure. Now, though, new technologies bring new concerns about sabotage, from cloaked sabotage to reputation sabotage, along with parasite hosting and embedded hidden links.
As one webmaster says, you need to be the one who protects your site, because Google isn't necessarily reliable in that regard. (After all, Google is tracking billions of pages.) However, not many people know how to protect their own websites.
Tedster recommends checking for a variety of these problems yourself by viewing your site with the user agent changed, cookies disabled, meta refreshes turned off, and so on.
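To make that concrete, here is a minimal sketch of one such spot-check: fetching the same page as a regular browser and as Googlebot, with no stored cookies, and flagging large differences that could indicate cloaked content. The URL and user-agent strings are illustrative placeholders, not anything from the thread itself.

```python
# Minimal sketch: fetch the same URL as a regular browser and as Googlebot,
# with cookies disabled, and flag large differences that may indicate cloaking.
# The URL and user-agent strings below are illustrative placeholders.
import difflib
import requests

URL = "https://www.example.com/"  # replace with a page on your own site

BROWSER_UA = "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def fetch(user_agent: str) -> str:
    # A fresh Session with no stored cookies approximates a cookie-less crawler.
    with requests.Session() as session:
        response = session.get(URL, headers={"User-Agent": user_agent}, timeout=10)
        response.raise_for_status()
        return response.text

browser_html = fetch(BROWSER_UA)
googlebot_html = fetch(GOOGLEBOT_UA)

# A similarity ratio well below 1.0 means the two user agents saw different content.
ratio = difflib.SequenceMatcher(None, browser_html, googlebot_html).ratio()
print(f"Similarity between browser and Googlebot responses: {ratio:.2f}")
if ratio < 0.9:
    print("Responses differ noticeably - inspect the page for cloaked content.")
```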
But how does it happen? As Receptional Andy says:
A lot of SEO sabotage attempts involve trying to trick Google into thinking that a site should be penalised and does not meet guidelines. That can be by directly modifying the site (through legitimate mechanisms to do so, or by finding vulnerabilities and exploiting them) and by modifying or setting up external references to a site.
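One way that direct modification shows up is hidden links injected into your own pages after a compromise. As a rough illustration of how you might look for them, here is a small sketch that scans a saved copy of a page for anchor tags styled to be invisible; the inline-style heuristics are assumptions for the example, not an exhaustive check.

```python
# Minimal sketch: scan a saved copy of one of your pages for anchor tags that are
# styled to be invisible - one common sign of links injected through a compromise.
# The style markers below are illustrative heuristics, not a complete list.
from html.parser import HTMLParser

SUSPICIOUS_STYLES = ("display:none", "display: none", "visibility:hidden", "visibility: hidden")

class HiddenLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hidden_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        attr_map = dict(attrs)
        style = (attr_map.get("style") or "").lower()
        if any(marker in style for marker in SUSPICIOUS_STYLES):
            self.hidden_links.append(attr_map.get("href", "(no href)"))

with open("page.html", encoding="utf-8") as fh:  # a local copy of your page
    finder = HiddenLinkFinder()
    finder.feed(fh.read())

for href in finder.hidden_links:
    print("Possible hidden link:", href)
```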
You may think it won't happen to you, but don't be overly confident. Take a read and follow up with the discussion on WebmasterWorld.