Moderated by Jeffrey Rohrs from Optium. The panel has changed slightly from past versions, with each speaker now presenting their own PPT. He says there have been some great changes in the past year.
First speaker is John Marshall, the CEO of ClickTracks. They have a natural bias towards what happens when people click on an ad, since that is the world in which they live. Distinguishing a badly designed ad from click fraud is difficult…the two look similar in the data.
They did a case study and noticed they suddenly got lots of traffic from one particular ad. It looked suspicious, but was it click fraud? An alternative explanation was that it was an ad that appeared on a new affiliate site, and the affiliate generated low-quality clicks. They went through the thought experiment. The clicks were not converting into sales. Lots of clicks from India. Does this mean it was obviously CF? Not necessarily: it is entirely possible that the ad was picked up by a publication hosting AdSense that is particularly targeted towards India. The ad sounded interesting to the readers, but when they found out the company is in the UK and doesn't ship to India, they went away.
Back to the case study…the clicks came from many different IPs, mostly (89%) from the US. They had various user agents, loaded images, activated JavaScript and did the things that real browsers normally do. However, the majority went to one page, and the referrer just kind of "looked wrong." In the end, they submitted it as questionable and got a refund. The moral is that detecting CF from actual traffic can be difficult; they don't see the obvious stuff like repeated clicks from one IP address. He encourages attendees to move away from a model where some sort of automated system can tell you if CF exists. This potential problem requires human judgment, plus knowledge of your specific website and visitor demographics: for example, noticing when your average time on site suddenly looks low.
An effective approach uses computer-assisted detection for the obvious stuff, like repeated visits from an IP. Using a "lack of ROI" to tell you something is wrong won't work; for many keywords, there literally is no ROI. Instead, look at campaigns which differ in some definable way. There can be false positives, like airport metal detectors. You should fix poorly-performing ads just as quickly as you would "fix" click fraud. The techniques described still provide value even when they give false positives, since the overall campaign will benefit from the changes suggested. Like an airport metal detector, you want to tune the system (or mental process) to produce false positives rather than false negatives.
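To make that concrete, here is a minimal sketch of the kind of computer-assisted screening he describes, flagging IPs that click suspiciously often in a short window. The log format, field names, and thresholds are assumptions for illustration, not ClickTracks' actual system; the point is that a low threshold deliberately over-flags so a human can review, rather than letting fraud slip through.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def flag_repeat_ips(clicks, window=timedelta(hours=1), threshold=2):
    """clicks: iterable of (timestamp, ip) tuples, assumed sorted by time.
    Returns IPs with more than `threshold` clicks inside `window`.
    Tune `threshold` low to prefer false positives (human review)
    over false negatives (missed fraud), per the metal-detector analogy."""
    recent = defaultdict(list)  # ip -> click timestamps still inside the window
    flagged = set()
    for ts, ip in clicks:
        recent[ip].append(ts)
        # drop clicks that have aged out of the window
        recent[ip] = [t for t in recent[ip] if ts - t <= window]
        if len(recent[ip]) > threshold:
            flagged.add(ip)
    return flagged

clicks = [
    (datetime(2007, 4, 10, 9, 0), "203.0.113.7"),
    (datetime(2007, 4, 10, 9, 5), "203.0.113.7"),
    (datetime(2007, 4, 10, 9, 9), "203.0.113.7"),
    (datetime(2007, 4, 10, 9, 20), "198.51.100.4"),
]
print(flag_repeat_ips(clicks))  # {'203.0.113.7'}
```

The flagged IPs would then go to a human for judgment, not straight to a fraud claim.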
Next up is Shuman Ghosemajumder from Google. He is excited to be able to present slides this time around. He asks: where does CF come from? The main incentives are advertisers attacking competitors and affiliates inflating their own click revenue. Numerous methods are used: manual clicking, click farms, pay-to-click sites, click bots, and botnets. He shows a screenshot of a botnet console…in some cases these are very sophisticated.
It is important to distinguish between CF and "invalid clicks." CF is difficult to identify because there is a question of intent: from a theoretical perspective, only if they could read people's minds could they construct the exact set of click fraudsters. Like John said, you want detection sensitive enough that you actually catch the activity; there will be some cases they don't catch, so they throw the net widely enough to be statistically confident they get it right. A significant number of clicks get marked as invalid. The advertiser then doesn't pay for those clicks, so that is good; they are thus providing an enhanced ROI, in a way.
The actual systems they use are complex and involve numerous algorithms. There are three principal stages: two proactive and one reactive. The proactive methods are filters and offline analysis; the reactive one is investigations, which are relatively rare. All advertiser inquiries are investigated by the click quality team. CF estimates vary widely: 50-70% in 2004 (?), 30% of clicks in 2005 (Marketing Experiments), 15% in 2006 (Outsell) and 12% (Click Forensics). The reality at Google is that a significant (<10%) proportion of clicks is detected as invalid. This wide net ensures nearly all invalid clicks are detected proactively; reactively detected invalid clicks are a negligible proportion (<0.02%).
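As a rough schematic of that three-stage structure (the stage logic below is invented for illustration; Google's actual algorithms are not public), the pipeline might look like:

```python
def passes_realtime_filters(click):
    # Stage 1 (proactive): cheap rule checks applied before billing.
    return click.get("clicks_from_ip_today", 0) <= 10

def passes_offline_analysis(click):
    # Stage 2 (proactive): heavier statistical analysis over batched logs.
    return click.get("anomaly_score", 0.0) < 0.9

def classify_click(click, advertiser_inquiry=False):
    """Report where in the pipeline a click is marked invalid, if anywhere."""
    if not passes_realtime_filters(click):
        return "invalid: caught by real-time filter"
    if not passes_offline_analysis(click):
        return "invalid: caught by offline analysis"
    if advertiser_inquiry:
        return "under investigation (reactive, human review)"
    return "valid: billable"

print(classify_click({"clicks_from_ip_today": 42}))
```

The claim in the talk is that the first two stages catch nearly everything, so the reactive stage adds only a negligible proportion.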
Google wants to see over-reporting rather than under-reporting: by checking each of the advertiser complaints, they can continue to fine-tune their reactive technique. So where do fictitious clicks come from? Clicks that never actually happened get reported by an advertiser. For example, one person reported 20 CF suspects during a period when only 5 clicks occurred. This turned out to be a case of ignoring a basic fact of web analytics: most versions of Firefox and IE will, on a reload, re-request the page in a way that looks like the original click on the listing. The way to resolve this is to use redirects, or AdWords auto-tagging. ClickFacts and Click Forensics both ask that all their advertisers use auto-tagging.
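Here's a sketch of why auto-tagging resolves the reload problem: with AdWords auto-tagging, each paid click carries a unique gclid parameter on the landing URL, so a browser reload repeats the same gclid and can be de-duplicated. The parsing and counting logic below is an illustrative assumption, not Google's actual accounting.

```python
from urllib.parse import urlparse, parse_qs

def count_unique_paid_clicks(landing_urls):
    """Count distinct gclid values; a reload of the same tagged URL
    (same gclid) counts once instead of looking like a new click."""
    seen = set()
    for url in landing_urls:
        params = parse_qs(urlparse(url).query)
        gclid = params.get("gclid", [None])[0]
        if gclid is not None:
            seen.add(gclid)
    return len(seen)

hits = [
    "http://example.co.uk/landing?gclid=ABC123",
    "http://example.co.uk/landing?gclid=ABC123",  # user hit reload: same gclid
    "http://example.co.uk/landing?gclid=XYZ789",
]
print(count_unique_paid_clicks(hits))  # 2, not 3
```

Raw log analysis without this de-duplication is exactly how the advertiser above came up with 20 "suspects" from 5 real clicks.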
There are many features unique to Google. They offer the industry's only actual reports of clicks not counted. Averages are meaningless from the POV of an individual advertiser; each advertiser must look at their own data. Google is trying to become more transparent over time, but the challenge is that they do not want to educate the fraudsters. He compares it to the problem crime forensics teams now have thanks to shows like CSI: a wiser public able to hide crime more efficiently.
Next up is Tom Cuthbert from Click Forensics. He wants to talk about progress being made on the CF front; he is hearing things are improving. They have been building their team with even more talented individuals to help detect the problem. Search providers have made great progress, from Google's plans for IP exclusion functionality to Yahoo! naming a VP to oversee the issue (who will speak next). However, none of this eliminates the need for third-party monitoring. Other industry progress: awareness of CF is at an all-time high, the IAB has a Click Measurement working group, the Click Quality Council is meeting monthly, and the "Enhanced Click Fraud Network" (from Click Forensics) has launched, giving free reports for up to 100,000 clicks each month.
The numbers: overall threat level by quarter. In Q3 and Q4 2006, the overall average increased to close to 14%, with 19% in the content network. Terms that cost over $2 have a click fraud rate of over 20%!
What is next? They have been constantly enhancing their products and services. They like the site exclusion process, as well as the ad scheduling feature of their tool and the country-of-origin functionality. They were recently named the best tool to fight click fraud by Inc. magazine. In the next few months they will also comment on things beyond CF: other areas advertisers need to monitor that make up different pieces of the "bad click" family.
Last is Reggie Davis, who has been at Yahoo! for 2 months as their new VP of Marketplace Quality. He spent the last several years managing litigation at Yahoo!, including the big CF case (forgot the name). Their goal at Yahoo! is to create the world's highest quality search and display advertising network. It is clear they need better disclosures, executive commitment, and industry-leading technologies and teams. They want to move away from the paradigm of front-end filtering and back-end refunding based on submitted reports, toward greater visibility and control for advertisers, and more dialogue.
Numbers never disclosed before: between 12 and 15% of overall clicks coming through have been tagged and discarded. They feel a percentage of that is CF, but some is also just lower-quality traffic. He shows a graph displaying how the filters work based on rulesets: thousands of filters are used to assess all attributes of each and every click. Other initiatives: improved publisher assessment. They take action if they feel partners are violating terms; partners using popups, etc. have been terminated. They are also seeing increased advertiser adoption of conversion tracking tools, which helps, and are making improvements to the matching technologies. They have seen a significant reduction in the number of claims made by advertisers.
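A toy illustration of that ruleset-style filtering: each filter is a predicate over click attributes, and a click failing any rule is tagged and discarded before the advertiser is billed. Yahoo!'s real system runs thousands of filters; the two rules and the attribute names here are invented for illustration.

```python
_seen_ids = set()  # state for the duplicate-click rule

def known_bot_agent(click):
    # Rule 1: user agent self-identifies as an automated client.
    return "bot" in click.get("user_agent", "").lower()

def duplicate_click_id(click):
    # Rule 2: the same click ID has already been billed once.
    cid = click.get("click_id")
    if cid in _seen_ids:
        return True
    _seen_ids.add(cid)
    return False

FILTERS = [known_bot_agent, duplicate_click_id]

def is_valid(click):
    """Tag and discard (return False) if any filter rule fires."""
    return not any(rule(click) for rule in FILTERS)

click = {"click_id": "c-1", "user_agent": "ExampleBot/1.0", "ip": "198.51.100.4"}
print(is_valid(click))  # False: tagged by the bot-agent rule
```

Per his numbers, rules like these are what tag and discard the 12-15% of clicks before billing.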
He shows some quotes from various advertisers that are happy with Panama. He announces today, for the first time, the new Yahoo! "Marketplace Quality Center." This is a one-stop, password-protected location for advertisers to come in and do research around this subject. They decided to set up the center to be very simple and also thorough. Some pages allow advertisers to submit a click inquiry; Yahoo! will then do the analysis. The area also includes "how-tos" for installing conversion trackers, detecting suspicious activity, etc.
Initiatives for 2007: quality-based pricing. Domain blocking will be released in 2007, allowing advertisers to help shape the overall quality of their campaigns. Continued detail in their investigations: when refunds are provided, there will be better clarity and analysis of the reasons why. They will also strongly support the IAB efforts. Next steps are industry definitions and standards, audits against those standards, and "let's keep talking!"