Last week, Google came under fire, mostly in the EU, for allowing its ads, including AdSense, YouTube ads and other display ads, to show up on sites with "hateful, offensive and derogatory content." In fact, many advertisers pulled their ads from Google, only to return shortly after.
Here is why those advertisers returned: Google announced today new controls that let advertisers define where they do and do not want their ads to show.
Google wrote:
Today, we’re taking a tougher stance on hateful, offensive and derogatory content. This includes removing ads more effectively from content that is attacking or harassing people based on their race, religion, gender or similar categories. This change will enable us to take action, where appropriate, on a larger set of ads and sites.
Here are the three new tools Google has put in place, or will soon roll out, to help with this:
- Safer default for brands. Google is changing the default settings for ads so that they show on content that meets a higher level of brand safety, excluding potentially objectionable content that advertisers may prefer not to advertise against. Brands can opt in to advertise on broader types of content if they choose.
- Simplified management of exclusions. Google will introduce new account-level controls to make it easier for advertisers to exclude specific sites and channels from all of their AdWords for Video and Google Display Network campaigns, and to manage brand safety settings across all their campaigns with a push of a button. (A rough sketch of how placement exclusions can already be scripted is shown after this list.)
- More fine-tuned controls. In addition, Google will introduce new controls to make it easier for brands to exclude higher-risk content and fine-tune where they want their ads to appear.
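The announcement describes these new account-level settings as features of the AdWords interface; as a point of reference, placement exclusions can already be applied campaign by campaign through the existing AdWords API. The snippet below is a minimal sketch using the googleads Python client with the standard NegativeCampaignCriterion and Placement objects, not the new controls announced today; the campaign ID and URL are hypothetical placeholders.

```python
# Minimal sketch: exclude a single site (placement) from one campaign
# using the AdWords API via the googleads Python client.
# CAMPAIGN_ID and the URL below are hypothetical placeholders.
from googleads import adwords

CAMPAIGN_ID = '1234567890'  # hypothetical campaign ID

# Loads credentials from the standard googleads.yaml file.
client = adwords.AdWordsClient.LoadFromStorage()
service = client.GetService('CampaignCriterionService', version='v201702')

operations = [{
    'operator': 'ADD',
    'operand': {
        'xsi_type': 'NegativeCampaignCriterion',
        'campaignId': CAMPAIGN_ID,
        'criterion': {
            'xsi_type': 'Placement',
            'url': 'http://site-to-exclude.example.com',
        },
    },
}]

result = service.mutate(operations)
for criterion in result['value']:
    print('Excluded placement criterion %s from campaign %s'
          % (criterion['criterion']['id'], criterion['campaignId']))
```

This only covers one campaign at a time; the point of the new account-level controls is to avoid repeating this kind of exclusion across every campaign.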