Google's Head of Search, Liz Reid, wrote a blog post last night titled "AI Overviews: About last week." She basically said that, overall, the vast majority of AI Overviews are really good, and that Google did find examples where it can make improvements. But AI Overviews are here to stay and Google will continue to show them in Google Search.
As you remember, Google launched AI Overviews a couple of weeks ago in the US. Then over time, many started to see and share weird and embarrassing (sometimes harmful) examples of AI Overviews, which led to Google updating its help documentation and Google's CEO going on the defensive.
She said, "We found a content policy violation on less than one in every 7 million unique queries on which AI Overviews appeared." "We’ll keep improving when and how we show AI Overviews and strengthening our protections, including for edge cases, and we’re very grateful for the ongoing feedback," she added.
Some are calling the improvements made to these AI Overviews the Ray Update. Mike King suggested the name on X, saying, "I'm gonna name the first algorithm update of the AIO era. We're gonna call this one the "Ray Filter" or the "Ray update" named after Lily Ray." Lily was instrumental in pushing Google to work harder on these AI Overviews by sharing countless examples of where they went wrong.
Here are some bullets on what Liz Reid said; I go a bit deeper on Search Engine Land, and there is more coverage on Techmeme. I should note, this is what she said, not what I am saying:
- Searchers like the AI Overviews and are engaging with them and the publishers referenced in them more
- AI Overviews work very differently than chatbots and other LLM products
- AI Overviews are integrated into core search and only show information that is backed up by top web results
- AI Overviews generally don't “hallucinate” or make things up in the ways that other LLM products might
- When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.
- AI Overviews' accuracy rate is on par with that of featured snippets
- Google said they've "seen nonsensical new searches, seemingly aimed at producing erroneous results."
- There have been a large number of faked screenshots shared widely
- "But some odd, inaccurate or unhelpful AI Overviews certainly did show up," Google admitted
- There can be "data voids" and "information gaps" where Google might cite pages it should not, like satirical content (as in the case of "How many rocks should I eat?")
- In some cases Google said the AI Overviews misinterpret language on webpages and present inaccurate information
So now what will Google do to improve AI Overviews?
- Google won't individually fix each AI Overview that goes wrong; instead, it updates its models so the fix applies to other queries too
- Google built better detection mechanisms for nonsensical queries that shouldn't show an AI Overview, and limited the inclusion of satire and humor content.
- Google updated its systems to limit the use of user-generated content in responses that could offer misleading advice.
- Google added triggering restrictions for queries where AI Overviews were not proving to be as helpful.
- For topics like news and health, Google said it already has strong guardrails in place. For example, Google said it aims to not show AI Overviews for hard news topics, where freshness and factuality are important.
- In the case of health, Google said it launched additional triggering refinements to enhance its quality protections.
Here are some posts on X on this:
Well, Google finally responded to the AI Overview drama from last week.
The response makes sense, but some of the things mentioned surprised me a bit, like this part:
“Because accuracy is paramount in Search, AI Overviews are built to only show information that is backed up by…
— Lily Ray 😏 (@lilyraynyc) May 31, 2024
(finally) Google shares as update about AI Overviews 😃
🧵
Some thoughts in the thread as I read this post https://t.co/VXqJHhXWjD pic.twitter.com/DtWpdMTsU3
— Gagan Ghotra (@gaganghotra_) May 31, 2024
"In the case of health, Google said it launched additional triggering refinements to enhance our quality protections." https://t.co/thYAzJab2Y
— Glenn Gabe (@glenngabe) May 31, 2024
Tech news dump: Google acknowledges AI Overviews mishaps but doubles down saying most queries of concern were either fake or nonsensical
Buried in the post: It has improved detection for satirical content and is limiting the use of user-generated content https://t.co/8TtojYq6zN
— Jennifer E. (@jenn_elias) May 31, 2024
As per the patent, in certain cases, the LLM IS generating responses based on its training data. In some cases after creating response it seeks out sources to verify other times it does not.
— Rich Sanger SEO 🦙 (@richsangerSEO) May 31, 2024
2/ pic.twitter.com/cIjzkDDAp6
I'm gonna name the first algorithm update of the AIO era. We're gonna call this one the "Ray Filter" or the "Ray update" named after @lilyraynyc. @rustybrick you onboard? pic.twitter.com/7U9KMLij09
— Mike King (@iPullRank) May 31, 2024
Forum discussion at WebmasterWorld.