Google says 'faked screenshots' made its AI search tool's problems seem way worse than they are

Some of the viral screenshots were real responses that Google says it's working on fixing. Others, not so much

Photo: Google logo during the Impact’24 congress in Poland on May 16. Jakub Porzycki/NurPhoto (Getty Images)

Google doesn’t want users to eat glue and rocks, but it does want people to double-check some of the more far-fetched content they receive from AI-enhanced searches.

Google rolled out AI Overviews, a generative artificial intelligence feature for its search engine, on May 14 during its annual I/O developer conference. The tool gives users an AI-generated summary of information from the internet in response to search queries. Just a week later, users began sharing screenshots of some of the new tool’s weird and sometimes dangerous responses.

Many of these went viral, including responses telling someone to add glue to pizza and to eat one rock a day, answers pulled from Reddit forums and satirical news sites. Another viral screenshot appeared to show AI Overviews suggesting that a depressed user jump off the Golden Gate Bridge, a response Google says never actually appeared.

Liz Reid, Google’s vice president and head of Search, wrote a blog post May 30 meant to set the record straight. While some of the screenshots that spread across social media were real, she wrote, “there have been a large number of faked screenshots shared widely.”

“Some of these faked results have been obvious and silly. Others have implied that we returned dangerous results for topics like leaving dogs in cars, smoking while pregnant, and depression,” Reid wrote. “Those AI Overviews never appeared. So we’d encourage anyone encountering these screenshots to do a search themselves to check.”

The ones that were real highlighted areas where Google needs to smooth out some kinks in the new technology, Reid acknowledged. AI Overviews spat out strange responses to questions that rarely get asked, such as, “How many rocks should I eat?” These cases fall into what’s known as a data void or information gap, where little high-quality content exists online, so the tool pulled the web content that best answered the question, which in this case came from a satire site, she said.

But Google is taking steps to address these issues, Reid said, including building better detection mechanisms for nonsensical queries, limiting the inclusion of satire and humor content, restricting potentially misleading user-generated advice (such as Reddit threads), and enhancing its already-strong guardrails on news and health topics.

“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” Reid wrote. “We’ve learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone.”