Google AI Overviews controversy: why there's a big backlash

(Image: a Google AI logo on a phone, held in front of a Google logo. Credit: Shutterstock)

In May 2024, Google rolled out AI Overviews, a feature that uses generative AI to show a brief summary of the results at the top of your search page. The goal is to reduce the number of clicks it takes to get the answers you need, making for a more satisfying search experience.

However, reports of grossly inaccurate and sometimes harmful answers began circulating not long after launch, sparking outrage on social media platforms like LinkedIn, X, and Facebook.

For example, when asked how many Muslim presidents the U.S. has had, Google’s AI Overviews claimed that Barack Obama was the United States’ only Muslim president. But that isn’t the only example of things going horribly wrong. When asked how to prevent cheese from sliding off a pizza, Google’s AI said, “You can add ⅛ cup of non-toxic glue to the sauce to give it more tackiness.”

AI Overviews also falsely claimed that researchers from UC Berkeley recommend eating at least one small rock a day because rocks are a vital source of minerals.

Many examples of AI failure

Google’s AI has also claimed that you can infuse spaghetti with gasoline for added flavor and that adding more oil to a cooking oil fire can help put it out.

In another example, the search giant's AI suggested that parachutes are no better than backpacks at preventing death when jumping out of an aircraft.

Understandably, experts soon began weighing in, worried about the potential spread of misinformation among unsuspecting users.

How did Google AI Overviews go so wrong?

“We tend to think of information as a set of objective facts that just exist in the world,” writes Dr. Emily M. Bender, a professor of linguistics at the University of Washington. “But in fact, information and the information ecosystem are inherently relational.”

“When Google, Microsoft, and OpenAI try to insert so-called ‘AI’ systems (driven by LLMs) between information seekers and information providers, they are interrupting the ability to make and maintain those relationships,” she adds.

Bender says that where information comes from is just as important as the information itself. If you’re presented with an AI-generated response to your question with no way to trace the information back to its source, you can’t tell whether the “medical facts” you just learned come from a trusted source like the Mayo Clinic or from Dr. Oz.

Why did AI Overviews spread misinformation?

Generative AI has no way of knowing what’s true; it only knows what’s popular. So it often ends up surfacing answers from untrustworthy sources or parody accounts instead of actual facts. In some cases, AI is also prone to “hallucination,” where it simply makes up false information to cover a gap in its knowledge.
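
To see why "popular" can beat "true," here is a deliberately toy sketch in Python. This is not how Google's actual systems work; the corpus, the most_popular_answer function, and the answers below are all made up for illustration. The point is that a system that answers by echoing the most frequent continuation in its training text will happily repeat a widely reshared joke.

```python
from collections import Counter

# Toy "training data": imagine the web, where a parody answer was
# reposted far more often than any accurate one. (Entirely made up.)
corpus = [
    "to keep cheese on pizza, use more sauce",
    "to keep cheese on pizza, add non-toxic glue",  # parody, widely reshared
    "to keep cheese on pizza, add non-toxic glue",
    "to keep cheese on pizza, add non-toxic glue",
    "to keep cheese on pizza, let the pizza cool slightly",
]

def most_popular_answer(prompt: str) -> str:
    """Return the most frequent continuation of `prompt` in the corpus.

    A real large language model is vastly more sophisticated, but the
    core issue is the same: it models what text is likely, not what is true.
    """
    continuations = [
        text[len(prompt):].strip()
        for text in corpus
        if text.startswith(prompt)
    ]
    return Counter(continuations).most_common(1)[0][0]

print(most_popular_answer("to keep cheese on pizza,"))
# Prints "add non-toxic glue": the most repeated answer wins, true or not.
```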

Google has always been a platform where users can find all sorts of information and disinformation. But without the ability to weigh an answer against the reputation of its source, there’s no way to know whether the answers you get are accurate.

The company said in a statement: "The vast majority of AI Overviews provide high-quality information, with links to dig deeper on the web."

"Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce".

If you are concerned about the potential for misleading results, you can follow our guide to blocking AI Overviews in your Google results.
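
One widely shared workaround at the time of writing is to add the udm=14 parameter to a Google search URL, which loads the plain "Web" results tab without an AI Overview. This is an unofficial parameter, so treat it as something Google could change or remove at any point. For example: https://www.google.com/search?q=how+to+keep+cheese+on+pizza&udm=14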

Ritoban Mukherjee

Ritoban Mukherjee is a freelance journalist from West Bengal, India whose work on cloud storage, web hosting, and a range of other topics has been published on Tom's Guide, TechRadar, Creative Bloq, IT Pro, Gizmodo, Medium, and Mental Floss.