Language can seem almost infinitely complex, with inside jokes and idioms sometimes having meaning for just a small group of people and appearing meaningless to the rest of us. Thanks to generative AI, even the meaningless found meaning this week as the internet blew up like a brook trout over the ability of Google search’s AI Overviews to define phrases never before uttered.

What, you’ve never heard the phrase “blew up like a brook trout”? Sure, I just made it up, but Google’s AI Overviews result told me it’s a “colloquial way of saying something exploded or became a sensation quickly,” likely referring to the eye-catching colors and markings of the fish. No, it doesn’t make sense.

The trend may have started on Threads, where the author and screenwriter Meaghan Wilson Anastasios shared what happened when she searched “peanut butter platform heels.” Google returned a result referencing a (not real) scientific experiment in which peanut butter was used to demonstrate the creation of diamonds under high pressure. 

It moved to other social media sites, like Bluesky, where people shared Google’s interpretations of phrases like “you can’t lick a badger twice.” The game: Search for a novel, nonsensical phrase with “meaning” at the end.

Things rolled on from there.

This meme is interesting for more reasons than comic relief. It shows how large language models might strain to provide an answer that sounds correct, not one that is correct.

“They are designed to generate fluent, plausible-sounding responses, even when the input is completely nonsensical,” said Yafang Li, assistant professor at the Fogelman College of Business and Economics at the University of Memphis. “They are not trained to verify the truth. They are trained to complete the sentence.”

Like glue on pizza

The fake meanings of made-up sayings bring back memories of the all-too-true stories about Google’s AI Overviews giving incredibly wrong answers to basic questions — like when it suggested putting glue on pizza to help the cheese stick.

This trend seems at least a bit more harmless because it doesn’t center on actionable advice. I mean, I for one hope nobody tries to lick a badger once, much less twice. The problem behind it, however, is the same — a large language model, like Google’s Gemini behind AI Overviews, tries to answer your questions and offer a plausible response. Even if what it gives you is nonsense.

A Google spokesperson said AI Overviews are designed to display information supported by top web results, and that they have an accuracy rate comparable to other search features. 

“When people do nonsensical or ‘false premise’ searches, our systems will try to find the most relevant results based on the limited web content available,” the Google spokesperson said. “This is true of search overall, and in some cases, AI Overviews will also trigger in an effort to provide helpful context.”

This particular case is a “data void,” where there isn’t a lot of relevant information available for the search query. The spokesperson said Google is working on limiting when AI Overviews appear on searches without enough information and preventing them from providing misleading, satirical or unhelpful content. Google uses information about queries like these to better understand when AI Overviews should and should not appear. 

You won’t always get a made-up definition if you ask for the meaning of a fake phrase. When drafting the heading of this section, I searched “like glue on pizza meaning,” and it didn’t trigger an AI Overview. 

The problem doesn’t appear to be universal across LLMs. I asked ChatGPT for the meaning of “you can’t lick a badger twice” and it told me the phrase “isn’t a standard idiom, but it definitely sounds like the kind of quirky, rustic proverb someone might use.” It did, though, try to offer a definition anyway, essentially: “If you do something reckless or provoke danger once, you might not survive to do it again.”

Pulling meaning out of nowhere

This phenomenon is an entertaining example of LLMs’ tendency to make stuff up — what the AI world calls “hallucinating.” When a gen AI model hallucinates, it produces information that sounds like it could be plausible or accurate but isn’t rooted in reality.

LLMs are “not fact generators,” Li said; they just predict the next logical bits of language based on their training.

A majority of AI researchers in a recent survey said they doubt AI’s accuracy and trustworthiness issues will be solved anytime soon.

The fake definitions show not just the inaccuracy but the confident inaccuracy of LLMs. When you ask a person for the meaning of a phrase like “you can’t get a turkey from a Cybertruck,” you probably expect them to say they haven’t heard of it and that it doesn’t make sense. LLMs often react with the same confidence as if you’re asking for the definition of a real idiom. 

In this case, Google says the phrase means Tesla’s Cybertruck “is not designed or capable of delivering Thanksgiving turkeys or other similar items” and highlights “its distinct, futuristic design that is not conducive to carrying bulky goods.” Burn.

This humorous trend does have an ominous lesson: Don’t trust everything you see from a chatbot. It might be making stuff up out of thin air, and it won’t necessarily indicate it’s uncertain. 

“This is a perfect moment for educators and researchers to use these scenarios to teach people how the meaning is generated and how AI works and why it matters,” Li said. “Users should always stay skeptical and verify claims.”

Be careful what you search for

Since you can’t trust an LLM to be skeptical on your behalf, you need to encourage it to take what you say with a grain of salt. 

“When users enter a prompt, the model just assumes it’s valid and then proceeds to generate the most likely accurate answer for that,” Li said.

The solution is to introduce skepticism into your prompt. Don’t ask for the meaning of an unfamiliar phrase or idiom. Ask if it’s real. Li suggested asking, “Is this a real idiom?”

“That may help the model to recognize the phrase instead of just guessing,” she said.
