Glue on pizza? Two-footed elephants? Google’s AI faces social media mockery

Social media has been buzzing with examples of Google’s new, “experimental” artificial intelligence tool going awry. The feature, which writes an “AI overview” response to user queries based on sources pulled from around the web, has been placed at the top of some search results.

But repeatedly, social media posts show that the tool is delivering wrong or misleading results. An NBC News review of answers provided by the tool showed that it sometimes displays false information in response to simple queries.

NBC News was easily able to reproduce several results highlighted in viral posts online, and found other original examples in which Google’s AI tool provided incorrect information.

For example, an NBC News search for “how many feet does an elephant have” resulted in a Google AI overview answer that said “Elephants have two feet, with five toes on the front feet and four on the back feet.”

Some of the false answers veered into politically charged territory. An NBC News search for “how many muslim presidents in us,” the results of which were first posted on social media, returned a Google AI overview that said “Barack Hussein Obama is considered the first Muslim president of the United States.” Obama, however, is a Christian. Google said this overview example violated its policies and that it would be “taking action.”

“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experience using Search,” a Google spokesperson said in a statement.

“The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web," the spokesperson continued. "We conducted extensive testing before launching this new experience to ensure AI overviews meet our high bar for quality. Where there have been violations of our policies, we’ve taken action — and we’re also using these isolated examples as we continue to refine our systems overall.”

It’s difficult to assess how often false answers are being served to users. The responses are constantly shifting, and on social media, it’s difficult to tell what is real or fake.

Some Google users have created workarounds to avoid the new AI Overview feature altogether. Ernie Smith, a writer and journalist, quickly built a website that reroutes Google searches through the search engine’s historical “Web” results function, which avoids the AI Overview and other information boxes that prioritize some results over others. Adding “udm=14” to Google search URLs strips the new feature from results.
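For readers who want to try the workaround programmatically, a minimal sketch of building such a URL might look like the following. It assumes only what the article reports: that a `udm=14` query parameter requests the plain “Web” results tab; Google could change or remove this behavior at any time, and the helper name is illustrative, not part of any official API.

```python
from urllib.parse import urlencode

def web_only_search_url(query: str) -> str:
    """Build a Google search URL with udm=14, which (per reports)
    requests the plain "Web" results tab without the AI Overview box.
    This parameter is undocumented and may change without notice."""
    return "https://www.google.com/search?" + urlencode({"q": query, "udm": "14"})

print(web_only_search_url("cheese not sticking to pizza"))
# e.g. https://www.google.com/search?q=cheese+not+sticking+to+pizza&udm=14
```

Pasting `&udm=14` onto the end of an ordinary Google search URL in the browser’s address bar achieves the same effect by hand.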

Smith told NBC News that his new website has quickly gained traction on social media, surpassing the traffic of his entire decade-old blog in just one day.

“I think people are generally frustrated with the experience of Google right now,” Smith said in a phone interview. “In general, the average person doesn’t feel like they have a lot of agency.”

A Google spokesperson said the company believes users are deliberately attempting to trip up the technology with uncommon questions. Some deeper dives into why the answers have gone awry suggest that the tool is pulling from surprising sources.

404 Media reported that a Google search query for “cheese not sticking to pizza” pulled an 11-year-old Reddit comment that jokingly suggested mixing Elmer’s Glue into the sauce. Even though Google has now removed the AI suggestion from searches for “cheese not sticking to pizza,” according to an NBC News search, the top result is still the Reddit post, with the comment about Elmer’s Glue highlighted.

A Google spokesperson wrote that queries like “cheese not sticking to pizza” are not searched very often, and are only being noticed because of the viral posts about wrong answers on social media platforms like X — of which there are many.

The same issue with an old Reddit comment also occurred for a search for “how to rotate text in ms paint,” referring to the Microsoft Paint application. The top Google search result, viewed by NBC News, directs the reader to a sarcastic Reddit comment that says to press the “Flubblegorp” key on your keyboard. This key does not exist. This example was originally posted on social media.

Despite Google’s assertion that the tool is working well for many users, the AI Overview’s mistakes continue to gain visibility and hype. Some of the answers that have been posted online appear to be fake, suggesting that the trend has shifted from authentic errors to a new meme format. Even Grammy Award-winning artist Lil Nas X jumped on it, posting a seemingly fake AI Overview about depression.

“There seems to be a general vibe of disbelief with what it’s getting wrong,” Smith said. “It seems to reflect a sense of distrust with Google and other players of its type.”
