Glue pizza and eat rocks: Google AI search errors go viral

Google logo and "Making AI helpful for everyone" sign seen at an event in Poland on 16 May 2024
[Getty Images]

Google's new artificial intelligence (AI) search feature is facing criticism for providing erratic, inaccurate answers.

Its experimental "AI Overviews" tool has told some users searching for how to make cheese stick to pizza better that they could use "non-toxic glue".

The search engine's AI-generated responses have also said geologists recommend humans eat one rock per day.

A Google spokesperson told the BBC they were "isolated examples".

Some of the answers appeared to be based on Reddit comments or articles written by the satirical site The Onion.

They have been widely mocked on social media.

But Google insisted the feature was generally working well.

"The examples we've seen are generally very uncommon queries, and aren’t representative of most people’s experiences," it said in a statement.

"The vast majority of AI overviews provide high quality information, with links to dig deeper on the web."

It said it had taken action where "policy violations" were identified and was using them to refine its systems.

It is not the first time the company has run into problems with its AI-powered products.

In February, it was forced to pause its Gemini chatbot's image generation of people, after the tool was criticised for producing "woke" responses.

Gemini's forerunner, Bard, also got off to a disastrous start.

Google began trialling AI Overviews in search results for a small number of logged-in UK users in April, but launched the feature to all US users at its annual developer showcase in mid-May.

It works by using AI to provide a summary of search results, so users do not have to scroll through a long list of websites to find the information they are seeking.

It is billed as a product that "can take the legwork out of searching", though users are warned it is experimental.

However, it is likely to be widely used - and trusted - because Google remains the go-to search engine for many.

According to web traffic tracker Statcounter, Google's search engine accounts for more than 90% of the global market.

It is still fundamental to the way in which Google makes its money, and a service the firm needs to both protect and future-proof.

Many industry experts agree that more focused AI-driven search is the way forward - despite the power-hungry tech's environmental price tag.

Why wade through pages of search engine results and adverts to find information if a chatbot can give you a single, definitive answer?

But this only works if you can trust it.

So-called hallucinations by generative AI tools are not just a problem for Google, but as the operator of the world's largest search engine it attracts more scrutiny.

In one baffling example, a reporter Googling whether they could use gasoline to cook spaghetti faster was told "no... but you can use gasoline to make a spicy spaghetti dish" and given a recipe.

We don't know how many searches it got right (correct answers are less funny to share on social media), but AI search clearly needs to be able to handle anything thrown at it, including the more left-field queries.

Rival firms are facing a similar backlash over their attempts to cram more AI tools into their consumer-facing products.

The UK's data watchdog is looking into Microsoft after it announced a feature coming to its new range of AI-focused PCs that would take continuous screenshots of users' online activity.

And ChatGPT-maker OpenAI was called out by Hollywood actress Scarlett Johansson over a voice likened to her own; she said she had turned down its request to voice the popular chatbot.