Google defends bizarre answers from ‘AI Overviews’ feature


Google has defended its ‘AI Overviews’ feature, after users reported receiving bizarre responses.

The tool is intended to sit alongside traditional Google results, using artificial intelligence to answer queries. The system takes data from across the internet and uses it to craft responses, with Google claiming it will make search easier for people.

But in recent days, Google users have reported that the system has encouraged them to eat rocks and make pizza with glue, and has repeated a false conspiracy theory that Barack Obama is Muslim.

Some of those responses appeared to have been taken from online results. The suggestion that glue would make pizza toppings more chewy, for instance, appears to have come from a joke posted on Reddit.

Now Google has said those examples came from rare queries and claimed that the feature is working well overall.

“The examples we’ve seen are generally very uncommon queries, and aren’t representative of most people’s experiences,” a spokesperson said. “The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web.

“We conducted extensive testing before launching this new experience to ensure AI overviews meet our high bar for quality. Where there have been violations of our policies, we’ve taken action – and we’re also using these isolated examples as we continue to refine our systems overall.”

The company said that it had added guardrails to the system to stop harmful content appearing, that it had subjected the system to an evaluation and testing process, and that AI Overviews were built to comply with its existing policies.

According to Google, it has also worked recently to improve the system's ability to give factual answers to queries.

The problems appear to have arisen in part from the data used to inform the responses, which may include jokes or other content that becomes misleading when re-used in an answer. But part of the issue may also be the tendency of large language models, such as those used by Google, to "hallucinate".

Because those large language models are trained on patterns in language rather than on verified facts, they have a tendency to give answers that are worded convincingly but contain falsehoods. Some experts have suggested that such problems are inherent in these systems.
