Ex-Google CEO Eric Schmidt has an easy solution to the terrifying idea of AI with free will

Former Google CEO Eric Schmidt invests in a number of AI and science startups. Eugene Gologursky/Getty
  • At VivaTech in Paris, Eric Schmidt shared some unsettling predictions about the dangers of AI.

  • The former Google CEO said if computers developed free will, "we're going to unplug them."

  • He added that the threat of cyber- and biological attacks was three to five years away.

Eric Schmidt made some unsettling predictions Wednesday about AI while speaking at the annual VivaTech conference in Paris.

Since leaving Google, the former CEO has invested in a number of artificial-intelligence startups, and he's said that any AI regulation should strike a balance to ensure it doesn't stifle innovation.

Schmidt acknowledged that the development of AI posed dangers but said the biggest threats hadn't arrived yet. If and when those threats do materialize, Schmidt seems to think the world will have a way to deal with them.

"By the way, do you know what we're going to do when computers have free will?" Schmidt said at the conference. "We're going to unplug them."

"Let's see who unplugs who," Yoav Shoham, AI21 Labs' cofounder and co-CEO who spoke at the event with Schmidt, replied.

Yes, racing to unplug AI systems once they've gained free will, and catching that in time if it were to happen, isn't exactly a comforting prospect. But Schmidt said researchers had conducted detailed assessments of the dangers of AI and that "the answer is: You can see the danger coming."

It's worth noting that the former Google CEO has invested in efforts to combat AI risks. Schmidt partnered with OpenAI to launch a $10 million grant program supporting technical research with the company's Superalignment team, which was dedicated to managing risks associated with AI. Despite the team's disbandment last week, OpenAI plans to move forward with the grant program, a spokesperson told Business Insider.

Schmidt said the current form of AI wasn't that dangerous, with one exception: disinformation, which he said was "out of control" and a "real issue for democracies."

Disinformation has become an even bigger issue in the past couple of years as AI has emerged. Recent research on systems from Meta and OpenAI indicated that various models had learned to systematically induce "false beliefs in others to accomplish some outcome other than the truth."

Deepfakes have also become a larger problem, with AI-generated porn depicting public figures and impersonations of political leaders. People have reported AI-generated calls faking messages from President Joe Biden. In 2022, fraudsters pleaded guilty to charges of using targeted robocalls to dissuade voters from using mail-in ballots.

Schmidt said the real dangers of large language models were cyber- and biological attacks, which haven't arrived yet. But "they're coming in three to five years," he said.

Schmidt did not immediately respond to a request for comment.

