OpenAI's Sam Altman says an international agency should monitor the 'most powerful' AI to ensure 'reasonable safety'

[Photo: Sam Altman. Caption: Sam Altman thinks an international agency can help regulate AI. Andrew Caballero-Reynolds/Getty Images]
  • OpenAI CEO Sam Altman wants an international agency to regulate artificial intelligence.

  • Altman said an agency approach would be better than inflexible laws given AI's rapid evolution.

  • He compared AI to airplanes, emphasizing the need for a safety testing framework.

OpenAI CEO Sam Altman says he's keen on regulating AI with an international agency.

"I think there will come a time in the not-so-distant future, like we're not talking decades and decades from now, where frontier AI systems are capable of causing significant global harm," Altman said on the All-In podcast on Friday.

He believes those systems will have "negative impact way beyond the realm of one country" and wants to see them regulated by "an international agency looking at the most powerful systems and ensuring reasonable safety testing."

In Altman's view, landing on the appropriate level of oversight will be a balancing act.

"I'd be super nervous about regulatory overreach here. I think we get this wrong by doing way too much or a little too much. I think we can get this wrong by doing not enough," he said.

Legislation to regulate the fast-changing technology is already underway.

In March, the EU approved the Artificial Intelligence Act, which will categorize AI risk and ban unacceptable use cases. President Joe Biden also signed an executive order last year calling for greater transparency from the world's biggest AI models. And this year the state of California has been leading the charge on regulating AI as lawmakers consider more than 30 bills, according to Bloomberg.

But Altman argued that an international agency would offer more flexibility than national legislation — and that's important given how quickly AI evolves.

"The reason I've pushed for an agency-based approach for kind of like the big-picture stuff and not like a write-it-in-law is in 12 months it will all be written wrong," he said. He thinks that lawmakers, even if they're "true world experts," probably can't write policies that will appropriately regulate events 12 to 24 months from now.

In simple terms, Altman thinks AI should be regulated like an airplane.

"When like significant loss of human life is a serious possibility, like airplanes, or any number of other examples where I think we're happy to have some sort of testing framework," he said. "I don't think about an airplane when I get on it. I just assume it's going to be safe."

Read the original article on Business Insider
