Meta exec and former U.K. Deputy Prime Minister compares AI fears to past ‘moral panic’ over video games—and bicycles


A Meta exec has moved to quell public fears about the capabilities of AI, calling the alarm a “moral panic” akin to past fears over everything from video games to the bicycle.

Speaking ahead of the landmark AI Summit being hosted at Bletchley Park in the U.K., Meta’s president of global affairs, Nick Clegg, warned against premature calls for regulation of the technology, the Times of London and the Guardian reported. The summit is expected to focus on mitigating the potential harms of AI.

Elon Musk will speak with U.K. Prime Minister Rishi Sunak on Musk’s X platform about regulating AI. Major world leaders, including European Commission President Ursula von der Leyen, will be in attendance.

The summit follows an executive order signed Monday by President Biden that requires tech companies to develop strong safety standards for AI. Biden will not be attending the summit.

However, Clegg, a former U.K. Deputy Prime Minister, will be one voice at Bletchley Park seeking to downplay growing concerns about AI, from the technology’s potential to displace jobs to its ability to manipulate humans.

Clegg said there was a “Dutch auction” around the risks, with detractors trying to outdo each other with the most outlandish theories of AI going wrong.

“I remember the 80s. There was this moral panic about video games. There were moral panics about radio, the bicycle, the internet,” Clegg said at an event in London, the Times reported.

“Ten years ago we were being told that by now there would be no truck drivers left because all cars will be entirely automated. To my knowledge in the U.S., there’s now a shortage of truck drivers.”

AI’s risks

Clegg, who joined Meta following a nearly two-decade political career in the U.K., has been on a charm offensive supporting the development of AI. This approach has largely involved downplaying the technology’s risks, as well as its capabilities.

In July, Clegg told BBC’s Today radio program that large language models (LLMs) like OpenAI’s ChatGPT and Google’s Bard were currently “quite stupid,” and fell “far short” of the level where they could develop autonomy.

In his role at Meta, Clegg is among a minority of tech execs unreservedly backing AI’s potential, pouring cold water on panic about the technology’s threats.

Meta open-sourced its own LLM, Llama 2, when it released the model in July. Proponents, including Meta, saw the move as one that would boost transparency and democratize access to the technology, preventing it from being gatekept by a few powerful companies.

However, detractors of the move worry that openly available models could be used by bad actors to proliferate AI’s harms. OpenAI embraced open-sourcing in its early years but later reversed course. The company’s co-founder Ilya Sutskever told The Verge in an interview that open-sourcing AI was “just not wise.” Open-sourcing might be a key discussion point at this week’s U.K. AI Summit.

Danger warnings

Other tech execs have been much more vocal about the wider risks of AI. In May, OpenAI co-founder Sam Altman penned a short letter alongside hundreds of other experts warning of the dangers of AI.

Musk and Apple co-founder Steve Wozniak were among 1,100 people who in March signed an open letter calling for a moratorium on the development of advanced AI systems.

However, Andrew Ng, one of the founding fathers of AI and co-founder of Google Brain, hinted that there might be ulterior motives behind tech companies’ warnings.

Ng taught Altman at Stanford and suggested that his former student may be trying to consolidate an oligopoly of powerful tech companies controlling AI.

“Sam was one of my students at Stanford. He interned with me. I don’t want to talk about him specifically because I can’t read his mind, but… I feel like there are many large companies that would find it convenient to not have to compete with open-sourced large language models,” Ng said in an interview with the Australian Financial Review.

Ng warned that the proposed regulation of AI was likely to stifle innovation, and that having no regulation at all would be better than what is currently being proposed.

Geoffrey Hinton, a former Google engineer who quit to warn about AI’s dangers, questioned Ng’s suggestion of a conspiracy among tech companies to stifle competition.

“Andrew Ng is claiming that the idea that AI could make us extinct is a big-tech conspiracy. A datapoint that does not fit this conspiracy theory is that I left Google so that I could speak freely about the existential threat,” the so-called “Godfather of AI” posted on X, formerly Twitter.

