Europe’s quest to lead in AI regulation is in serious doubt


The European Union would like to do for AI what it did for online privacy: create gold-standard, globally influential regulation. But its planned AI Act is faltering in the home stretch.

EU leaders were hoping the bill might be finalized next week at a behind-closed-doors “trilogue” session involving negotiators from the bloc’s big institutions. However, the three biggest EU economies—Germany, France, and Italy—threw the whole thing into disarray last week by unexpectedly rejecting the push (by the European Parliament and by other EU countries) to have the AI Act regulate foundation models. Instead, the trio said they wanted foundation model providers such as OpenAI to self-regulate, with stricter rules applying to those who provide “high-risk” applications that tap into that underlying technology (so, for example, OpenAI’s popular ChatGPT app would be covered, but GPT-4, the underlying model that powers ChatGPT, would not).

In case you’re marveling at the spectacle of Germany and France being nice to U.S. tech firms for once, their motivations are likely a lot closer to home.

Germany’s government is an enthusiastic champion of local AI outfit Aleph Alpha, which has received funding from national titans such as SAP and Bosch. It also can’t hurt French AI sensation Mistral that its lobbying efforts are being led by cofounder Cédric O, a close ally of Emmanuel Macron who was, until last year, his digital economy minister. “I feel strongly that former officeholders should not engage in political activities related to their former portfolio,” sputtered Max Tegmark, the Swedish-American president of the pro-AI-safety Future of Life Institute, in an X argument with O (hey, I didn’t name them) yesterday.

European industry has also lobbied hard against the regulation of foundation models, while its creative sector has taken the opposing stance, largely because it wants foundation model providers to be transparent about the provenance of their training data.

Whatever lies behind the Germany-France-Italy U-turn, the European Parliament is not impressed. “We cannot accept [their] proposal on foundation models,” said Axel Voss, a key German member of the Parliament, in a post last week. “Also, even minimum standards for self-regulation would need to cover transparency, cybersecurity, and information obligations—which is exactly what we ask for in the AI Act. We cannot close our eyes to the risks.”

Again, this law was supposed to be wrapped up at a final trilogue session on Dec. 6. The European Commission, which made the initial proposal for the AI Act, has now come up with a compromise text that avoids reference to “foundation models” while obliging the makers of particularly powerful “general-purpose AI models” (i.e., foundation models) to at least document them and submit to official monitoring. That’s weak sauce compared with what Parliament wants, so the chances of the whole process slipping into next year are pretty high.

The problem is that 2024 will be a bad time for well-considered legislation: European Parliament elections take place in June, after which there will be a new Parliament and a new Commission. So there really isn’t much time to find a compromise here.

“A failure of the ‘AI Act’ project would probably be a bitter blow for everyone involved, as the EU has long seen itself as a global pioneer with its plans to regulate artificial intelligence,” wrote Benedikt Kohn of the law firm Taylor Wessing in a blog post today that noted how the U.S. has recently taken meaningful steps toward AI regulation.

More news below—though if you want to read more about evolving AI rules, a bunch of countries including the U.S., the U.K., and Germany just released a set of cybersecurity guidelines for companies building AI applications.

David Meyer


