Big Tech is pouring hundreds of billions into AI. Should it also get to decide if the technology is ‘safe’?


Hello and welcome to Eye on AI!

Google, Microsoft, and Meta’s earnings reports last week put a spotlight on the hundreds of billions of dollars Big Tech will pour into AI by the end of 2024.

In this quarter alone, Google said its capital expenditures were $12 billion, nearly double the amount from a year earlier, driven by massive investment in AI infrastructure including servers and data centers. Meanwhile, Microsoft is reportedly increasing its spending faster than its revenue, yet it still doesn’t have enough data center infrastructure to deploy and run its AI models. And Meta’s investors did not react well to the news that the company would spend billions more than expected on AI—spending that CEO Mark Zuckerberg insisted would yield rewards further down the line.

Oh, and let’s not forget Amazon, which just invested billions in AI startup Anthropic and plans to spend $150 billion on AI data centers. Deep-pocketed startups like OpenAI and Anthropic, as well as Elon Musk’s companies, are also pouring money into the race (Musk recently posted on X that any company that isn’t spending $10 billion on AI this year won’t be able to compete).

But Big Tech’s outsized AI spending habits put an interesting spin on another piece of AI news from last week. The U.S. Department of Homeland Security announced the Artificial Intelligence Safety and Security Board, which will advise it on protecting critical infrastructure—from power grids and internet service to airports—from potential AI threats. The 22-member board, required by President Joe Biden’s AI executive order signed in October 2023, is heavy with CEOs from the same deep-pocketed companies and startups powering today’s AI boom: Google’s Sundar Pichai; Microsoft’s Satya Nadella; Nvidia’s Jensen Huang; OpenAI’s Sam Altman; Dario Amodei from Anthropic; and the CEOs of Amazon Web Services, AMD, IBM, and Adobe.

There were immediate criticisms of the board’s makeup, which notably does not include any significant open-source AI representation—that is, companies whose AI models are freely available (either fully or partly, depending on the license) so anyone can modify, customize, and redistribute them. Interestingly, those absent include two of the deepest-pocketed companies in the race: Meta, whose Llama family of models is released partly open (“We were snubbed,” posted Meta’s chief AI scientist Yann LeCun on X), and Musk’s xAI, whose Grok-1 model was released under an open-source license; Musk is also suing OpenAI over its move away from open-source models. Open-source advocates such as Hugging Face and Databricks are missing as well.

In an era in which the power to shape AI may ultimately be concentrated in the hands of the wealthiest tech companies, the question is: Who gets to decide whether AI systems are safe and secure, and which kinds qualify? Can (and should) these companies help regulate an industry in which they have clear vested interests?

Some, like AI researcher Timnit Gebru, say no: “Foxes guarding the hen house is an understatement,” she posted on X. But Alejandro Mayorkas, the Secretary of Homeland Security, told the Wall Street Journal that he was unconcerned that the board’s membership included many Big Tech execs. “They understand the mission of this board,” he said. “This is not a mission that is about business development.”

Of course, a board dedicated to deploying AI within America’s critical infrastructure does need input from the companies that will be deploying it—which obviously includes hyperscalers like Google and Microsoft, as well as AI model leaders like OpenAI. But the debate between those who believe that Big Tech wants to snuff out AI competition and those who think AI regulation should limit open-source AI is not new: It has been hot and heavy ever since OpenAI’s Altman testified before Congress in June 2023, urging AI regulation—which my colleague Jeremy Kahn wisely said would be “definitely good for OpenAI” while others called his lobbying “a masterclass in wooing policy makers.”

In November 2023, the Washington Post reported that a growing group of venture capitalists, CEOs of mid-sized software companies, and open-source proponents was pushing back. These critics argue that the biggest AI players simply want to lock in their advantages with rules and regulations like Biden’s executive order, which lays out a plan for government testing and approval guidelines for AI models.

And if the U.K. is any example, the group has valid concerns about Big Tech’s willingness to be transparent. Politico reported yesterday that although Altman and Musk agreed last year to share their companies’ AI models with the British government as part of Prime Minister Rishi Sunak’s new AI Safety Institute, they have so far failed to do so. For example, the report claimed that neither OpenAI nor Meta has given the U.K.’s AI Safety Institute access to do pre-release testing—showing the limits of voluntary commitments.

However, with Congressional AI regulation showing few signs of progress, many leaders consider any move towards tackling the “responsible development and deployment of AI” to be a step in the right direction. At the same time, open-source AI is not going anywhere—so it seems clear that its leaders and proponents will ultimately have to be part of the plan.

With that, here’s the AI news.

Sharon Goldman
sharon.goldman@fortune.com

