The legal premise of Elon Musk’s OpenAI lawsuit is weak. But the questions it raises are not


Hello and welcome to Eye on AI.

The most interesting AI news of the past several days was undoubtedly Elon Musk’s lawsuit against OpenAI. Musk contends that OpenAI—and specifically Sam Altman and Greg Brockman, who cofounded the organization with Musk—violated its founding agreement and charter.

A central thrust of Musk's suit seems self-evident. OpenAI was founded as a nonprofit lab that pledged to keep superpowerful “artificial general intelligence”—defined as AI software that could perform most economically valuable cognitive tasks as well as or better than a person—out of the hands of corporate control. Any AGI it created was supposed to be “for the good of humanity.” OpenAI initially committed to publishing all of its research and open-sourcing all of its AI models. And for a while, it did exactly that.

Fast forward to today: OpenAI operates a for-profit arm valued at $80 billion in a recent funding round and is largely in the orbit of and highly dependent on a single giant tech corporation, Microsoft. It no longer publishes critical details of its most powerful models or gives them away for free. Instead, these models are available only to paying customers through a closed API. That OpenAI no longer resembles anything like the organization it was set up to be seems indisputable.

But whether Musk can successfully turn what amounts to a charge of hypocrisy into a winning court case is an entirely different matter. Remember, frustrated with what he saw as OpenAI’s inability to catch up with Google’s DeepMind, Musk had proposed in 2018 that he bring OpenAI under his own direct personal control. It was the refusal of the rest of OpenAI’s nonprofit board and staff to go along with this plan that led Musk to resign from OpenAI’s board that year and renege on a pledge to deliver $1 billion in funding to the nonprofit lab. This withdrawal of support prompted Altman to seek commercial backers for the lab.

Most of the changes Musk objects to were instituted by Altman and approved by OpenAI’s board after Musk departed. Now Musk is trying to claim that a loose set of discussions he had with Altman and Brockman when first setting up OpenAI—many of which are apparently not fully documented—constituted a “Founding Agreement” that should have taken precedence over the later decisions of OpenAI’s leadership and board.

As many legal scholars have pointed out, this is a highly unusual case, and likely a weak one. It’s not at all clear that the court will agree with Musk’s argument that the discussions he had with Altman and Brockman constituted a binding contract. Andrew Stoltmann, a securities lawyer and adjunct professor at Northwestern University’s law school, told Bloomberg that these kinds of discussions are called “illusory promises” and generally are not legally enforceable. Noah Feldman, a Harvard University legal scholar who has advised OpenAI rival Anthropic, told the New York Times the supposed contract contains “a hole you can drive a truck through” and that much of the language in OpenAI’s charter is also vague enough that OpenAI can easily argue it is adhering to it. What’s more, if there is no contract, then Musk probably doesn’t have standing to sue and his case could well be thrown out of court. Even if OpenAI’s board has violated its own charter, in many states only the state attorney general can bring a legal action over such a matter.

If this is such a weak case, why bring it at all? Well, Musk loves to pick a fight and has a history of getting entangled in contentious lawsuits, and sometimes prevailing. As one of the world’s richest people, he can afford to roll the legal dice. And the billionaire has made no secret that he feels betrayed by Altman. Musk is also running a rival AI startup, xAI, which has its own competitor to ChatGPT, the chatbot Grok. So if his lawsuit can take out a rival, or at least distract it and cost it some cash it might otherwise spend to out-compete him, why not?

It's also likely he's hoping that the case won’t get thrown out of court quickly and will proceed to discovery. That process could surface all kinds of emails and text messages from Altman and Brockman that would likely enter the public record and could prove embarrassing to the OpenAI execs. That hope, in fact, may be the entire point of the lawsuit.

Another interesting aspect of Musk’s suit is his claim that OpenAI’s GPT-4 model is itself AGI. Under the terms of OpenAI’s charter, OpenAI’s nonprofit board has the sole discretion to determine when AGI has been achieved. But Musk claims the board has failed in its duty to do so. Also, any system constituting AGI is not supposed to be commercialized by Microsoft under the terms of OpenAI’s strategic partnership with the tech giant. But Musk contends that OpenAI has given Microsoft AGI by sharing GPT-4 with it.

Few people agree with Musk’s contention that GPT-4 is AGI. I certainly don’t. But the suit does perhaps helpfully focus attention on how fraught and ill-defined a concept AGI is. Scientists can’t even agree on what human intelligence is, so defining artificial general intelligence is tricky. In a recent paper, DeepMind researchers tried to present AGI not as a single thing but as a spectrum of capabilities, and argued that there might be “levels of AGI” depending on how good an AI system is at each of these different capabilities.

That said, OpenAI’s charter had one very specific definition: software that can do most economically valuable cognitive tasks as well as people. Yet even this raises more questions than it answers: What counts as most? Which people are we talking about: an average person, or an expert in a particular field? And by what benchmark do we judge whether the AI can match humans at a particular cognitive task? Right now, GPT-4 seems to score better than most human test takers on a number of professional exams and benchmarks, such as the bar exam, medical licensing exams, and tough software coding challenges.

But while these tests are designed to assess professional knowledge, it’s pretty clear they are imperfect proxies. A lawyer can pass the bar and still not be a great lawyer. GPT-4 can pass the medical licensing exam, and yet a doctor who scored less well might be much better at diagnosing and treating patients. This brings us to the “economically valuable” part of OpenAI’s AGI definition. Right now, it’s clear that GPT-4 can assist a lot of knowledge workers with many tasks. But it cannot really do the entire job of most workers.

At the same time, it’s also evident that even today’s most powerful AI software performs far worse than the average human at many critical tasks. One of the most important of these is the ability to tell fact from fiction. The visual understanding of the most powerful AI models is also still much weaker than that of most humans. Today’s AI systems don’t seem to have a great grip on physics, despite training on vast video libraries. They struggle to sort causation from correlation and to understand compositionality—roughly, how parts combine to form a whole and how those parts give the whole its particular meaning. Children tend to grasp most of these things much better than today’s most advanced AI.

If the only outcome of Musk’s lawsuit is to force us towards a better definition of intelligence and AGI, and better benchmarks for assessing both, it may well have been worth it.

There’s plenty more AI news to discuss below.

But first, do you want to learn more about how your company can harness the power of generative AI to supercharge your workforce and your bottom line, while also navigating regulation and avoiding the technology’s many pitfalls? Of course you do! So come and join me and a fantastic lineup of thinkers and doers from the worlds of technology, big business, government, entertainment, and more at Fortune’s first-ever Brainstorm AI conference in London on April 15 and 16. Our confirmed speakers include investor and entrepreneur Ian Hogarth, who also chairs the U.K. AI Safety Institute; Jaime Teevan, the chief scientist and technical fellow at Microsoft; Zoubin Ghahramani, vice president of research at Google DeepMind; Sachin Dev Duggal, founder of Builder.ai; Paula Goldman, the chief ethical and humane use officer at Salesforce; Balbir Bakshi, the chief risk officer at the London Stock Exchange; Connor Leahy, the founder and CEO of Conjecture; and many more. You can register your interest in attending here: brainstormAI@fortune.com (and if you mention you are an Eye on AI reader, you may qualify for a discount).

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Correction: A news item in last week's edition (Feb. 27) misidentified Nat Friedman as GitHub's CEO. He is the former CEO.

