AI will change the world. But that doesn’t mean investors will get rich in the process

Nathan Laine/Bloomberg via Getty Images

Hello and welcome to Eye on AI.

I’m still buzzing from last week’s electrifying Fortune Brainstorm AI conference in London. So many great insights and discussions. Thank you to the readers who attended. And if you weren’t able to make it, you can catch up here.

One of the key themes that emerged at the conference is that while many businesses have been experimenting with generative AI applications, using all kinds of models and methods, relatively few have put generative AI into full production at scale in a business-critical application. Concerns about reliability and, even more prominently, cost and return on investment, continue to hold back full deployment.

There definitely seem to be some signs that the hype around AI is starting to deflate, and that we are perhaps sliding into the “trough of disillusionment” phase of this technology’s development cycle. Last week’s declines in the share prices of several prominent tech companies may be evidence of this. And later this week, all eyes will be on Microsoft and Alphabet’s quarterly earnings reports.

I, for one, remain convinced that this technology is real and will have a massive impact on how we work—and live—over the coming years. But that is not the same thing as saying that the companies at the forefront of the AI boom, or their investors, will be successful financially.

Last week, Air Street Capital, the London venture capital firm run by Nathan Benaich, who has emerged as one of the savviest early-stage investors in AI, published a provocative blog post arguing that the market dynamics for those building AI foundation models, at least, are looking particularly unpalatable. It’s a sharply reasoned analysis and well worth reading for anyone interested in whether there is a sustainable business in selling foundational AI technology. Benaich and his colleague Alex Chalmers write that “the economics of large AI models don’t currently work.”

The problem? The cost of both training and inference (actually running large AI models on GPUs) is too steep. This means that the operating margins for those offering access to these models through an API (OpenAI, Cohere, Anthropic) are lower than for other software firms, and overall profit margins are likely negative when capital expenditures are considered. (Google and Microsoft are also mostly in this camp, but for them, the models are either underpinning features in other software or serving as loss leaders for cloud computing services—so the business model is slightly different.) Making matters worse, open-source models that are being offered for free are gaining ground on the proprietary models. “We’re slowly entering into a capex intensive arms race to produce progressively bigger models with ever smaller relative performance advantages,” Benaich and Chalmers write.

They also write that the plethora of LLMs with relatively close capabilities is turning AI into a commodity business, one in which AI startups engage in “a competition to raise as much money as possible from deep-pocketed big tech companies and investors to, in turn, incinerate it in pursuit of market and mind share.”

The duo draws an analogy between the companies building large foundation models and another industry that is highly capital intensive, where products are not highly differentiated, and that has also engaged in periodic price wars, destroying value for investors: airlines. It’s an interesting analogy in that the technology of global air travel was very real and definitively reshaped how we work and live. Air travel helped make our modern world. But that didn’t mean anyone could make any money at it. (Another good example is the buildout of the railways in the 19th century; again the tech transformed economies and nations but left a trail of bankrupt railroad companies in its wake.)

Benaich and Chalmers say that in such value-destroying industries, there is usually consolidation but that regulators may not allow that to happen with AI startups given that the most likely agents of consolidation are Big Tech companies that are already under intense antitrust scrutiny.

So where does this leave the AI industry? Well, they argue that much smaller and less expensive models, used with fine-tuning and much longer context windows—meaning the model can ingest much longer prompts, including specific documents to analyze or summarize—will turn out to be sufficient for what many companies need to power AI applications. These small models can be run on less capable, older-generation GPUs. They might even be served up on devices (on laptops or desktops, perhaps even mobile phones), which means that Nvidia GPUs won’t be in such high demand. (If one wanted another data point for this part of their argument, one need only look at Microsoft’s Phi 3 announcement today, which I cover further down in the Brain Food section of the newsletter.)

Benaich and Chalmers suggest the market will bifurcate: A few large companies that need the added capabilities of the largest foundation models will be willing and able to pay for them. That may allow a couple of proprietary model purveyors as well as the hyperscalers such as Alphabet, Microsoft, and Amazon, to still earn a modest profit. (But it may be a much smaller business than these cloud giants hope.) The two investors also imply early on in their blog that companies building AI applications that are not general purpose but instead highly tailored to a particular industry and the business needs of that sector will likely turn out to be better investments.

I am not entirely sure things will work out exactly as the Air Street duo lay out. For one thing, a few extra points of accuracy on a benchmark of a model’s reasoning ability, which may not seem like much, can in many cases make a huge difference to what a business can do with that model. It may be that the smaller models look good and are cheap but don’t cross the threshold of usefulness and reliability in deployment that would let companies avoid paying for the larger proprietary models. But their bearish case for AI investors (and for Nvidia shareholders) is certainly worth considering.

What do you think? And what’s the best analogy to the current AI industry dynamics? Is it airlines or railroads or something else entirely?

With that, here’s the AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

The news, research, and Fortune on AI sections of today's newsletter were curated by Fortune's Sharon Goldman.

This story was originally featured on Fortune.com