In "Is AI a bubble?", Azeem sets out to answer that question. Spoiler: he says not.

In terms of defining a bubble, he says:

  • There is no academic consensus on the nature of a bubble
  • He defines it as "a 50% drawdown from the peak equity value that is sustained for at least 5 years"
  • Further on he says "Ultimately, it means a phase marked by a rapid escalation in prices and investment, where valuations drift materially away from the underlying prospects and realistic earnings power of the assets involved."
  • "Bubbles are impossible to diagnose in real time. Only in retrospect do we know whether exuberance was justified or delusional."
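The drawdown definition is mechanical enough to state in code. A minimal sketch, assuming yearly equity values and reading "sustained for at least 5 years" as the price staying below half the running peak for five consecutive observations (the function name and yearly granularity are my assumptions, not Azeem's):

```python
def is_bubble_by_drawdown(values, drawdown=0.5, sustain_years=5):
    """Azeem-style test: a >=50% fall from the running peak
    that persists for at least `sustain_years` observations."""
    peak = float("-inf")
    below = 0  # consecutive years spent below the drawdown threshold
    for v in values:
        peak = max(peak, v)
        if v <= peak * (1 - drawdown):
            below += 1
            if below >= sustain_years:
                return True
        else:
            below = 0
    return False

# Dot-com style path: peak of 100, then years stuck below 50.
print(is_bubble_by_drawdown([60, 100, 45, 40, 42, 44, 48]))  # True
# A sharp crash that recovers quickly does not qualify under this test.
print(is_bubble_by_drawdown([60, 100, 45, 80, 95, 110]))     # False
```

Note that by construction this test can only ever fire after the fact, which is what makes the definition retrospective.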

I take issue with the final point, that it is impossible to diagnose a bubble in real time. Previous bubbles were all successfully called by people living through them! Certainty in forecasting is of course impossible, but making predictions based on your own judgement is surely reasonable?

The market is a distributed machine for setting prices. We have always used machines like this (or "systems" to be more accurate) for solving complex distributed choice questions. Democracy is another.

When the market works (the "Efficient Market Hypothesis"), prices correctly represent the discounted value of future free cash flow. We have a collective delusion problem in mistaking the solution for the problem: "Something is worth whatever someone will pay for it" is a statement about the market being, definitionally, correct. This is a similar error to the common belief that democracy exists because somehow the "people are always right". Newsflash: they are not.
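The "discounted value of future free cash flow" here is just the standard DCF sum. A toy sketch, with the cash flows and the 10% discount rate invented purely for illustration:

```python
def discounted_value(cash_flows, rate):
    """Present value of a stream of future free cash flows:
    the sum of cf_t / (1 + rate)**t for t = 1, 2, ..."""
    return sum(cf / (1 + rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

# 100 a year for three years, discounted at 10% a year:
print(round(discounted_value([100, 100, 100], 0.10), 2))  # 248.69
```

The market's job, on the EMH view, is to make prices track this quantity; a bubble is the state where they stop doing so.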

Bubbles are a bug in the market. They represent a period where prices are totally detached from a reasonable assessment of future free cash flow. In retrospect they are obvious to everyone. The weird thing about them is that in advance they are typically also obvious to everyone: but a significant portion of market actors pretend otherwise.

This is because markets routinely demonstrate the features of a Keynesian Beauty Contest. Participants don't value an asset by their genuine assessment of its underlying value (the discounted value of future free cash flow); they value it by what they think other people will think it is worth, aka greater fool theory.

Often they are right! It is only when the music stops that we find out who the fools were and who made a fortune on the way up.

So how can you tell whether there is a bubble or not? Well, the market will not be able to tell you, because it's malfunctioning. This is literally how bubbles work.

Anyway, to determine whether there is a bubble or not, Azeem presents five "gauges":

  1. Economic Strain: investment as a share of GDP
  2. Industry Strain: ratio of capex to revenues
  3. Revenue Growth: revenue doubling time in years
  4. Valuation Heat: p/e ratio
  5. Funding Quality: "composite index capturing funding mix"

It's a decent look at some of the available data, but I don't think any of it helps.

Economic Strain, Industry Strain and Funding Quality are relevant to the size of a bubble, which is obviously important. Small asset bubbles happen all the time (look at meme stocks) but their significance is certainly determined by these sorts of factors. These gauges, though, do not speak to the essential "bubbleness" - whether the assets are indeed overvalued.

Of the others, future revenue growth absolutely will tell you if this is a bubble or not, but we have no future data. Historic revenue growth tells us nothing. Azeem nods to this with:

  "And this is likely a conservative forecast. Citi estimates that model makers’ revenue will grow 483% in 2025. OpenAI forecasts annualized growth of about 73% to 2030, while analysts like Morgan Stanley estimate this market could be as large as $1 trillion by 2028, equivalent to compound growth of ~122% a year over the period."

Of course mad forecasts are a common feature of bubbles, so this probably is more indicative of a bubble than otherwise. It is not a reassuring data point.
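Those forecast figures can at least be sanity-checked against each other. A sketch that backs out the starting market size the Morgan Stanley numbers imply (the three-year horizon, i.e. a 2025 base for "by 2028", is my reading, not stated in the quote):

```python
def implied_base(target, cagr, years):
    """Starting value that reaches `target` after `years`
    of compound growth at rate `cagr` (e.g. 1.22 = 122%/year)."""
    return target / (1 + cagr) ** years

# $1 trillion by 2028 at ~122%/year over 3 years implies a market of
# roughly $91bn today - every year of which must more than double.
print(round(implied_base(1e12, 1.22, 3) / 1e9, 1))  # 91.4
```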

p/e ratio only tells us whether the market is valuing current or future cashflow. If p/e is low then this definitely isn't a bubble (there is present cash flow), but a high p/e just tells us investors are betting on growth. This could be a bubble, but might not be.
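One way to make that concrete: at constant earnings, p/e is the number of years of current profit needed to pay back the price, so a high p/e is a bet that earnings will grow. A toy illustration with invented numbers:

```python
price, earnings_per_share = 300.0, 10.0
pe = price / earnings_per_share
# At 30x, today's earnings take thirty years to repay the price -
# the valuation only makes sense if earnings grow substantially.
print(pe)  # 30.0
```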

So is it a bubble?

A useful framework for analysis in my view is to consider where you stand on the following three concepts:

  1. Your theory of cognition
  2. Your theory of growth
  3. Your theory of profit

If language models are genuinely performing cognition (or a sufficient facsimile thereof) and there is a path to usage growth and AI companies can claim the profit from it, then the current capital outlays are justified.

I don't think any of these is at all self-evident and they all look very shaky.

Theory of cognition

The bull case for language models is that they are doing something close enough to cognition to genuinely replace human activity in a broad range of tasks. This isn't a ridiculous proposition - they routinely do things which previously have only been possible with cognition. If it looks like a duck, swims like a duck, and quacks like a duck, then it probably is a duck.

In this case I do not think it is, in fact, a duck. They certainly show the extent to which language can contribute to cognition (hello Sapir-Whorf once again), and cognition and culture are so intertwined the argument about where one ends and the other begins might never really be finalised. But I do not think the Transformer architecture itself does cognition. We are further along the language model sigmoid curve than many hope.

Theory of growth

Maybe one billion people are already using language models weekly. This is an incredible achievement for a new technology. To justify the build-out rate this will need to increase, clearly. The new data centres are there to serve new people, not just existing users who will use them more.

But what is the theory of growth? Why are these people not using language models already? They are trivially accessible and either free or very cheap. Everyone has heard of them. Clearly there has to be a theory that describes why another few billion would start using them.

It seems to be a bet that the chat interface is not the final form for language models - that application-level products will open up entirely new use cases.

I'm very sceptical about this one too. SaaS developers are building AI into everything, and the response so far is not good: adoption of AI features in software like Microsoft 365 has been poor, and my personal experience, and that of those I've spoken to, is that AI features added to common software packages are poorly received at best.

There are definitely some narrower use cases where the application data is structured in a way which doesn't translate easily into text or images (Miro is a decent example), but I am very sceptical that SaaS developers are going to find another two or three billion users for AI.

Theory of profit

Finally, can the big AI companies claim the profit from growth?

The investor bet seems to be that application-level features will produce moats and (ideally) network effects. This means the AI labs can claim and hold users forever. OpenAI recently launched ChatGPT Pulse, which is an attempt of this sort.

This is even more questionable. Models are converging on capability, including open weights models such as Qwen and Llama. Marginal costs are high. This seems like an obvious market where benefits accrue to consumers eventually, and everyone else gets a commodity share.

To conclude

So if we're far along the sigmoid, capability is converging, the market will grow but possibly not exponentially, and there's little profit to be claimed, then this really does look like a bubble.

This has led to some pretty wild claims:

They rewrote this story after the tweet, since it was such obvious nonsense, but just to spell it out. To make the economy 10% bigger after 5 years would require about 2% growth per year. The OBR is forecasting 1% growth, so AI would need to triple the rate of productivity growth, immediately. Hopefully it is obvious that this is totally implausible.
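The arithmetic is easy to check. The annual rate that compounds to a 10% larger economy after five years (taking the 10% as growth from today's level, the most charitable reading of the claim):

```python
# Annual growth rate that compounds to +10% over 5 years: (1.10)^(1/5) - 1
required = 1.10 ** (1 / 5) - 1
print(f"{required:.2%}")  # 1.92%
```

Roughly 2% a year, against a 1% baseline forecast - and that extra percentage point has to appear immediately and persist for the whole period.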

Looks pretty bubbly to me.
