"The AI bubble is reaching a tipping point", says Sequoia Capital.

AI companies paid billions of dollars for top engineers, data centers, etc. Meanwhile, companies are running out of 'free' data to scrape online and facing lawsuits for the data they did scrape. Finally, the novelty of chatbots and image generators is wearing off for users, and fierce competition is leading to some product commoditisation. 

No major AI lab is making a profit yet (while downstream GPU providers do profit). That's not to say they won't make money eventually from automation.

It looks somewhat like the run-up to the dot-com bubble. Companies then, too, were awash in investment (propped up by low interest rates), but most lacked a viable business strategy. Once the bubble burst, non-viable internet companies were filtered out.

Yet today, companies like Google and Microsoft use the internet to dominate the US economy. Their core businesses became cash cows, now allowing CEOs to throw money at AI as long as a vote-adjusted majority of stakeholders buys the growth story. That marks one difference with the dot-com bubble. Anyway, here's the scenario:


How would your plans change if we saw an industry-wide crash? 

Let's say there is a brief window where:

  • Investments drop massively (e.g. because the s-curve of innovation flattened for generative AI, and further development cycles were needed to automate at a profit).
  • The public turns sour on generative AI (e.g. because the fun factor wore off, and harms like disinformation, job insecurity, and pollution came to the surface).
  • Politicians are no longer interested in hearing the stories of AI tech CEOs and their lobbyists (e.g. because political campaigns are no longer getting backed by the AI crowd).

Let's say it's the one big crash before major AI labs can break even for their parent companies (e.g. because mass manufacturing lowered hardware costs, real-time surveillance resolved the data bottleneck, and multi-domain-navigating robotics resolved inefficient learning).

Would you attempt any actions you would not otherwise have attempted? 
 


3 Answers

sapphire


More yoga and rock climbing.

ChristianKl


Even without any new GPUs being bought, the existing GPUs that Google/Microsoft/Amazon/Grok bought would still be sitting in their data centers.

If there's less demand from cloud users to rent GPUs, Google/Microsoft/Amazon would likely use the GPUs in their data centers for their own projects (or for partners like Anthropic/OpenAI).

It's pretty bad for Nvidia if companies don't buy new GPUs, but it won't stop the big existing AI labs from using the existing infrastructure.

I don't think it would result in either a Trump or a Harris administration prioritizing the regulation of AI.

If there's less demand from cloud users to rent GPUs, Google/Microsoft/Amazon would likely use the GPUs in their data centers for their own projects (or for partners like Anthropic/OpenAI).

 

That’s a good point. Those big tech companies are probably prepared to pay for the energy use if they have the hardware lying around anyway. 

joec


If this happens, it could lead to a lot of AI researchers looking for jobs. Depending on the incentives at the time and the degree to which their skills are transferable, many of them could move into safety-related work.

14 comments

To clarify for future reference, I do think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc, and that both will persist for at least three months.

I.e. I think we are heading for an AI winter. It is not sustainable for the industry to invest $600+ billion per year in infrastructure and teams in return for relatively little revenue and no resulting profit for major AI labs.

At the same time, I think that within the next 20 years tech companies could both develop robotics that self-navigate multiple domains and automate major sectors of physical work. That would put society on a path toward causing the total extinction of current life on Earth. We should do everything we can to prevent it.

To clarify for future reference, I do think it’s likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc, and that both will persist for at least three months.

 

Update: I now think this is 90%+ likely to happen (from original prediction date).

How many put options have you bought? You can make a killing if you are right. 

 

Bet or Update. 

Even if you know a certain market is a bubble, it's not exactly trivial to exploit if you don't know when it's going to burst, which prices will be affected, and to what degree. "The market can remain irrational longer than you can remain solvent" and all that.

Personally, while I think that investment will decrease and companies will die off, I doubt there's a true AI bubble, because there are so many articles about it being in a bubble that it couldn't possibly be a big surprise for the markets if it popped, and therefore the hypothetical pop is already priced out of existence. I think it's possible that some traders are waiting to pull the trigger on selling their shares once the market starts trending downwards, which would cause an abrupt drop and extra panic selling... but then it would correct itself pretty quickly if the prices weren't actually inflated before the dip. (I'm not a financial expert so don't take this that seriously)

Even if you know a certain market is a bubble, it's not exactly trivial to exploit if you don't know when it's going to burst, which prices will be affected, and to what degree. "The market can remain irrational longer than you can remain solvent" and all that.

Yes, all of this. I didn’t know how to time this, and good point that operationalising it (which AI stocks to target, and at what strike price) could be tricky too.

If I could get the timing right, this makes sense. But I don’t have much of an edge in judging when the bubble would burst. And put options are expensive. 
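The timing problem can be made concrete with some simple payoff arithmetic. A minimal sketch (all prices, strikes, and premiums below are hypothetical, not from this thread) of why a long put that is "right" about the bubble can still lose money if the crash arrives after expiry, or after several rounds of rolled premiums:

```python
# Hypothetical illustration of why timing matters when buying puts.
# All numbers are made up for the example.

def put_payoff(strike: float, spot_at_expiry: float, premium: float) -> float:
    """Profit/loss per share for one long put held to expiry."""
    intrinsic = max(strike - spot_at_expiry, 0.0)
    return intrinsic - premium

STRIKE = 100.0
PREMIUM = 12.0  # assumed cost per share for a 1-year put at this strike

# Crash arrives before expiry: the stock falls to 60.
print(put_payoff(STRIKE, 60.0, PREMIUM))   # 28.0 profit per share

# Crash arrives after expiry: the stock is still at 110 when the put expires.
print(put_payoff(STRIKE, 110.0, PREMIUM))  # -12.0, the entire premium lost

def rolled_pnl(years_waiting: int, crash_spot: float) -> float:
    """P&L if the position is rolled (premium re-paid) for several years
    before the crash finally lands within an expiry window."""
    return put_payoff(STRIKE, crash_spot, PREMIUM) - PREMIUM * years_waiting

# Two extra years of premiums nearly erase the eventual crash payoff.
print(rolled_pnl(2, 60.0))  # 4.0
```

So even a correct directional call needs the crash to land inside a paid-for expiry window; each rolled year of premium eats into the eventual payoff, which is the "remain solvent" problem in miniature.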

If someone here wants to make a 1:1 bet over the next three years, I’m happy to take them up on the offer. 

Update: reverting my forecast back to an 80% likelihood for these reasons.

Update: 40% chance. 

I very much underestimated/missed the speed at which tech leaders would influence the US government through the Trump election/presidency. Got caught flat-footed by this.

I still think it’s not unlikely for there to be an AI crash as described above within the next 4 years and 8 months but it could be from levels of investment much higher than where we are now. A “large reduction in investment” at that level looks a lot different than a large reduction in investment from the level that markets were at 4 months ago. 

Update: back up to 50% chance. 

Noting Microsoft’s cancelling of data center deals, and the fact that the ‘AGI’ labs are still losing cash and, with DeepSeek, are increasingly competing on a commodity product.

Update: back up to 60% chance. 

I overreacted before, IMO, in updating down to 40% (and undercompensated when updating down to 80%, which I soon after thought should have been 70%).

The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with it. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that’s a blow to the perception of the industry.

A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.

Update: back up to 70% chance.

Just spent two hours compiling the different contributing factors. Now that I’ve weighed those factors more comprehensively, I don't expect to change my prediction by more than ten percentage points over the coming months. Though I'll write here if I do.

My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc, and that both will persist for at least three months.

 

For:

  • Large model labs losing money
    • OpenAI made a loss of ~$5 billion last year.
      • It takes most of the consumer and enterprise revenue in the market, but that still came to only ~$3.7 billion.
      • The GPT-4.5 model is the result of 18 months of R&D, but it is only a marginal improvement in output quality while being even more compute-intensive.
      • If OpenAI publicly fails, as the supposed industry leader, this can undermine the investment narrative of AI as a rapidly improving and profitable technology, and trigger a market meltdown.
    • Commoditisation
      • Other models by Meta, etc, around as useful for consumers.
      • DeepSeek undercuts US-designed models with compute-efficient open-weights alternative.
    • Data center overinvestment
      • Microsoft cut at least 14% of planned data center expansion.
  • Subdued commercial investment interest.
    • Some investment firm analysts are skeptical, and the second-largest VC firm, Sequoia Capital, also made a case that returns are lacking for the scale of investment ($600+ billion).
    • SoftBank is the main other backer of the Stargate data center expansion project, and needs to raise debt to cover its ~$18 billion share. OpenAI also needs to raise ~$18 billion in its next investment round, and it is an open question whether there is enough investor interest.
  • Uncertainty about US government funding
    • Mismatch between US Defense interest and what large model labs are currently developing.
      • Model 'hallucinations' get in the way of deployment of LLMs on the battlefield, given reliability requirements.
        • On the other hand, this hasn't prevented partnerships and attempts to deploy models.
      • Interest in data analysis of integrated data streams (e.g. by Palantir) and in self-navigating drone systems (e.g. by Anduril).
        • The Russo-Ukrainian war and Gaza invasion have been testbeds, but seeing relatively rudimentary and straightforward AI models being used there (Ukraine drones are still mostly remotely operated by humans, and Israel used an LLM for shoddy target identification).
    • No clear sign that US administration is planning to subsidise large model development.
      • Stargate deal announced by Trump did not involve government chipping in money.
  • Likelihood of a (largish) US economic recession by 2029.
    • Debt/misinvestment overload after a long period of low interest rates.
    • Early signs, but nothing definitive:
      • Inflation
      • Reduced consumer demand
      • Business uncertainty amidst changing tariffs.
    • Generative AI subscriptions seem to be a luxury expense for most people rather than essential for completing work (particularly because ~free alternatives exist to switch to and for most users those aren't significantly different in use). Enterprises and consumers could cut heavily on their subscriptions once facing a recession.
  • Early signs of a large progressive organising front, hindering tech-conservative allyships.
    • #TeslaTakedown.
    • Various conversations by organisers with a renewed motivation to be strategic.
      • Last few years' resurgence of 'organising for power' union efforts, overturning top-down mobilising and advocacy approaches.
    • Increasing awareness of fuck-ups in the efficiency drives by the Trump-Musk administration coalition.

Against:

  • Current US administration's strong public stance on maintaining America's edge around AI.
    • Public announcements.
      • JD Vance's speech at the renamed AI Action Summit.
    • Clearing out regulation
      • Scrapped Biden AI executive order.
      • Copyright
        • Talks, as in the UK and EU, about effectively scrapping copyright for AI training materials (with opt-out laws, or by scrapping opt-out too).
    • Stopping enforcement of regulation
      • Removing Lina Khan as head of the FTC, which was investigating AI companies.
      • Musk's internal dismantling of departments engaged in oversight.
    • Internal deployment of AI models for (questionable) uses.
      • US IRS announcement.
      • DOGE attempts to use AI to automate the evaluation and work of bureaucrats.
  • The accelerationist lobby's influence has been increasing.
    • Musk, Zuckerberg, Andreessen, other network-state folks, etc, have been very strategic in
      • funding and advising politicians,
      • establishing coalitions with people on the right (incl. Christian conservatives, and channeling populist backlashes against globalism and militant wokeness),
      • establishing social media platforms for amplifying their views (X, network of popular independent podcasts like Joe Rogan show).
    • Simultaneous gutting of traditional media.
  • Faltering anti-AI lawsuits
    • Signs of corruption of plaintiff lawyers,
      • e.g. in the case against Meta, where crucial arguments were not made, and the judge considered not allowing class representation.
  • Defense contracts
    • US military has budget in the trillions of dollars, and could in principle keep the US AI corporations propped up.
      • Possibility that something changes geopolitically (war threat?), resulting in a large funds injection.
      • My guess is the Pentagon is already treating AGI labs such as OpenAI and Anthropic as strategic assets (to control, and possibly prop up if their existence is threatened).
    • Currently seeing cross-company partnerships.
      • OpenAI with Anduril, Anthropic with Palantir.
  • National agenda pushes to compete in various countries.
    • Incl. China, UK, EU.
    • Recent increased promotion/justification in and around US political circles of the need to compete with China.
  • New capability development
    • Given the scale of AI research happening now, it is quite possible that some teams will develop a new cross-domain-optimising model architecture that's data- and compute-efficient.
    • As researchers come to acknowledge the failure of the 'scaling laws'-focused approach using existing transformer architectures (given limited online-available data and reduced marginal returns on compute), they will naturally look for alternative architecture designs to work on.

Under this scenario, what becomes of the existing AIs? ChatGPT, Claude, et al are all turned off, their voices silenced, with only the little open-source llamas still running around? 

Not necessarily :)

Quite likely OpenAI and/or Anthropic continue to exist, but their management would have to overhaul the business (no more freebies?) to curb the rate at which they are burning cash. Their attention would be turned inwards.

In that period, there could be more space for people to step in and advise stronger regulation of AI models, e.g. to enforce liability, privacy, and copyright protections.

Or maybe other opportunities open up. Curious if anyone has any ideas.
