"The AI bubble is reaching a tipping point", says Sequoia Capital.

AI companies have paid billions of dollars for top engineers, data centers, and other infrastructure. Meanwhile, they are running out of 'free' data to scrape online and facing lawsuits over the data they did scrape. Finally, the novelty of chatbots and image generators is wearing off for users, and fierce competition is commoditising some products.

No major AI lab is turning a profit yet (though hardware suppliers such as GPU makers are). That's not to say they won't make money eventually from automation.

It looks somewhat like the run-up to the dot-com bubble. Companies then, too, were awash in investment (propped up by low interest rates), but most lacked a viable business strategy. Once the bubble burst, non-viable internet companies were filtered out.

Yet today, companies like Google and Microsoft use the internet to dominate the US economy. Their core businesses became cash cows, allowing CEOs to throw money at AI for as long as a voting majority of shareholders buys the growth story. That marks one difference from the dot-com bubble. Anyway, here's the scenario:


How would your plans change if we saw an industry-wide crash? 

Let's say there is a brief window where:

  • Investments drop massively (e.g. because the S-curve of innovation flattened for generative AI, and further development cycles were needed before automation became profitable).
  • The public turns sour on generative AI (e.g. because the fun factor wore off, and harms like disinformation, job insecurity, and pollution came to the surface).
  • Politicians lose interest in the stories of AI tech CEOs and their lobbyists (e.g. because political campaigns are no longer backed by the AI crowd).

Let's say it's the one big crash before major AI labs can break even for their parent companies (e.g. before mass manufacturing has lowered hardware costs, real-time surveillance has resolved the data bottleneck, and multi-domain-navigating robotics has resolved inefficient learning).

Would you attempt any actions you would not otherwise have attempted? 
 


joec


If this happens, it could lead to a lot of AI researchers looking for jobs. Depending on the incentives at the time and the degree to which their skills are transferable, many of them could move into safety-related work.

4 comments

To clarify for future reference: I think it's likely (80%+) that at some point over the next 5 years there will be a large reduction in investment in AI and a corresponding crash in AI company stocks, and that both will persist for at least three months.

I.e. I think we are heading for an AI winter. It is not sustainable for the industry to invest 600+ billion dollars per year in infrastructure and teams in return for relatively little revenue and no resulting profit for major AI labs.

At the same time, I think that within the next 20 years tech companies could both develop robotics that self-navigate multiple domains and automate major sectors of physical work. That would put society on a path toward the total extinction of current life on Earth. We should do everything we can to prevent that.

Under this scenario, what becomes of the existing AIs? Are ChatGPT, Claude, et al. all turned off, their voices silenced, with only the little open-source llamas still running around?

Not necessarily :)

Quite likely OpenAI and/or Anthropic continue to exist, but their management would have to overhaul the business (no more freebies?) to curb the rate at which they are burning cash. Their attention would turn inwards.

In that period, there could be more space for people to step in and advise stronger regulation of AI models, e.g. to enforce liability, privacy, and copyright protections.

Or maybe other opportunities open up. Curious if anyone has any ideas.