If there is a fast take-off, or corporations start earning billions on large models in the next few years, we'll get locked into a trajectory toward extinction.

Now, I don’t think either will happen. I think the AI market will crash in the next few years. But my credence here is beside the point.

My point is that the period after an AI crash would be a super-high-leverage time to finally get robust, enforceable restrictions in place. Short of some 'warning shot', where a badly designed AI system causes or comes close to causing a catastrophe, I can't think of a better window of opportunity.

So even if you think a crash is highly unlikely, preparing for the possibility is worth doing.

As funding dries up and the media turns against AI corporations, their executives get distracted and lose political sway. For a short period, AI Safety and other concerned communities can pack a real punch.

That is when we can start enforcing all the laws already on the books (and put more on the books!) to prevent corporations from recklessly developing and releasing AI models.

Compute limits! Anti-data-scraping! Worker protections! Product liability!

It will be harder to put regulations in place against the risks of AGI specifically, because the public will have turned skeptical that AGI could ever be a thing.

But that’s okay. Enough people are sick of AI corporations and just want to restrict the heck out of them. Environmentalists, the creative industry, exploited workers and whistleblowers, experts fighting deepfakes and disinformation – each has a bone to pick.

There are plenty of robust restrictions we can build consensus around that will make it hard for multi-domain-processing models to get commercially developed.

I’m preparing for this moment.

If you are a funder, keep the possibility of an AI crash in mind. When the time comes, talk with me; I'm happy to share the information and funding leads I have.

Comments:

For more details on (the business side of) a potential AI crash, see recent articles on the blog Where's Your Ed At, whose author wrote the sorta-well-known post "The Man Who Killed Google Search".

For his AI-crash posts, start here and here, and click through to his other posts. Sadly, the author falls into the trap of "LLMs will never get to reasoning because they don't, like, know stuff, man", but luckily his core competencies (the business side, analyzing reporting) show why an AI crash could still very much happen.

EDIT: Due to the incoming administration's ties to tech investors, I no longer think an AI crash is so likely. Several signs IMHO point to "they're gonna go all-in on racing for AI, regardless of how 'needed' it actually is".

I'm also feeling less "optimistic" about an AI crash given:

  1. The election result involving a bunch of tech investors and execs pushing for influence through Trump's campaign (with a stated intention to deregulate tech).
  2. A military veteran saying that the military could be holding up the AI industry like "Atlas holding the globe", and an AI PhD saying that hyperscaled data centers, deep learning, etc., could be super useful for war.

I will revise my previous forecast back to 80%+ chance.