The idea of an AI taking over its own company is obviously not a new one. For example, it's part of what happens in Joshua Clymer's "How AI Takeover Might Happen in 2 Years".

What's new (for me) is to take this very seriously as a model of the immediate future. I've made a list of the companies that I think are known contenders for producing superintelligence. My proposed model of the future is just that their AIs will assume more and more control of management and decision-making inside the companies that own them. 

In my thinking, this phase ends when you have an AI with a von Neumann level of intelligence. Once you have that kind of intelligence in silicon, the fully posthuman phase of AI evolution will have begun. Control will have completely escaped human hands. 

I also hypothesize that the current regime of reinforcement learning applied to chain of thought will be enough to get us there. This is a technical detail that is logically independent of the broader scenario, and I'm happy to hear arguments for or against. 

OK, so it's a model of the future, even a model of the present. How do we apply it, and what does it get us? Basically, just replace the current CEO with their AI in your thinking. It's not Elon Musk who is managing Tesla and SpaceX and DOGE while tweeting about politics and geopolitics, it's Grok. It's not Sam Altman who is making decisions for OpenAI and Helion and Worldcoin, it's GPT-4.5. And so on.

The funny thing is that this may already be half-true: these human leaders are surely already consulting their AI creations regularly on tactics and strategy.

(I'm in the middle of an electricity blackout and don't know when it ends, so I'll post this while I still have battery power, and flesh it out further when I can.)

2 comments:

This is an important case to think about.  I think it is understudied.  What separates current AIs from the CEO role?  And how long will it take?  I see three things:

  • Long-term thinking, agency, the ability to remember things, not going crazy in an hour or two.  It seems to me like this is all the same problem, in the sense that I think one innovation will solve all of them.  A lot of effort is focused on it.  I feel like it's been a known big problem since GPT-4 and Sydney/Bing, 2 1/2 years ago.  So, by the Lindy principle, it should be another 2 1/2 years until it is solved.
  • Persuasiveness.  I've known a few CEOs in my life; they were all more persuasive than average, and one was genuinely unearthly in her ability to convince people of things.  LLMs have been steadily increasing in persuasiveness, and are now par-human.  So I think scaling will take care of this.  Perhaps a year or two?
  • Experience.  I don't know how much of this can be inculcated through the training data, and how much of it requires actually managing people and products and thinking about what can go wrong.  Every CEO has to deal with individual subordinates, customers, and counterparties.  How much of the job is learning the ins and outs of those particular people?  Or does it suffice to have spent a thousand subjective years reading about a million people?  If the latter, we may already have this.

(Side point: As an engineer watching CEOs, I am amazed by their ability to take a few scattered hints spread across their very busy days and assemble them into a theory of what's going on and what to do about it.  They're willing to take what I would consider intolerably flimsy evidence and act on it.  When this doesn't work, it's called "jumping to conclusions"; when it does work, it's called "a genius move".  AIs should be good at this if you turn up their temperature.)
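(For concreteness, here is a minimal sketch of what "turning up the temperature" does at the sampling level. This is plain illustrative Python with made-up scores, not any particular model's API: higher temperature flattens the next-token distribution, so low-probability "leaps" get sampled more often.)

```python
import math
import random

def sample_with_temperature(logits, temperature=1.0):
    """Sample an index from raw model scores (logits), scaled by temperature.

    temperature < 1.0 sharpens the distribution (conservative choices);
    temperature > 1.0 flattens it, making unlikely "leaps" more probable.
    """
    scaled = [x / temperature for x in logits]
    # Softmax with max-subtraction for numerical stability.
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(range(len(probs)), weights=probs, k=1)[0]

# Toy example: three candidate "conclusions" with made-up scores.
logits = [2.0, 1.0, 0.1]
conservative = [sample_with_temperature(logits, temperature=0.5) for _ in range(1000)]
adventurous = [sample_with_temperature(logits, temperature=2.0) for _ in range(1000)]
print("low-temperature picks of the long-shot option:", conservative.count(2))   # rare
print("high-temperature picks of the long-shot option:", adventurous.count(2))   # noticeably more common
```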

Couldn't it also end if all the AI companies collapse under their own accumulated technical debt and goodwill lost to propaganda, and people stop wanting to use AI for stuff?
