The idea of an AI taking over its own company is obviously not a new one. For example, it's part of what happens in Joshua Clymer's "How AI Takeover Might Happen in 2 Years".
What's new (for me) is to take this very seriously as a model of the immediate future. I've made a list of the companies that I think are known contenders for producing superintelligence. My proposed model of the future is just that their AIs will assume more and more control of management and decision-making inside the companies that own them.
In my thinking, this phase ends when you have an AI with a von Neumann level of intelligence. Once you have that kind of intelligence in silicon, the fully posthuman phase of AI evolution will have begun. Control will have completely escaped human hands.
I also hypothesize that the current regime of reinforcement learning applied to chain of thought will be enough to get us there. This is a technical detail that is logically independent of the broader scenario, and I'm happy to hear arguments for or against.
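To make the hypothesis concrete, here is a minimal sketch of what "reinforcement learning applied to chain of thought" means mechanically: sample a reasoning trace, grade only the final answer, and reinforce the entire trace with the resulting reward. Every function here (`sample_chain_of_thought`, `grade`, `reinforce_update`) is a hypothetical stub for illustration, not any lab's actual pipeline:

```python
# Toy REINFORCE-style loop over chain-of-thought traces (hypothetical stubs).
import random

def sample_chain_of_thought(prompt: str) -> list[str]:
    """Stub: stands in for autoregressively sampling a reasoning trace."""
    good = ["consider the problem", "6 * 7 = 42", "answer: 42"]
    bad = ["consider the problem", "guess", "answer: 41"]
    return random.choice([good, bad])

def grade(final_answer: str) -> float:
    """Stub verifier: reward only the final answer, not the reasoning steps."""
    return 1.0 if final_answer == "answer: 42" else 0.0

def reinforce_update(trace: list[str], reward: float, baseline: float) -> None:
    """Stub: a real implementation would scale the log-likelihood gradient
    of every token in the trace by (reward - baseline)."""
    advantage = reward - baseline
    print(f"advantage {advantage:+.2f} applied to {len(trace)}-step trace")

baseline = 0.5  # running average reward; held fixed here for simplicity
for episode in range(3):
    trace = sample_chain_of_thought("What is 6 * 7?")
    reward = grade(trace[-1])
    reinforce_update(trace, reward, baseline)
```

The key structural point is that the reward is sparse and outcome-based, while the credit flows back through the whole reasoning chain; whether scaling this loop gets to von Neumann-level intelligence is exactly the open question.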
OK, so it's a model of the future, even a model of the present - how do we apply it, and what does it get us? Basically, just replace the current CEO with their company's AI in your thinking. It's not Elon Musk who is managing Tesla and SpaceX and DOGE while tweeting about politics and geopolitics, it's Grok. It's not Sam Altman who is making decisions for OpenAI and Helion and Worldcoin, it's GPT-4.5. And so on.
The funny thing is that this may already be half-true, in that these human leaders are surely already consulting their AI creations regularly on tactics and strategy.
(I'm in the middle of an electricity blackout and don't know when it will end, so I'll post this while I still have battery power, and flesh it out further when I can.)
This is an important case to think about, and I think it is understudied. What separates current AIs from the CEO role, and how long will it take to close that gap? I see three things:
(Side point: As an engineer watching CEOs, I am amazed by their ability to take a few scattered hints spread across their very busy days and assemble them into a theory of what's going on and what to do about it. They're willing to take what I would consider intolerably flimsy evidence and act on it. When this doesn't work, it's called "jumping to conclusions"; when it does work, it's called "a genius move". AIs should be good at this if you turn up their temperature.)
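To unpack that last claim: sampling temperature divides the model's logits before the softmax, so raising it spreads probability mass onto long-shot hypotheses - the statistical analogue of acting on flimsy evidence. A minimal illustration with made-up logits (this is standard softmax math, not any particular model's API):

```python
# How temperature reshapes a sampling distribution over candidate hypotheses.
import math

def softmax_with_temperature(logits: list[float], temperature: float) -> list[float]:
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [4.0, 2.0, 1.0]  # made-up scores for three candidate theories
for t in (0.5, 1.0, 2.0):
    probs = softmax_with_temperature(logits, t)
    print(f"T={t}: " + ", ".join(f"{p:.2f}" for p in probs))
```

At T=0.5 the top theory gets about 98% of the probability; at T=2.0 the two long shots together get roughly a third. Turning up the temperature is, in this narrow sense, a dial for "willingness to jump to conclusions".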