I agree that finances are important to consider. I've written my thoughts on them here; I disagree with you in a few places.
(1) Given Altman's successful ouster of the OpenAI board, his investors currently don't have much will to force him to stop racing; and at the current pace of spending growth, they don't have much time to develop that will before OpenAI runs out of money.
(2) It's not clear what would boost revenue that they're not already doing; the main lever for improving profits would be slashing R&D spending. But much of that spending goes to research compute, and since OpenAI intends to own its own datacenters, it's not clear it could meaningfully change course quickly.
(3) OpenAI is at a massive structural disadvantage to the rest of the frontier companies: they send 20% of their revenue to Microsoft, and they're taking on tens of billions in debt, which will need to be repaid with interest. So it's unlikely that they'll ever be profitable.
I read this with interest, but without much ability to think for myself about what comes next. I'm aware that enormous amounts of money circulate in the modern world, but that world is out of my reach; my idea of how to raise money would be to open a Patreon account.
Nonetheless, what do we have to work with? We have the AI 2027 scenario. We have the trade war, which may yet evolve into a division of the world into currency zones. Vladimir Nesov is keeping track of how much compute is needed to keep scaling, how much is available, and how much it costs. Remmelt has been telling us to prepare for an AI crash, even before the tariffs. We should also remember that China is a player. It would be wacky if the American ability to keep scaling collapsed so completely that China was the only remaining player with the ability to reach superintelligence, or if both countries were hobbled by economic crisis; but neither seems very likely. What seems more likely is that the risk of losing the AI race would be enough for both countries to deploy state financial resources to keep going, if private enterprise no longer had the means.
Your idea is that AI companies have the valuations they do, not because investors want to create world-transforming superintelligence per se, but because investors think these companies have the potential to become profitable tech giants like Google, Facebook, or Microsoft; and if money gets tight, investors will demand that they start turning a profit, which means they'll have to focus on making products rather than on scaling and pure research, which will slow down the timeline to superintelligence.
It makes sense as a scenario. But I find it interesting that, in the opinion of many, one of the tech giants recently got to the front of the race: Google, with Gemini 2.5. Or at least it now shares the lead, since OpenAI released o3, which seems to have roughly similar capabilities. This undermines the dichotomy between frontier AI companies forging ahead on VC money and tech giants offering products and services that actually turn a profit, since it reminds us that frontier AI work can prosper even inside the tech giants.
If there is a scaling winter brought on by a bear market, it may be that the model of frontier AI companies living on VC money dies, and that frontier AI survives only within profitable tech giants, or with state backing. In a comment to Remmelt I suggested that Google and xAI have enough money to survive on their own terms, and OpenAI and Anthropic have potential big brothers in the form of Microsoft and Amazon respectively. China has a similar division between big old Internet companies and "AI 2.0" startups that they invest in, so an analogous shakeup there is conceivable.
It occurs to me that if there is an AI slowdown because all the frontier AI startups have to submit themselves to profit-making Internet giants, it will also give the advocates of an AI pause a moment to reenter the scene and push for e.g. an American-Chinese agreement similar to the slow timeline in "AI 2027". American and Chinese agreement on anything might seem far away now, but things can change quickly, especially if the dust settles from the trade war and both countries have arrived at a new economic strategy and equilibrium.
I still feel like such changes don't affect the trajectory much; no matter what the economic and political circumstances, a world that had o3-level AI in it is only a few more steps away from superintelligence, it seems to me (and getting there by further brute scaling is just the dumbest way to do it, I'm sure there are enormous untapped latent capabilities within the hardware and software that we already have). But it's good to be able to think about the nuances of the situation, so thanks for your contribution.
This is in part a response to AI 2027, which I think is rather vague and somewhat unrealistic in projecting the financial aspects of the AI future. As a jumping-off point, AI 2027 projects that a year from now, OpenAI's valuation will have roughly quadrupled to $1 trillion, and data center investment will have reached half a trillion dollars a year (about 10% of total US private investment), despite a piddling $26 billion in revenue.
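To see how aggressive that projection is, here is a quick back-of-the-envelope check. The $1 trillion valuation and $26 billion revenue figures are the AI 2027 projections cited above; the comparison multiple for mature tech giants is my own rough assumption, not from the source.

```python
# Back-of-the-envelope: the valuation-to-revenue multiple implied by
# the AI 2027 projection, compared with typical large-cap tech.

projected_valuation_bn = 1000  # $1 trillion valuation (AI 2027 projection)
projected_revenue_bn = 26      # $26 billion annual revenue (AI 2027 projection)

multiple = projected_valuation_bn / projected_revenue_bn
print(f"Implied revenue multiple: ~{multiple:.0f}x")

# Rough assumption for comparison: mature, profitable tech giants tend
# to trade somewhere around 5-10x revenue, so this projection prices
# OpenAI several times richer than an ordinary tech-giant outcome.
```

A multiple this far above ordinary tech valuations only makes sense if investors are pricing in a large speculative payoff beyond normal product revenue.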
This is a possible future — perhaps, so long as there is technological progress, investors will continue pouring money into AI in pursuit of a speculative payday when AGI is reached. But I think Wall Street is currently (implicitly) forecasting a different future, and that this will have an important influence on events.
Wall Street valuations right now suggest a future in which AI is a valuable technology, but no more “transformative” than the search engine.
Exhibit A here is everyone’s favorite AI trade — Nvidia. Nvidia has a $2.5 trillion market cap, which is heavily tied to AI-driven demand for the GPUs it designs but does not manufacture. Nvidia’s high valuation represents two bets — that AI will keep driving that demand but also that AI will not itself take over chip design. In a world with AGI (or certainly ASI), Nvidia is worthless. Even super-intelligent AI may well need TSMC or someone else to fabricate its chips, but it clearly does not need the engineers at Nvidia to design them.
The relative valuations of AI companies also likely reflect this dynamic. OpenAI trades at a substantial premium — about four times the valuation of Anthropic. Perhaps this is because OpenAI is four times more likely than Anthropic to deliver AGI, but there is no obvious reason to make that bet. This is especially true if the route to AGI runs through first developing a superhuman coder, given that Claude seems to lead the AI coding pack.
Instead, the easiest way to understand OpenAI's relative valuation is its obvious strengths in branding, name recognition, and consumer adoption. OpenAI has by far the strongest brand in the space, to the point that the ChatGPT brand has become genericized. If the future path to profitability lies in providing a product to consumers, whether on a subscription or an advertising model, then OpenAI has a very clear edge that justifies the valuation.
A simple theory explaining the valuations of OpenAI ($260 billion), Anthropic ($61.5 billion), and Ilya Sutskever's Safe Superintelligence ($32 billion), which promises that its first and only product will be superintelligence, is that all three companies have a similar shot at achieving transformative AGI/ASI, which would presumably be worth tens or hundreds of trillions of dollars, but very dissimilar chances of becoming the next Alphabet or Meta. On this theory, the premium on OpenAI is mostly a bet on something like a 1 in 4 chance that it can turn its current gaudy user numbers into profits in line with Facebook and Instagram. If OpenAI does not eventually move in that direction, it is likely to face an investor revolt.
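The arithmetic behind this theory can be sketched as a toy model. The three valuations are from the text above; treating SSI's valuation as the common "AGI lottery ticket" component, and using a rough Meta-scale market cap as the payoff of the "profitable tech giant" outcome, are my own illustrative assumptions, not claims from any source.

```python
# Toy decomposition of AI lab valuations (illustrative assumptions only).
# Model: valuation = common "AGI lottery ticket" value
#                  + P(becoming the next Meta) * value of a Meta-scale business.

valuations_bn = {"OpenAI": 260, "Anthropic": 61.5, "SSI": 32}

# Assumption: SSI's $32B is almost entirely the AGI ticket, since it
# promises no interim products; treat that as the common component.
agi_ticket_bn = 32

# Assumption: a mature Meta-scale business is worth roughly $1.3 trillion.
meta_value_bn = 1300

for lab, valuation in valuations_bn.items():
    implied_p = max(valuation - agi_ticket_bn, 0) / meta_value_bn
    print(f"{lab}: implied P(next Meta) ~ {implied_p:.0%}")
```

Under these assumed numbers, OpenAI's premium over the common component implies a "next Meta" probability in the high teens of percent, Anthropic's implies a few percent, and SSI's implies roughly zero — broadly consistent with the "1 in 4" framing above, given how crude the inputs are.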
2. A bear market will increase pressure on AI companies to generate profits
For now, investors are willing to tolerate AI companies burning money. This is likely to change both as the sums involved grow larger and when the US next hits a significant recession or bear market. That may be just around the corner, given the tariffs. Even if it is not, it will happen sooner or later.
If investors begin to focus on profits and revenue, this will inevitably force the companies to invest less into racing towards AGI and more into building products that make money. As talent and resources reorient in that direction, progress will slow.
Building a superhuman AI researcher is not in itself a profitable activity (it only pays off via eventual AGI), and it is likely to be an especially disfavored project in such an environment. The money to be made will be in automating various white-collar functions, like customer service, and this requires designing agents that are good at a very different set of skills than AI research.
3. An alternative forecast
An alternative to AI 2027, then, is a future in which over the next 12-18 months, AI companies are forced to pivot towards actually making money.
There seem to be two routes available for that: selling products directly to consumers, whether on a subscription or an advertising model, or automating white-collar work for businesses.
Either route may well eventually generate the cash to fund the data centers needed to finally push to AGI, but if that cash has to come from organic growth rather than VCs, progress will inevitably be much slower.