Thanks for explaining. I now agree that the current cost of inference isn't a very good anchor for future costs in slowdown timelines.
I'm uncertain, but I still think OpenAI is likely to go bankrupt in slowdown timelines. Here are some related thoughts:
I agree that finances are important to consider. I've written my thoughts on them here; I disagree with you in a few places.
(1) Given Altman's successful ouster of the OpenAI board, his investors currently have little will to force him to stop racing. Nor do they have much time to develop that will: at the current pace of increasing spending, OpenAI will run out of money first.
(2) It's not clear what would boost revenue that they're not already doing; the main way to improve profits would just be to slash R&D spending. But much of that R&D spending goes to research compute, and since OpenAI intends to own its own datacenters, it's not clear they can meaningfully change course quickly.
(3) OpenAI is at a massive structural disadvantage relative to the other frontier companies: they send 20% of their revenue to Microsoft, and they're taking on tens of billions of dollars in debt, which will need to be repaid with interest. So it's unlikely that they'll ever be profitable.
What prompts did you use? Can you share the chat? I see Sonnet 3.7 denying this knowledge when I try.
I want to clarify that I'm criticizing "AI 2027"'s projection of R&D spending, i.e. this table. If companies cut R&D spending, that falsifies the "AI 2027" forecast.
In particular, the comment I'm replying to proposed that while the current money would run out in ~2027, companies could raise more to continue expanding R&D spending. But a raise funding 2028 R&D would need to occur in 2027, and it would need to occur on the basis of financial statements from at least a quarter before the raise. So in this scenario, they need to slash R&D spending in 2027, something the "AI 2027" authors definitely don't anticipate.
Furthermore, your claim that "they are losing money only if you include all the R&D" may be false. We lack a sufficiently detailed breakdown of OpenAI's budget to be certain. My estimate from the post was that most AI companies have a cost of revenue around 75%; OpenAI specifically has a 20% revenue-sharing agreement with Microsoft; and the remaining 5% needs to cover General and Administrative expenses. Depending on what share of salary and G&A expenses is attributable to R&D, it's plausible that eliminating R&D entirely wouldn't make OpenAI profitable today. And in the future OpenAI will also need to pay interest on tens of billions in debt.
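To make that arithmetic concrete, here's a minimal sketch of the margin estimate. The percentages are my rough estimates from the post, not audited figures:

```python
# Back-of-the-envelope margin sketch; all percentages are rough estimates,
# not audited figures from OpenAI's actual financials.
revenue = 1.00            # normalize revenue to 100%
cost_of_revenue = 0.75    # estimated inference/serving costs (~75% of revenue)
microsoft_share = 0.20    # revenue-sharing agreement with Microsoft

remaining = revenue - cost_of_revenue - microsoft_share
# Everything else (G&A, salaries, future debt interest) must fit in this sliver:
print(f"remaining margin before R&D: {remaining:.0%}")
```

If those estimates are even roughly right, the slack left over before R&D is tiny, which is why cutting R&D alone may not be enough.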
My intuitions are more continuous here. If AGI is close in 2027, I think that will mean increased revenue and continued investment.
Gotcha, I disagree. Lemme zoom in on this part of my reasoning, to explain why I think profitability matters (and growth matters less):
(1) Investors only ever terminally value profit; they never terminally value growth. Most of the economy doesn't focus much on growth compared to profitability, even instrumentally. However, one group of investors, VCs, does: software companies generally have high fixed costs and low marginal costs, so sufficient growth will almost always make them profitable. But (a) VCs have never invested anywhere even close to the sums we're talking about, and (b) even if they had, OpenAI's continued losses would eventually make them skeptical.
(For normal companies: if they aren't profitable, they run out of money and die. Any R&D spending needs to come out of their profits.)
(2) Another way of phrasing point 1: I very much doubt that OpenAI's investors actually believe in AGI: Satya Nadella explicitly doesn't, and others seem to use it as an empty slogan. What they believe in is getting a return on their money. So I believe that OpenAI making profits would lead to investment, but that OpenAI nearing AGI without profits won't trigger more investment.
(3) Even if VCs were to continue investing, the absolute numbers are nearly impossible. OpenAI's forecasted 2028 R&D budget is $183 billion; that exceeds the total global VC funding for enterprise software in 2024, which was $155 billion. This money would be going to purchase a fraction of a company which would be tens of billions of dollars in debt, which had burned through $60 billion in equity already, and which had never turned a profit. (OpenAI needing to raise more money also probably means that xAI and Anthropic have run out of money, since they've raised less so far.)
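For scale, the comparison is stark even before considering debt and dilution (both figures in billions of dollars, as cited above):

```python
# Forecasted 2028 OpenAI R&D budget vs. total 2024 global VC funding for
# enterprise software. Figures in $B, as cited in this thread.
openai_rnd_2028 = 183
global_vc_enterprise_software_2024 = 155

gap = openai_rnd_2028 - global_vc_enterprise_software_2024
print(f"One company's R&D budget exceeds the entire funding pool by ${gap}B")
```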
In practice OpenAI won't even be able to raise its current amount of money ever again: (a) it's now piling on debt and burning through more equity, and is at a higher valuation; (b) SoftBank, the recent OpenAI investor led by Masayoshi Son, is famously bad at evaluating business models (they invested in WeWork) and uniquely high-spending, but is now essentially out of money to invest.
So my expectation is that OpenAI cannot raise exponentially more money without turning a profit, which it cannot do.
Thanks for the response!
So maybe I should just ask whether you are conditioning on the capabilities progression or not with this disagreement? Do you think $140b in 2027 is implausible even if you condition on the AI 2027 capability progression?
I am conditioning on the capabilities progression.
Based on your later comments, I think you are expecting a much faster and more direct translation of capabilities into revenue than I am, such that conditioning on faster progress makes more of a difference.
The exact breakdown FutureSearch use seems relatively unimportant compared to the high level argument that the headline (1) $/month and (2) no. of subscribers, very plausibly reaches the $100B ARR range, given the expected quality of agents that they will be able to offer.
Sure, I disagree with that too. I recognize that most of the growth comes from the Agents category rather than the Consumer category, but overstating growth in the only period we can evaluate is evidence that the model or intuition will also overstate growth of other types in other periods.
I don't think a monopoly is necessary, there's a significant OpenBrain lead-time in the scenario, and I think it seems plausible that OpenBrain would convert that into a significant market share.
OpenBrain doesn't actually have a significant lead time by the standards of the "normal" economy. The assumed lead time is "3-9 months"; both from my very limited personal experience (I was involved very tangentially in two such sales attempts) and from checking online, enterprise sales in the six-figure-and-up range often take longer than that to close anyway.
I'm suspicious that both you and FutureSearch are trying to apply intuitions from free-to-use consumer-focused software companies to massive enterprise SaaS sales. (FutureSearch compares OpenAI with Google, Facebook, and TikTok.) Beyond the length of sales cycles, another difference is that enterprise software is infamously low quality; there are various purported causes, but the relevant ones include principal-agent problems: the people making purchasing decisions have trouble evaluating software, won't necessarily be using it themselves, and care about things besides technical quality. "Nobody ever got fired for buying IBM."
I'd be curious to hear more about what made you perceive our scenario as confident. We included caveats signaling uncertainty in a bunch of places, for example in "Why is it valuable?" and several expendables and footnotes. Interestingly, this popular YouTuber made a quip that it seemed like we were adding tons of caveats everywhere,
I was imprecise (ha ha) with my terminology here: I should have talked only about a precise forecast rather than a confident one; I meant solely the attempt to highlight a single story about a single year. My bad. Edited the post.
Typo: the description for Table 2 states that "In total, 148 of our 169 tasks have human baselines, but we rely on researcher estimates for 21 tasks in HCAST." This is an incorrect sum; the right figure is 149 out of 170 tasks, per the table.
You seem to be assuming that there's not significant overhead or delays from negotiating leases, entering bankruptcy, or dealing with specialized hardware, which is very plausibly false.
If nobody is buying new datacenter GPUs, GPU progress will slow to roughly zero, or even go negative (because production is halted and tacit knowledge is lost). (It will also probably damage broader semiconductor progress.)
This reduces the cost to rent a GPU-hour, but it doesn't reduce the cost to the owner. (OpenAI, and every frontier lab but Anthropic, will own much or all[1] of their own compute. So this doesn't do much to help OpenAI in particular.)
I think you have a misconception about accounting. GPU depreciation appears on the income statement: it is part of operating expenses, subtracted from gross profit to get net profit. Depreciation due to obsolescence isn't treated differently from depreciation due to breakdowns. If OpenAI drops its prices below the level needed to cover that depreciation, it won't be running a (net) profit. And since it won't be buying new GPUs, it will die in a few years, once its existing stock of GPUs breaks down or becomes obsolete. To phrase it another way: if you cut GPU-time prices 3-5x, the global AI compute buildout has not, in fact, paid for itself.
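A toy example of the income-statement point. All the numbers here (price, operating cost, depreciation schedule) are illustrative assumptions, not anyone's actual figures:

```python
# Toy income statement showing why GPU depreciation can't be ignored when
# pricing GPU-hours. All numbers are illustrative, not real data.
def net_profit(price_per_gpu_hour, hours, opex_per_hour, annual_depreciation):
    revenue = price_per_gpu_hour * hours
    operating_costs = opex_per_hour * hours  # power, staff, networking, etc.
    return revenue - operating_costs - annual_depreciation

# Assume a GPU bought for $30k, depreciated straight-line over 5 years:
depreciation = 30_000 / 5    # $6k/year
hours = 8_760                # one year at full utilization
opex = 0.40                  # assumed $/hour non-depreciation operating cost

print(net_profit(1.50, hours, opex, depreciation))  # price covers depreciation
print(net_profit(0.50, hours, opex, depreciation))  # 3x price cut -> net loss
```

The point is just that a price which only covers power and operations still loses money once depreciation is counted, and depreciation is a real cost: it's the rate at which the fleet has to be replaced.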
OpenAI has deals with CoreWeave and Azure, which may specify fixed prices; even if not, CoreWeave's independence doesn't matter here, since it also needs to make enough money to buy new GPUs and repay debt. (Azure is less predictable.)