This:
It doesn’t just cost more to run OpenAI than it makes — it costs the company a billion dollars more than the entirety of its revenue to run the software it sells before any other costs. [...] OpenAI loses money on every single paying customer, just like with its free users. Increasing paid subscribers also, somehow, increases OpenAI's burn rate. This is not a real company.
Seems to contradict this:
The cost of [...] the compute from running models ($2 billion) [...] OpenAI makes most of its money from subscriptions (approximately $3 billion in 2024) and the rest on API access to its models (approximately $1 billion).
OpenAI is certainly still losing money overall, and might lose even more on compute in the future (if the reported expenses were reduced by Microsoft compute credits that are still available). But I'm not sure why the article says that "every single paying customer" only increases the company's burn rate, given that they spend less money running the models than they earn in revenue. Even if you count the entire $700 million they spend on salaries as part of the "running models" expenses, that would still leave them with about $1.3 billion in profit.
The article does note that ChatGPT Pro subscriptions specifically are losing the company money on net, but it sounds like the normal-tier subscriptions are profitable. Now, the article claims that OpenAI spent $9 billion in total, but I could only find a breakdown for $5.7 billion of it ($2B on running models, $3B on training models, $0.7B on salaries). If some of the missing $3.3 billion was also spent on running the normal product, that would explain it, but I'm not sure where that money goes.
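To make the arithmetic explicit, here is a quick sanity check in Python using the figures quoted in this thread (all in billions of USD). The individual line items and the $9 billion total come from the article; the "unaccounted" remainder is an inference from those numbers, not a reported figure:

```python
# Figures quoted from the article (billions of USD, 2024)
revenue_subscriptions = 3.0   # ChatGPT subscriptions
revenue_api = 1.0             # API access
cost_inference = 2.0          # compute from running models
cost_training = 3.0           # compute from training models
cost_salaries = 0.7           # salaries
total_spend = 9.0             # reported total spend

revenue = revenue_subscriptions + revenue_api  # 4.0

# Even counting all salaries as part of "running the product":
margin_over_running_costs = revenue - (cost_inference + cost_salaries)  # ~1.3

# Spend the article itemizes vs. the reported total:
itemized = cost_inference + cost_training + cost_salaries  # 5.7
unaccounted = total_spend - itemized                       # ~3.3
```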
Well done finding the direct contradiction. (I also thought the claims seemed fishy but didn't think of checking whether model running costs are bigger than revenue from subscriptions.)
Two other themes in the article that seem in a bit of tension for me:
It feels like if people max out their subscription usage, then the models are providing some kind of value (which makes it promising to keep working on them, even if just to make inference cheaper). By contrast, if people don't use them much, you should at least be able to turn a profit on existing subscriptions (even if you might be worried about user retention and growth rates).
All of that said, I also get the impression "OpenAI is struggling." I just think it has more to do with their specific situation rather than with the industry (plus I'm not as confident in this take as the author seems to be).
Glad you spotted that! Those two quoted claims do contradict each other, as stated. I’m surprised I had not noticed that.
but I'm not sure where that money goes.
The Information had a useful table on OpenAI’s projected 2024 costs. Linking to a screenshot here.
But I'm not sure why the article says that "every single paying customer" only increases the company's burn rate given that they spend less money running the models than they get in revenue.
I’m not sure either why Ed Zitron wrote that. When I’m back on my laptop, I’ll look at older articles for any further reasoning.
Looking at the cost items in The Information’s table, revenue share with Microsoft ($700 million) and hosting ($400 million) definitely seem mostly variable with subscriptions. It’s harder to say for the sales & marketing ($300 million) and general administrative costs ($600 million).
Given that information, the revenue OpenAI keeps for itself (after Microsoft's revenue share) would still be higher than just the cost of running the models plus hosting (which we could call the "cost of running software").
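Working that out with the line items above (billions of USD; treating Microsoft's revenue share and hosting as the clearly variable items is my reading of The Information's table, not something it states outright):

```python
# Figures from the thread (billions of USD, 2024)
revenue = 4.0                 # ~$3B subscriptions + ~$1B API
msft_revenue_share = 0.7      # revenue share with Microsoft
hosting = 0.4                 # hosting
inference_compute = 2.0       # compute from running models

revenue_kept = revenue - msft_revenue_share          # ~3.3
cost_running_software = inference_compute + hosting  # ~2.4

# Revenue kept still exceeds the "cost of running software":
surplus = revenue_kept - cost_running_software       # ~0.9
```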
It's hard to say, though, how much overall cost is added on the margin per normal-tier user added. Partly, it depends on how much more they use OpenAI's tools than free users do. But I'd guess you're more likely right than not that, if we exclude past training and research compute costs and other fixed costs, the revenue per added normal-tier user would be higher than the accompanying costs.
Now the article claims that OpenAI spent $9 billion in total
Note also that the $9 billion total cost amount seems understated in three ways:
- Deep Research has already been commoditized, with Perplexity and xAI launching their own versions almost immediately.
- Deep Research is also not a good product. As I covered last week, the quality of writing that you receive from a Deep Research report is terrible, rivaled only by the appalling quality of its citations, which include forum posts and Search Engine Optimized content instead of actual news sources. These reports are neither "deep" nor well researched, and cost OpenAI a great deal of money to deliver.
Good homework by Zitron on the numbers, and he's a really entertaining writer, but my (very brief) experience so far using Deep Research for work-related research more closely matches Sarah Constantin's assessment, which concluded that ChatGPT-4o DR was the best tool she tested (including Perplexity, Gemini, ChatGPT-4o, Elicit, and PaperQA) on completeness, relevance, source quality, and creativity.
I was very disappointed with Perplexity DR; it has the same name, but it's definitely not the same product as OpenAI's DR.
Isn't it normal in startup world to make bets and not make money for many years? I am not familiar with the field so I don't have intuitions for how much money/how many years would make sense, so I don't know if OpenAI is doing something normal, or something wild.
Yes, though note that this is still concerning.
Normally the way this works in a startup is that spend exceeding revenue should be in service of bootstrapping the company. That means that money is usually spent in a few ways:
OpenAI's spend is concerning for the same reason that, say, Uber's and Netflix's spend is/was concerning: they have to actually win their market to have a chance of reaping rewards, and if they don't, they'll simply be forced to raise prices and cut quality/R&D.
From the full article:
- OpenAI's ChatGPT: 339 million monthly active users on the ChatGPT app, 246 million unique monthly visitors to ChatGPT.com.
- Microsoft Copilot: 11 million monthly active users on the Copilot app, 15.6 million unique monthly visitors to copilot.microsoft.com.
- Google Gemini: 18 million monthly active users on the Gemini app, 47.3 million unique monthly visitors.
- Anthropic's Claude: Two million (!) monthly active users on the Claude app, 8.2 million unique monthly visitors to claude.ai.
Wow. I knew that Claude is less used than ChatGPT, but given how many people in my social circles are Claude fans, I didn't expect it to be that much smaller. Guess it's mostly just the Very Online Nerds who know about it.
That difference is rather extreme. It seems the LLM market has a strong winner-take-all tendency, similar to Google (web search) or Amazon (online retail) in the past. It now seems much more likely to me that ChatGPT has basically already won the LLM race, much as Google won the search engine race. Gemini outperforming ChatGPT on a few benchmarks likely won't make a difference.
Pure AI companies like OpenAI and Anthropic are like race cars which automatically catch on fire and explode the moment they fall too far behind.
Meanwhile, AI companies like Google DeepMind and Meta AI are race cars which can lose the lead and still catch up later. They can maintain the large expenditures needed for AI training without needing to generate revenue or impress investors. DeepSeek and xAI might be somewhere in between.
(Then again, OpenAI is half owned by Microsoft. If it falls too far behind it might not go out of business but get folded into Microsoft, at a lower valuation. I still think they feel much more short term pressure.)
This reminds me a lot of what people said about Amazon near the peak of the dot-com bubble (and also of what people said at the time about internet startups that actually failed).
Yes, the huge ramp-up since 2012 in companies' investment in deep learning infrastructure and products, at billion-dollar losses, also reminds me of the dot-com bubble. The exception is that the money is no longer coming only from small investment firms and individual investors: big tech conglomerates are also diverting profits from their cash-cow businesses.
I can't speak with confidence about whether OpenAI is more like Amazon or other larger internet startups that failed. Right now though, OpenAI does not seem to have much of a moat.
This seems to explain a lot about why Altman is trying so hard both to make OpenAI for-profit (to more easily raise money at that burn rate) and to build much bigger data centers (to keep going on "just make it bigger").
Read the full article here.
The journalist is an AI skeptic, but he does solid financial investigations. Details below: