We estimate that, as of June 12, 2024, OpenAI has an annualized revenue run rate (ARR) of:

• $1.9B from ChatGPT Plus (7.7M global subscribers),
• $714M from ChatGPT Enterprise (1.2M seats),
• $510M from the API, and
• $290M from ChatGPT Team (980k seats).

(Full report at app.futuresearch.ai/reports/3Li1; methods described at futuresearch.ai/openai-revenue-report.)

We looked into OpenAI's revenue because financial information should be a strong indicator of the business decisions they make in the coming months, and hence an indicator of their research priorities.

Our methods in brief: we searched exhaustively for all public information on OpenAI's finances and filtered it down to reliable data points. From these, we selected a calculation method that required the least inference of missing information.

To infer the missing information, we used the standard techniques of forecasters: Fermi estimates, and base rates/analogies.
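For example, the ChatGPT Plus figure can be roughly reproduced with a one-line Fermi check. This is a simplified sanity check using only the headline numbers above and the public $20/month list price, not the full calculation in the report:

```python
# Fermi-style sanity check of the ChatGPT Plus figure (illustrative only;
# ignores annual plans, regional pricing, and churn).
subscribers = 7.7e6      # global ChatGPT Plus subscribers (from the estimate above)
monthly_price = 20       # USD, public list price for ChatGPT Plus
plus_arr = subscribers * monthly_price * 12
print(f"ChatGPT Plus ARR ~ ${plus_arr / 1e9:.2f}B")   # ~$1.85B, in line with the $1.9B estimate
```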

We're fairly confident that the true values are relatively close to what we report. We're still working on methods to assign confidence intervals to the final answers given the confidence intervals of all of the intermediate variables.
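A minimal sketch of one standard approach, Monte Carlo propagation of the intermediate intervals (the ranges below are placeholders for illustration, not our actual distributions):

```python
import random

# Illustrative Monte Carlo propagation: sample each uncertain intermediate
# variable from an assumed range and read confidence intervals off the
# resulting distribution of the final answer.
def sample_plus_arr() -> float:
    subscribers = random.uniform(7.0e6, 8.5e6)   # assumed interval around 7.7M subscribers
    monthly_price = 20.0                          # USD, treated as known here
    return subscribers * monthly_price * 12

samples = sorted(sample_plus_arr() for _ in range(100_000))
p05, p50, p95 = (samples[int(q * len(samples))] for q in (0.05, 0.50, 0.95))
print(f"Plus ARR median ${p50/1e9:.2f}B, 90% interval ${p05/1e9:.2f}B to ${p95/1e9:.2f}B")
```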

Inside the full report, you can see which of our estimates are most speculative, e.g. using the ratio of Enterprise seats to Team seats from comparable apps, inferring the US-to-non-US subscriber split across platforms from mobile-subscriber numbers, or inferring growth rates from just a few data points.
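As an illustration of the last of these (a rough sketch, not the fit used in the report), a growth rate can be backed out from just two data points mentioned in this post and the discussion below, December 2023's roughly $2B ARR and the $3.4B figure from June 12:

```python
# Illustrative growth-rate inference from two data points: December 2023 ARR
# of roughly $2B and the $3.4B figure announced on June 12, 2024.
dec_arr, june_arr = 2.0, 3.4     # $B, annualized
months_elapsed = 5.5             # roughly end of December to mid-June
monthly_growth = (june_arr / dec_arr) ** (1 / months_elapsed) - 1
print(f"Implied monthly ARR growth: {monthly_growth:.1%}")   # roughly 10% per month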

Overall, these numbers imply to us that:

  • Sam Altman's surprising claim of $3.4B ARR on June 12 seems quite plausible, despite skepticism people raised at the time.
  • Apps (consumer and enterprise) are much more important to OpenAI than the API.
  • Consumers are much more important to OpenAI than enterprises, as reflected in all their recent demos, but the enterprise growth rate is so high that this may change abruptly.
     

Comments

We looked into OpenAI's revenue because financial information should be a strong indicator of the business decisions they make in the coming months, and hence an indicator of their research priorities

Is this really true? I am quite surprised by this, given how much of the expected financial value of OpenAI (and the valuation of AI companies more generally) is not in the next couple of months, but based on being at the frontier of a technology with enormous future potential.

Definitely. I think all of these contribute to their thinking: their current finances, the growth rates, and the expected value of future plans that don't generate any revenue today.

We estimate that

Point of clarification: it seems like FutureSearch is largely powered by calls to AI models. When you say "we", what do you mean? Has a human checked the entire reasoning process that led to the results you present here?

There were humans in the loop, yes.

Hi, thanks for this! Any idea how this compares to total costs?

Hi! We don't currently have a reliable estimate of their costs, but we may add one in a future report.

I didn't check whether you addressed this, but an article from The Information claims that OpenAI's API ARR reached $1B as of March: https://www.theinformation.com/articles/a-peek-behind-openais-financials-dont-underestimate-china?rc=qcqkcj

A separate article from The Information claims that OpenAI receives $200M ARR as a cut of MSFT's OpenAI model-based cloud revenue, which I'm not sure is included in your breakdown: https://www.theinformation.com/articles/openais-annualized-revenue-doubles-to-3-4-billion-since-late-2023?rc=qcqkcj

These articles are not public, though; they are behind a paywall.

The source for the $1B API revenue claim is given as "someone who viewed internal figures related to the business".

It's not completely implausible, but the implications for OpenAI's revenue growth curve would be a little surprising. 

We have fairly reliable numbers for ChatGPT Enterprise revenue (based on an official announcement of seats sold, together with the price per seat quoted to someone who inquired) and ChatGPT Plus revenue (from e-receipt data) from the start of April; these sum to about $1.9B. It's reasonable to add another $300M to this to account for other, smaller sources: early ChatGPT Team revenue, Azure (which we did indeed ignore), and custom models.

So, with an extra $1B from the API on top of all that, we'd see only $200M of revenue growth between the start of April and the middle of June, when ARR was announced as $3.4B. Contrast that with $1.2B of growth between the start of January (December's ARR was $2B) and the end of March (estimated $3.2B).
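To spell out the arithmetic behind that comparison (all figures in $B of annualized revenue, taken from this thread):

```python
# Arithmetic behind the comparison above (all figures from this thread, $B ARR).
plus_and_enterprise_april = 1.9    # ChatGPT Plus + Enterprise, start of April
other_sources = 0.3                # allowance for Team, Azure cut, custom models
api_claim = 1.0                    # The Information's claimed API ARR (as of March)
implied_april_total = plus_and_enterprise_april + other_sources + api_claim   # ~3.2

announced_mid_june = 3.4
print(f"Implied Apr-to-mid-June growth: ${announced_mid_june - implied_april_total:.1f}B")  # ~$0.2B

december_arr, march_estimate = 2.0, 3.2
print(f"Jan-to-March growth: ${march_estimate - december_arr:.1f}B")                        # ~$1.2B
```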
