I agree that finances are important to consider. I've written my thoughts on them here; I disagree with you in a few places.
(1) Given Altman's successful ouster of the OpenAI board, his investors currently don't have much drive/desire/will to force him to stop racing. And on the current pace of increasing spending, they don't have much time left to do so before OpenAI runs out of money.
(2) It's not clear what would boost revenue that they're not already doing; the main way to improve profits would just be to slash R&D spending. But much of R&D spending goes to research compute, and OpenAI intends to own its own datacenters, so it's not clear whether they can meaningfully change course quickly.
(3) OpenAI is at a massive structural disadvantage to the rest of the frontier companies: they send 20% of their revenue to Microsoft, and they're taking on tens of billions in debt, which will need to be repaid with interest. So it's unlikely that they'll ever be profitable.
What prompts did you use? Can you share the chat? I see Sonnet 3.7 denying this knowledge when I try.
I want to clarify that I'm criticizing "AI 2027"'s projection of R&D spending, i.e. this table. If companies cut R&D spending, that falsifies the "AI 2027" forecast.
In particular, the comment I'm replying to proposed that while the current money would run out in ~2027, companies could raise more to continue expanding R&D spending. Raising money for 2028 R&D would need to occur in 2027, and it would need to occur on the basis of financial statements from at least a quarter before the raise. So in this scenario, they would need to slash R&D spending in 2027 to have improved financials to show investors- something the "AI 2027" authors definitely don't anticipate.
Furthermore, your claim that "they are losing money only if you include all the R&D" may be false. We lack a sufficient breakdown of OpenAI's budget to be certain. My estimate from the post was that most AI companies have ~75% cost of revenue; OpenAI specifically has a 20% revenue-sharing agreement with Microsoft; and the remaining 5% needs to cover General and Administrative expenses. Depending on what fraction of salary and G&A expenses is attributable to R&D, it's plausible that eliminating R&D entirely wouldn't make OpenAI profitable today. And in the future, OpenAI will also need to pay interest on tens of billions in debt.
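To make the arithmetic explicit, here's a minimal sketch of the margin math (the percentages are my rough estimates from above, not disclosed OpenAI figures):

```python
# Rough margin arithmetic; all percentages are estimates, not disclosed figures.
revenue = 1.00           # normalize OpenAI's revenue to 1
cost_of_revenue = 0.75   # estimated cost of revenue for AI companies
microsoft_share = 0.20   # 20% revenue-sharing agreement with Microsoft

remaining = revenue - cost_of_revenue - microsoft_share
print(f"left for G&A and everything else: {remaining:.0%}")  # 5%

# Even with R&D cut to zero, profitability requires all remaining G&A
# (net of whatever G&A is attributable to R&D) to fit inside that 5%,
# before any interest payments on tens of billions of debt.
```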
My intuitions are more continuous here. If AGI is close in 2027, I think that will mean increased revenue and continued investment.
Gotcha, I disagree. Lemme zoom in on this part of my reasoning, to explain why I think profitability matters (and growth matters less):
(1) Investors only ever terminally value profit; they never terminally value growth. Most of the economy doesn't focus much on growth compared to profitability, even instrumentally. However, one group of investors, VCs, does: software companies generally have high fixed costs and low marginal costs, so sufficient growth will almost always make them profitable. But (a) VCs have never invested anywhere even close to the sums we're talking about, and (b) even if they had, OpenAI continuing to lose money will eventually make them skeptical.
(For normal companies: if they aren't profitable, they run out of money and die. Any R&D spending needs to come out of their profits.)
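To illustrate the fixed-vs-marginal-cost logic that VC investing relies on, here's a toy break-even sketch (every number invented for illustration):

```python
# Toy model of classic software economics; every number here is invented.
fixed_costs = 100_000_000    # salaries, R&D, etc.: roughly constant with scale
price_per_user = 20          # annual revenue per user
marginal_cost_per_user = 2   # low cost of serving one more user

def profit(users: int) -> int:
    return users * (price_per_user - marginal_cost_per_user) - fixed_costs

print(profit(1_000_000))   # -82,000,000: deeply unprofitable at small scale
print(profit(10_000_000))  # +80,000,000: growth alone flips the sign

# If marginal costs are instead high (e.g. inference compute eating ~75% of
# revenue), growth no longer mechanically produces profit this way.
```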
(2) Another way of phrasing point 1: I very much doubt that OpenAI's investors actually believe in AGI- Satya Nadella explicitly doesn't, and others seem to use it as an empty slogan. What they believe in is getting a return on their money. So I believe that OpenAI making profits would lead to investment, but that OpenAI nearing AGI without profits won't trigger more investment.
(3) Even if VCs were to continue investing, the absolute numbers are nearly impossible. OpenAI's forecasted 2028 R&D budget is $183 billion; that exceeds the total global VC funding for enterprise software in 2024, which was $155 billion. This would be going to purchase a fraction of a company which would be tens of billions of dollars in debt, which had already burned through $60 billion in equity, and which had never turned a profit. (OpenAI needing to raise more money also probably means that xAI and Anthropic have run out of money, since they've raised less so far.)
In practice, OpenAI won't even be able to raise its current amount of money ever again: (a) it's now piling on debt and burning through more equity, and is at a higher valuation; (b) recent OpenAI investor Masayoshi Son's SoftBank is famously bad at evaluating business models (they invested in WeWork) and is uniquely high-spending, but is now essentially out of money to invest.
So my expectation is that OpenAI cannot raise exponentially more money without turning a profit, which it cannot do.
Thanks for the response!
So maybe I should just ask whether you are conditioning on the capabilities progression or not with this disagreement? Do you think $140b in 2027 is implausible even if you condition on the AI 2027 capability progression?
I am conditioning on the capabilities progression.
Based on your later comments, I think you are expecting a much faster/stronger/more direct translation of capabilities into revenue than I am, such that conditioning on faster progress makes more of a difference.
The exact breakdown FutureSearch use seems relatively unimportant compared to the high level argument that the headline (1) $/month and (2) no. of subscribers, very plausibly reaches the $100B ARR range, given the expected quality of agents that they will be able to offer.
Sure, I disagree with that too. I recognize that most of the growth comes from the Agents category rather than the Consumer category, but overstating growth in the only period we can evaluate is evidence that the model or intuition will also overstate growth of other types in other periods.
I don't think a monopoly is necessary, there's a significant OpenBrain lead-time in the scenario, and I think it seems plausible that OpenBrain would convert that into a significant market share.
OpenBrain doesn't actually have a significant lead time by the standards of the "normal" economy. The assumed lead time is "3-9 months"; both from my very limited personal experience (involved very tangentially in 2 such sales attempts) and from checking online, enterprise sales in the six-figure-plus range often take longer than that to close anyway.
I'm suspicious that both you and FutureSearch are trying to apply intuitions from free-to-use consumer-focused software companies to massive enterprise SaaS sales. (FutureSearch compares OpenAI with Google, Facebook, and TikTok.) Beyond the length of sales cycles, another difference is that enterprise software is infamously low quality; there are various purported causes, but the relevant ones include principal-agent problems: the people making purchasing decisions have trouble evaluating software, won't necessarily be directly using it themselves, and care more about things aside from technical quality: "Nobody ever got fired for buying IBM".
I'd be curious to hear more about what made you perceive our scenario as confident. We included caveats signaling uncertainty in a bunch of places, for example in "Why is it valuable?" and several expandables and footnotes. Interestingly, this popular YouTuber made a quip that it seemed like we were adding tons of caveats everywhere.
I was imprecise (ha ha) with my terminology here- I should have talked only about a precise forecast rather than a confident one; I meant solely the attempt to highlight a single story about a single year. My bad. Edited the post.
Typo: The description for table 2 states that "In total, 148 of our 169 tasks have human baselines, but we rely on researcher estimates for 21 tasks in HCAST." This is an incorrect sum; per the table, the right figure is 149 out of 170 tasks (170 - 21 = 149).
Those were in fact some of the cases I had in mind, yes, thank you - I read the news too. And what one learns from reading about them is how those are exceptional cases, newsworthy precisely because they reached any verdict rather than settling, driven by external politics and often third-party funding, and highly unusual until recently post-2016/Trump. It is certainly the case that sometimes villains like Alex Jones get smacked down properly by libel lawsuits; but note how wildly incomparable these cases are to the blog post that Spartz is threatening to sue over. Look, for example, at Jones's malicious behavior even during the trial, or to take the most recent case, Giuliani, his repeated affirmation of the libelous and blatantly false claims. Look at the kinds of claims being punished in these lawsuits, like claiming that the Sandy Hook shooting was completely fake and the victims' relatives are fabricating their entire existence. (You're going to analogize this to Pace saying 'Nonlinear may not have treated some employees very well'? Really?) Look at how many of the plaintiffs are private citizens, who are in no way public figures. That this rash of victories for the good guys involves lawsuits does not redeem the general abuse of lawsuits.
(1) This is a response to you writing "you can count on one hand the sort of libel lawsuit which follows this beautiful fantasy". Sarcastically stating "I read the news too" doesn't help you- the more obvious these cases are, the worse it is for that claim! You now seem to have entirely abandoned that standard without changing your mind. I can very easily start listing more libel cases that match the new distinctions you're drawing, to the extent that they are clear enough; is there any point to me doing so? What is the evidence that would convince you that you're wrong?
(2) One reason I'm confident that you don't care about the distinctions you're drawing is that the cases I cited already meet some of the standards you've now proposed, and you didn't care enough to check. In particular, you wrote that "those are exceptional cases, newsworthy precisely because they reached any verdict rather than settling". This is false, and you provided no justification or evidence for it. I cited six cases; none of them have reached verdicts. Dominion v. Fox, Khalil v. Fox, and Coomer v. Newsmax all did settle; Smartmatic v. Fox and Andrews v. D'Souza are still outstanding and so haven't reached verdicts. Weisenbach v. Project Veritas doesn't appear to have been updated since 2022, but there has been no verdict that I can find. (Given that I've already presented you with cases satisfying one of the distinctions you drew above, are you now convinced that you were wrong?)
***
None of which Ben Pace has done, and which is part of why I say he would have excellent odds: it's unclear what damage has been done to Nonlinear, he had good grounds for his claims, has not said anything which would rise to the level of 'actual malice' against a public figure like Spartz, and was "well-intentioned".
it's unclear what damage has been done to Nonlinear
Do you believe that unclear damages mean that you can't win a lawsuit? If so, that's untrue; damages are often also in dispute. (Did you mean to claim that there was zero damage done? That is different from what you wrote, and is false.)
he had good grounds for his claims
The legal term relevant here is "negligence"; "good grounds" is not the relevant legal terminology. He was negligent in publishing without giving Nonlinear time to reply and without updating based on Spencer Greenberg's evidence; in particular, Habryka stated that they had received evidence that claims in the post were false before they published. [ETA: Habryka comments on this here.] Why do you believe that this wasn't negligent, if that's what you meant by writing "had good grounds for his claims"? Or did you mean something else?
has not said anything which would rise to the level of 'actual malice' against a public figure like Spartz, and was "well-intentioned".
You haven't explained why you think that Spartz is a public figure; again, I find your lack of clear reasoning frustrating to deal with. In this specific case, searching for comments on 'public figure' by you in an attempt to figure out what you were thinking, I found a comment by you which did explain your reasoning:
He [Emerson Spartz] very obviously is one [a public figure]. As habryka points out, he has a WP entry backed by quite a few sources about him, specifically. He has an entire 5400-word New Yorker profile about him, which is just one of several you can grab from the WP entry (eg. Bloomberg). For comparison, I don't think even Eliezer has gotten an entire New Yorker profile yet! If this is not a 'public figure', please do explain what you think it would take. Does he need a New York Times profile as well? (I regret to report that he only has 1 or 2 paragraphs thus far.)
Now, I am no particular fan of decreeing people 'public figures' who have not particularly sought out fame (and would not appreciate becoming a 'public figure' myself); however, most people would say that by the time you have been giving speeches to universities or agreeing to let a New Yorker journalist trail you around for a few months for a profile to boost your fame even further, it is safe to say that you have probably long since crossed whatever nebulous line divides 'private' from 'public figure'.
Even in that comment, you never actually stated what you believe the standard for being a public figure is, or gave any legal citations to support that standard.[1] However, there's at least enough detail to say that your claim is wrong; it's absolutely not true that having a magazine profile is the level of fame required to make you a public figure. Waldbaum v. Fairchild Publications describes the standard for general public figures[2] as follows:
a person can be a general public figure only if he is a "celebrity" - his name a "household word" - whose ideas and actions the public in fact follows with great interest
To give a concrete example, musician Dr. Luke owns two publishing companies, has been nominated for numerous Grammys, and has had plenty of magazine articles written about him, including one specifically in the New Yorker; yet courts have repeatedly held that he isn't a general public figure.
***
This is a double bait and switch: Nonlinear is just Spartz who can spend his money on whatever he likes, while the 'EA community' is not the one being specifically threatened, and even comparing Nonlinear and Lightcone is misleading, because Lightcone has many other ongoing responsibilities (such as spending money to maintain the website this is being written on or renovate buildings) and it would presumably be Pace personally being sued as well.
My comment was not misleading. I was explicitly responding to a quote, which I directly quoted in my comment right above what you responded to, where you stated that "[lawsuits] are cynically used to burn money based on the fact that rich people have a lot more money than poor people". This is about "rich" vs. "poor". The rest of the quote is "money has log utility, so when they burn $10k+ to burn your $10k+, they come out way ahead in punishing you, and the goal is not even to win the lawsuit, it's to force you to a settlement when you run out of money. Which you will before they do, and in the case of Spartz suing Lightcone, it's not like Lightcone has a ton of idle cash they can burn on a defense of their claims".[3]
Some libel threats fall under your analysis; others do not. I have already given many examples of lawsuits that do not: Fox News, for example, is not likely to simply run out of money, nor is it poorer than its various legal opponents. Your analysis of this specific case is wrong; Habryka has explicitly stated that Lightcone has enough money to defend a libel suit. He also said that Lightcone would probably be able to fundraise from the EA community for a defense.
***
I found this extremely frustrating to reply to. I personally regard most of the concrete claims you made in the original comment as being not just wrong, but both obviously wrong and unsupported. You seem to have abandoned actually defending them, and indeed even noted how obvious my counterarguments were: "yes, thank you - I read the news too". (You didn't change your mind in response, though!)
Judging by the timestamps, you wrote your long response very quickly. It's taken me much, much longer to write this reply, and I'm only a small fraction through replying so far. (I'll reply to the rest later, I guess, ugh ugh ugh.) There's a very obvious reason why you were so much faster: you didn't bother to defend your specific previous claims or to check whether the new stuff you tossed out was actually right. It would have taken you ~10 seconds to verify whether any of the lawsuits named had reached a verdict, instead of wrongly making up that they all had; it's taken me much longer to check all of them and write up a reply. It would have taken you ~5 minutes[4] to find the legal definition of public figure, instead of making up your own. It's taken me far longer to find a different comment that actually explained what you were talking about, and to then look up and write a response myself, including even finding a specific person who was both the subject of a New Yorker article and had been determined not to be a public figure. This is a gish gallop.
[1] The linked comment is in response to a comment by an attorney who correctly stated the standard for being a public figure and correctly stated that Spartz isn't a public figure... which you ignored when you made up your own uncited standard for what a public figure is. (It's also a pretty devastating indictment of LW that the attorney commenting with a correct definition and application of "public figure" received considerably less karma/agreement than you making up your own incorrect standard, which gave a more popular answer.)
[2] There's also the category of "limited purpose public figure". Spartz also probably (but not definitely) isn't one; all of the citations you gave- and probably almost all of his publicity, judging by his Wikipedia page- don't relate to Nonlinear or AI broadly, or their treatment of interns specifically.
[3] The new argument that you've made here might or might not be true; you've tossed it out without sufficient justification. Nonlinear would also like to spend money on other things, and I don't know how to compare their resources, preferences, and alternative expenditures vs. Lightcone; you haven't even tried. (Note that your argument requires a significant difference.)
[4] Arguably, it would have taken you zero minutes; the comment I linked was a response to an attorney who told you the correct answer.
Thanks for explaining. I now agree that the current cost of inference isn't a very good anchor for future costs in slowdown timelines.
I'm uncertain, but I still think OpenAI is likely to go bankrupt in slowdown timelines. Here are some related thoughts: