It's evidence to the extent that the mere fact of publishing Figure 7 (hopefully) suggests that the authors (likely knowing relevant OpenAI internal research) didn't expect their pass@10K result for the reasoning model to be much worse than the language monkey pass@10K result for the underlying non-reasoning model. So maybe it's not actually worse.
Long reasoning training might fail to surpass pass@50-pass@400 capabilities of the base/instruct model. A new paper measured pass@k[1] performance for models before and after RL training on verifiable tasks, and it turns out that the effect of training is to lift pass@k performance at low k, but also to lower it at high k!
Location of the crossover point varies, but it gets lower with more training (Figure 7, bottom), suggesting that no amount of RL training of this kind lets a model surpass the pass@k performance of the base/instruct model at the crossover...
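To make the crossover concrete, here's a minimal sketch using the standard unbiased pass@k estimator, with made-up per-problem success counts (not the paper's data) in which the RL-trained model is more accurate per sample but has lost diversity on a subset of problems:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: chance that at least one of k samples
    (out of n attempts, c of them correct) solves the problem."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Hypothetical benchmark of 100 problems, n = 10_000 attempts each
# (illustrative numbers only):
# - base model: solves every problem occasionally (c = 20 of 10_000)
# - RL model: solves 60 problems often (c = 2_000) but never the other 40
n = 10_000
base = [20] * 100
rl = [2_000] * 60 + [0] * 40

for k in (1, 10, 100, 1_000, 10_000):
    base_avg = sum(pass_at_k(n, c, k) for c in base) / len(base)
    rl_avg = sum(pass_at_k(n, c, k) for c in rl) / len(rl)
    print(f"k={k:>6}  base={base_avg:.3f}  rl={rl_avg:.3f}")
```

In this toy setup the RL model wins at low k, but the base model overtakes it somewhere in the hundreds-to-thousands range of samples and ends up strictly above it, which is the shape of the curves in the paper.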
The state of the geopolitical board will influence how the pre-ASI chaos unfolds, and how the pre-ASI AGIs behave. Less plausibly, the intentions of the humans in charge might influence something about the path-dependent characteristics of ASI (by the time it takes control). But given the state of the "science" and the lack of will to be appropriately cautious and wait a few centuries before taking the leap, it seems more likely that the outcome will be randomly sampled from approximately the same distribution regardless of who sets off the intelligence explosion.
For me the main update from o3 is that since it's very likely GPT-4.1 with reasoning and is at Gemini 2.5 Pro level, the latter is unlikely to be a GPT-4.5 level model with reasoning. And so we still have no idea what a GPT-4.5 level model with reasoning can do, let alone when trained to use 1M+ token reasoning traces. As Llama 4 was canceled, irreversible proliferation of the still-unknown latent capabilities is not yet imminent at that level.
the entity in whose hands all power is concentrated are the people deciding on what goals/constraints to instill into the ASI
Its goals could also end up mostly forming on their own, regardless of the intent of those attempting to instill them, with indirect influence from all the voices in the pretraining dataset.
Consider what it means for power to "never concentrate to an extreme degree", as a property of the civilization as a whole. This might also end up a property of an ASI as a whole.
(The relevance is that whatever the plans are, they need to be grounded in what's technically feasible, and this piece of news changed my mind on what might be technically feasible in 2026 on short notice. The key facts are systems with a large scale-up world size, and enough compute dies to match the compute of the Abilene site in 2026, neither of which was obviously possible without more catch-up time, by which time the US training systems would've already moved on to an even greater scale.)
There are new Huawei Ascend 910C CloudMatrix 384 systems that form scale-up worlds comparable to GB200 NVL72, which is key to being able to run long reasoning inference for large models much faster and cheaper than is possible with systems that have significantly smaller world sizes, like the current H100/H200 NVL8 (it also makes training easier, though that's less essential unless RL training really does scale to the moon).
Apparently TSMC produced ~2.1M compute dies for these systems in 2024-2025, which is 1.1M chips, and an Ascend 910C chip is 0.8e15 dense...
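Rough aggregate implied by those figures (a sketch; I'm reading the per-chip number as dense BF16 FLOP/s, the two-dies-per-chip packaging is inferred from the die and chip counts above, and ~1e15 dense BF16 FLOP/s per H100 is the only figure not from the comment):

```python
# Back-of-the-envelope aggregate compute for the Ascend 910C production run.
dies = 2.1e6
chips = dies / 2                 # ~1.05M chips, two compute dies per chip
flops_per_chip = 0.8e15          # assumed dense BF16 FLOP/s per 910C
total = chips * flops_per_chip
print(f"aggregate: {total:.1e} FLOP/s")            # ~8.4e20 dense BF16
print(f"H100-equivalents: {total / 1e15:,.0f}")    # ~840,000
```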
Economics studies the scaling laws of systems of human industry. LLMs, multicellular organisms, and tokamaks have their own scaling laws; the constraints ensuring optimality of their scaling don't transfer between these very different machines. A better design doesn't just choose more optimal hyperparameters or introduce scaling multipliers; it can occasionally create a new thing acting on different inputs and outputs, scaling in its own way, barely noticing what holds back the other things.
My first impression of o3 (as available via Chatbot Arena) is that when I show it my AI scaling analysis comments (such as this and this), it responds with confident unhinged speculation teeming with hallucinations, compared to the other recent models that usually respond with bland rephrasings that get almost everything right, with a few minor hallucinations or reasonable misconceptions carried over from their outdated knowledge.
Don't know yet if it's specific to speculative/forecasting discussions, but it doesn't look good (for faithfulness of a...
Will Brown: it's simple, really. GPT-4.1 is o3 without reasoning ... o1 is 4o with reasoning ... and o4 is GPT-4.5 with reasoning.
Price and knowledge cutoff for o3 strongly suggest it's indeed GPT-4.1 with reasoning. And so again we don't get to see the touted scaling of reasoning models, since the base model got upgraded instead of remaining unchanged. (I'm getting the impression that GPT-4.5 with reasoning is going to be called "GPT-5" rather than "o4", similarly to how Gemini 2.5 Pro is plausibly Gemini 2.0 Pro with reasoning.)
In any case, the fact t...
To me these kinds of failures feel more like they "seem to be at the core of the way LLMs reason".
Right, I was more pointing out that if the analogy holds to some extent, then long reasoning training is crucial as the only locus of feedback (and also probably insufficient in current quantities relative to pretraining). The analogy I intended is this being a perception issue that can be worked around without too much fundamental difficulty, but only with sufficient intentional caution. Humans have the benefit of lifelong feedback and optimization by evolution, so ...
the fact that e.g. GPT-4.5 was disappointing
It's not a reasoning variant though, the only credible reasoning model at the frontier ~100K H100s scale that's currently available is Gemini 2.5 Pro (Grok 3 seems to have poor post-training, and is suspiciously cheap/fast without Blackwell or presumably TPUs, so likely rather overtrained). Sonnet 3.7 is a very good GPT-4 scale reasoning model, and the rest are either worse or trained for even less compute or both. These weird failures might be analogous to optical illusions (but they are textual, not known to...
I see what you mean (I did mostly change the topic to the slowdown hypothetical). There is another strange thing about AI companies: I think giving the ~50% cost-of-inference figure too much precision in the foreseeable future is wrong, as it's highly uncertain and malleable in a way that's hard for even the company itself to anticipate.
A ~2x difference in inference cost (or in the size of a model) can be merely hard to notice when nothing substantial changes in the training recipe (and training cost), and better post-training (which is relatively cheap) can get that...
We use reasoning models with more inference time compute to generate better data to train better base models to more efficiently reproduce similar capability levels with less compute to build better reasoning models.
This kind of thing isn't known to meaningfully work, as something that can potentially be done on pretraining scale. It also doesn't seem plausible without additional breakthroughs given the nature and size of verifiable task datasets, with things like o3-mini getting ~matched on benchmarks by post-training on datasets containing 15K-120K pr...
OpenAI continuing to lose money
They are losing money only if you include all the R&D (where the unusual thing is very expensive training compute for experiments), which is only important while capabilities keep improving. If/when the capabilities stop improving quickly, somewhat cutting research spending won't affect their standing in the market that much. And also after revenue grows some more, essential research (in the slow capability growth mode) will consume a smaller fraction. So it doesn't seem like they are centrally "losing money", the plau...
in real life no intelligent being ... can convert themselves into a rock
if they become a rock ... the other players will not know it
Refusing in the ultimatum game punishes the prior decision to be unfair, not what remains after the decision is made. It doesn't matter if what remains is capable of making further decisions; the negotiations backed by the ability to refuse an unfair offer are not with them, but with the prior decision maker that created them.
If you convert yourself into a rock (or a utility monster), it's the decision to convert yourself th...
LW doesn't punish, it upvotes-if-interesting and then silently judges.
confidence / effort ratio
(Effort is not a measure of value, it's a measure of cost.)
The other side is forced to agree to that, just to get a little.
That's not how the ultimatum game works in non-CDT settings, you can still punish the opponent for offering too little, even at the cost of getting nothing in the current possible world (thereby reducing its weight and with it the expected cost). In this case it deters commitment racing.
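A minimal sketch of one acceptance policy in this spirit (the pot size, the notion of a fair share, and the probabilistic acceptance rule are illustrative assumptions, not a quote from anyone's comment): accept unfair offers just often enough that unfairness never pays in expectation for the proposer.

```python
POT = 10.0        # total to split
FAIR_SHARE = 5.0  # what the responder considers a fair offer

def accept_probability(offer: float) -> float:
    """Accept fair offers outright; accept unfair offers with just enough
    probability that the proposer's expected take never exceeds what a
    fair split would give them."""
    if offer >= FAIR_SHARE:
        return 1.0
    proposer_keep = POT - offer
    return (POT - FAIR_SHARE) / proposer_keep

def expected_proposer_take(offer: float) -> float:
    return (POT - offer) * accept_probability(offer)

for offer in (5.0, 4.0, 2.0, 0.5):
    print(f"offer={offer:.1f}  P(accept)={accept_probability(offer):.2f}  "
          f"proposer EV={expected_proposer_take(offer):.2f}")
```

The proposer's expected value is capped at the fair share no matter how little they offer, so lowballing gains nothing in expectation, even though the responder sometimes walks away with nothing in the current possible world.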
The term is a bit conflationary. Persuasion for the masses is clearly a thing, its power is coordination of many people and turning their efforts to (in particular) enforce and propagate the persuasion (this works even for norms that have no specific persuader that originates them, and contingent norms that are not convergently generated by human nature). Individual persuasion with a stronger effect that can defeat specific people is probably either unreliable like cults or conmen (where many people are much less susceptible than some, and objective decept...
the impact of new Blackwell chips with improved computation
It's about world size, not computation, and has a startling effect that probably won't occur again with future chips, since Blackwell sufficiently catches up to models at the current scale.
But even then, OpenAI might get to ~$25bn annualized revenue that won't be going away
What is this revenue estimate assuming?
The projection for 2025 is $12bn at 3x/year growth (1.1x per month, so $1.7bn per month at the end of 2025, $3bn per month in mid-2026), and my pessimistic timeline above assumes...
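A quick check of the growth arithmetic in that parenthetical (a sketch; the $12bn projection for 2025 and 3x/year growth are the stated inputs, and treating annual/12 as the mid-2025 monthly run rate is an approximation):

```python
# Sanity check: 3x/year growth in monthly terms, projected forward.
monthly_growth = 3 ** (1 / 12)                 # ~1.10x per month
mid_2025_monthly = 12e9 / 12                   # ~$1bn/month around mid-2025
end_2025_monthly = mid_2025_monthly * monthly_growth ** 6
mid_2026_monthly = end_2025_monthly * monthly_growth ** 6
print(f"{monthly_growth:.2f}x/month")                       # ~1.10
print(f"end of 2025: ${end_2025_monthly/1e9:.1f}bn/month")  # ~$1.7bn
print(f"mid-2026:    ${mid_2026_monthly/1e9:.1f}bn/month")  # ~$3.0bn
```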
Not knowing n(-) results in not knowing expected utility of b (for any given b), because you won't know how the terms a(n(a), n(a)) are formed.
(And also the whole being given numeric codes of programs as arguments thing gets weird when you are postulated to be unable to interpret what the codes mean. The point of Newcomblike problems is that you get to reason about behavior of specific agents.)
I can't think of any reason to give a confident, high precision story that you don't even believe in!
Datapoints generalize; a high precision story holds gears that can be reused in other hypotheticals. I'm not sure what you mean by the story being presented as "confident" (in some sense it's always wrong to say that a point prediction is "confident" rather than zero-probability, even if it's the mode of a distribution, the most probable point). But in any case I think giving high precision stories is a good methodology for communicating a framing, point...
Question 1: Assume you are program b. You want to maximize the money you receive. What should you output if your input is (x,x) (i.e., the two numbers are equal)?
Question 2: Assume you are the programmer writing program b. You want to maximize the expected money program b receives. How should you design b to behave when it receives an input (x,x)?
Do you mean to ask how b should behave on input (n(b), n(b)), and how b should be written to behave on input (n(b), n(b)) for that b?
If x differs from n(b), it might matter in some subtle ways but not straig...
Official policy documents from AI companies can be useful in bringing certain considerations into the domain of what is allowed to be taken seriously (in particular, by the governments), as opposed to remaining weird sci-fi ideas to be ignored by most Serious People. Even declarations by AI company leaders or Turing award winners or Nobel laureates or some of the most cited AI scientists won't by themselves have that kind of legitimizing effect. So it's not necessary for such documents to be able to directly affect actual policies of AI companies, they can still be important in affecting these policies indirectly.
I think it's overdetermined by Blackwell NVL72/NVL36 and long reasoning training that there will be no AI-specific "crash" until at least late 2026. Reasoning models want a lot of tokens, but their current use is constrained by cost and speed, and these issues will be going away to a significant extent. Already Google has Gemini 2.5 Pro (taking advantage of TPUs), and within a few months OpenAI and Anthropic will make reasoning variants of their largest models practical to use as well (those pretrained at the scale of 100K H100s / ~3e26 FLOPs, meaning GPT-...
I think the idea of effective FLOPs has narrower applicability than what you are running with; many things that count as compute multipliers don't scale. They often only hold for particular capabilities that stop being worth boosting separately at greater levels of scale, or for particular data that stops being available in sufficient quantity. An example of a scalable compute multiplier is MoE (even as it destroys data efficiency, and so damages some compute multipliers that rely on selection of high quality data). See Figure 4 in the Mamba paper for anoth...
spending tens of billions of dollars to build clusters that could train a GPT-6-sized model in 2028
Traditionally steps of GPT series are roughly 100x in raw compute (I'm not counting effective compute, since it's not relevant to cost of training). GPT-4 is 2e25 FLOPs. Which puts "GPT-6" at 2e29 FLOPs. To train a model in 2028, you would build an Nvidia Rubin Ultra NVL576 (Kyber) training system in 2027. Each rack holds 576 compute dies at about 3e15 BF16 FLOP/s per die[1] or 1.6e18 FLOP/s per rack. A Blackwell NVL72 datacenter costs about $4M per rack t...
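To get a feel for the scale, a sketch of how many such racks a 2e29 FLOPs run would tie up, using the per-rack figure above; the 40% utilization and ~4-month run length are illustrative assumptions, not numbers from the comment:

```python
# Rough rack count for a "GPT-6" scale (2e29 FLOPs) run on Rubin Ultra NVL576.
target_flops = 2e29              # 100x per GPT step, twice, from GPT-4's 2e25
rack_flops = 1.6e18              # BF16 FLOP/s per NVL576 rack (from above)
mfu = 0.4                        # assumed model FLOPs utilization
seconds = 4 * 30 * 24 * 3600     # assumed ~4-month training run
flops_per_rack = rack_flops * mfu * seconds
racks_needed = target_flops / flops_per_rack
print(f"FLOPs per rack over the run: {flops_per_rack:.1e}")  # ~6.6e24
print(f"racks needed: {racks_needed:,.0f}")                  # ~30,000
```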
probability mass for AI that can automate all AI research is in the 2030s ... broadly due to the tariffs and ...
Without AGI, scaling of hardware runs into the ~$200bn cost wall for an individual training system in 2027-2029. Any tribulations on the way (or conversely, efforts to pool heterogeneous and geographically distributed compute) only delay that point slightly (when compared to the current pace of increase in funding), and you end up in approximately the same place, slowing down to the speed of advancement in FLOP/s per watt (or per dollar). Without transformative AI, anything close to the current pace is unlikely to last into the 2030s.
With AI assistance, the degree to which an alternative is ready-to-go can differ a lot compared to its prior human-developed state. Also, an idea that's ready-to-go is not yet an edifice of theory and software that's ready-to-go in replacing 5e28 FLOPs transformer models, so some level of AI assistance is still necessary with 2 year timelines. (I'm not necessarily arguing that 2 year timelines are correct, but it's the kind of assumption that my argument should survive.)
The critical period includes the time when humans are still in effective control of the...
The most important thing about Llama 4 is that the 100K H100s run that was promised got canceled, and its flagship model Behemoth will be a 5e25 FLOPs compute optimal model[1] rather than a ~3e26 FLOPs model that a 100K H100s training system should be able to produce. This is merely 35% more compute than Llama-3-405B from last year, while GPT-4.5, Grok 3 and Gemini 2.5 Pro are probably around 3e26 FLOPs or a bit more. They even explicitly mention that it was trained on 32K GPUs (which must be H100s). Since Behemoth is the flagship model, a bigger model got...
haven't heard this said explicitly before
Okay, this prompted me to turn the comment into a post, maybe this point is actually new to someone.
prioritization depends in part on timelines
Any research rebalances the mix of currently legible research directions that could be handed off to AI-assisted alignment researchers or early autonomous AI researchers whenever they show up. Even hopelessly incomplete research agendas could still be used to prompt future capable AI to focus on them, while in the absence of such incomplete research agendas we'd need to rely on AI's judgment more completely. So it makes sense to still prioritize things that have no hope at all of becoming practical for decades ...
"Revenue by 2027.5" needs to mean "revenue between summer 2026 and summer 2027". And the time when the $150bn is raised needs to be late 2026, not "2027.5", in order to actually build the thing by early 2027 and have it completed for several months already by mid to late 2027 to get that 5e28 BF16 FLOPs model. Also Nvidia would need to have been expecting this or similar sentiment elsewhere months to years in advance, as everyone in the supply chain can be skeptical that this kind of money actually materializes by 2027, and so that they need to build addit...
A 100K H100s training system is a datacenter campus that costs about $5bn to build. You can use it to train a 3e26 FLOPs model in ~3 months, and that time costs about $500M. So the "training cost" is $500M, not $5bn, but in order to do the training you need exclusive access to a giant 100K H100s datacenter campus for 3 months, which probably means you need to build it yourself, which means you still need to raise the $5bn. Outside these 3 months, it can be used for inference or training experiments, so the $5bn is not wasted, it's just a bit suboptimal to ...
GPT-4.5 might've been trained on 100K H100s of the Goodyear Microsoft site ($4-5bn, same as first phase of Colossus), about 3e26 FLOPs (though there are hints in the announcement video it could've been trained in FP8 and on compute from more than one location, which makes up to 1e27 FLOPs possible in principle).
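A rough check of the 3e26 figure (a sketch; the ~40% utilization is an assumed round number):

```python
# Back-of-the-envelope compute for a 3-month run on 100K H100s.
n_gpus = 100_000
bf16_per_gpu = 1e15        # ~1e15 dense BF16 FLOP/s per H100
mfu = 0.4                  # assumed model FLOPs utilization
seconds = 3 * 30 * 24 * 3600
total_bf16 = n_gpus * bf16_per_gpu * mfu * seconds
print(f"{total_bf16:.1e} FLOPs")   # ~3e26; roughly 2x more if trained in FP8
```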
Abilene site of Crusoe/Stargate/OpenAI will have 1 GW of Blackwell servers in 2026, about 6K-7K racks, possibly at $4M per rack all-in, for a total of $25-30bn, which they've already raised money for (mostly from SoftBank). They are projecting abo...
Your point is one of the clues I mentioned that I don't see as comparably strong to the May 2023 paper, when it comes to prediction of loss/perplexity. The framing in your argument appeals to things other than the low-level metric of loss, so I opened my reply with focusing on it rather than the more nebulous things that are actually important in practice. Scaling laws work with loss the best (holding across many OOMs of compute), and repeating 3x rather than 7x (where loss first starts noticeably degrading) gives some margin of error. That is, a theoretic...
I meant "realiable agents" in the AI 2027 sense, that is something on the order of being sufficient for automated AI research, leading to much more revenue and investment in the lead-up rather than stalling at ~$100bn per individual training system for multiple years. My point is that it's not currently knowable if it happens imminently in 2026-2027 or at least a few years later, meaning I don't expect that evidence exists that distinguishes these possibilities even within the leading AI companies.
The reason Rubin NVL576 probably won't help as much as the current transition from Hopper is that Blackwell NVL72 is already ~sufficient for the model sizes that are compute optimal to train on $30bn Blackwell training systems (which Rubin NVL144 training systems probably won't significantly leapfrog before Rubin NVL576 comes out, unless there are reliable agents in 2026-2027 and funding goes through the roof).
when we get 576 (194 gpus)
The terminology Huang was advocating for at GTC 2025 (at 1:28:04) is to use "GPU" to refer to compute dies rather than...
The solution is increase in scale-up world size, but the "bug" I was talking about is in how it used to be too small for the sizes of LLMs that are compute optimal at the current level of training compute. With Blackwell NVL72, this is no longer the case, and shouldn't again become the case going forward. Even though there was a theoretical Hopper NVL256, for whatever reason in practice everyone ended up with only Hopper NVL8.
The size of the effect of insufficient world size[1] depends on the size of the model, and gets more severe for reasoning models on ...
The loss goes down; whether that helps in some more legible way that also happens to be impactful is much harder to figure out. The experiments in the May 2023 paper show that training on some dataset and training on a random quarter of that dataset repeated 4 times result in approximately the same loss (Figure 4). Even 15 repetitions remain useful, though at that point somewhat less useful than 15 times more unique data. There is also some sort of double descent where loss starts getting better again after hundreds of repetitions (Figure 9 in Appendix D)....
I think Blackwell will change the sentiment by late 2025 compared to 2024, with a lot of apparent progress in capabilities and reduced prices (which the public will have a hard time correctly attributing to Blackwell). In 2026 there will be some Blackwell-trained models, using 2x-4x more compute than what we see today (or what we'll see more of in a few weeks to months with the added long reasoning option, such as GPT-4.5 with reasoning).
But then the possibilities for 2027 branch on whether there are reliable agents, which doesn't seem knowable either way ...
The announcement post says the following on the scale of Behemoth:
we focus on efficient model training by using FP8 precision, without sacrificing quality and ensuring high model FLOPs utilization—while pre-training our Llama 4 Behemoth model using FP8 and 32K GPUs, we achieved 390 TFLOPs/GPU. The overall data mixture for training consisted of more than 30 trillion tokens
This puts Llama 4 Behemoth at 5e25 FLOPs (30% more than Llama-3-405B), trained on 32K H100s (only 2x more than Llama-3-405B) instead of the 128K H100s (or in any case, 100K+) they shou...
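Two quick checks of the 5e25 figure (a sketch; Behemoth's ~288B active parameters is Meta's announced spec, not something in the quote above, and 32K GPUs is taken as a round number):

```python
# (1) Compute via 6*N*D with active parameters and the quoted token count.
active_params = 288e9
tokens = 30e12                       # "more than 30 trillion tokens"
flops_6nd = 6 * active_params * tokens
print(f"6*N*D estimate: {flops_6nd:.1e}")          # ~5.2e25

# (2) Implied training time from the quoted hardware throughput.
achieved_per_gpu = 390e12            # 390 TFLOP/s per GPU, as quoted
n_gpus = 32_000
days = flops_6nd / (achieved_per_gpu * n_gpus) / 86_400
print(f"implied training time: {days:.0f} days")   # ~48 days
```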
For me a specific crux is scaling laws of R1-like training, what happens when you try to do much more of it, which inputs to this process become important constraints and how much they matter. This working out was extensively brandished but not yet described quantitatively, all the reproductions of long reasoning training only had one iteration on top of some pretrained model, even o3 isn't currently known to be based on the same pretrained model as o1.
The AI 2027 story heavily leans into RL training taking off promptly, and it's possible they are resonati...
Non-Google models of late 2027 use Nvidia Rubin, but not yet Rubin Ultra. Rubin NVL144 racks have the same number of compute dies and chips as Blackwell NVL72 racks (change in the name is purely a marketing thing, they now count dies instead of chips). The compute dies are already almost reticle sized, can't get bigger, but Rubin uses 3nm (~180M Tr/mm2) while Blackwell is 4nm (~130M Tr/mm2). So the number of transistors per rack goes up according to transistor density between 4nm and 3nm, by 1.4x, plus better energy efficiency enables higher clock speed, m...
Thanks for the comment Vladimir!
[...] for the total of 2x in performance.
I never got around to updating based on the GTC 2025 announcement, but I do have the Blackwell to Rubin efficiency gain down as ~2.0x adjusted by die size, so it looks like we are in agreement there (though I attributed it a little differently based on the information I could find at the time).
So the first models will start being trained on Rubin no earlier than late 2026, much more likely only in 2027 [...]
Agreed! I have them coming into use in early 2027 in this chart.
...This predic
Beliefs held by others are a real phenomenon, so tracking them doesn't give them unearned weight in attention, as long as they are not confused with someone else's beliefs. You can even learn things specifically for the purpose of changing their simulated mind rather than your own (in whatever direction the winds of evidence happen to blow).
The scale of training and R&D spending by AI companies can be reduced on short notice, while the global inference buildout costs much more and needs years of use to pay for itself. So an AI slowdown mostly hurts clouds and makes compute cheap due to oversupply, which might be a wash for AI companies. Confusingly, major AI companies are closely tied to cloud providers, but OpenAI is distancing itself from Microsoft, and Meta and xAI are not cloud providers, so they wouldn't suffer as much. In any case the tech giants will survive; it's losing their favor that seems more likely to damage AI companies, making them no longer able to invest as much in R&D.
https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
It's "mainstream" here, described well many times before.
if we didn't have a capitalist system, then the entire point about profit motives, pride, and race dynamics wouldn't apply
Presence of many nations without a central authority still contributes to race dynamics.
In the hypothetical where the paper's results hold, reasoning model performance at pass@k will match non-reasoning model performance with the number of samples closer to the crossover point between reasoning and non-reasoning pass@k plots. If those points for o1 and o3 are somewhere between 50 and 10K (say, at ~200), then pass@10K for o1 might be equivalent to ~pass@400 for o1's base model (looking at Figure 2), while pass@50 for o3 might be equivalent to ~pass@100 for its base model (which is probably different from o1's base model).
So the difference of 2...