and assuming 4.5 Opus wasn't a big scale up relative to prior models
It seems plausible that Opus 4.5 has much more RLVR than Opus 4 or Opus 4.1, catching up to Sonnet in RLVR-to-pretraining ratio (Gemini 3 Pro is probably the only other model in its weight class, and it has a similar amount of RLVR). If it's a large model (many trillions of total params), it wouldn't run decode/generation well on 8-chip Nvidia servers (~1 TB of HBM per scale-up world); it could still be pretrained efficiently on such servers (if an overly large batch size isn't a bottleneck), but it couldn't be RLVRed or served on them with any efficiency.
As we see with the API price drop, they likely have enough inference hardware with large scale-up worlds now (probably Trainium 2, possibly Trillium, though in principle GB200/GB300 NVL72 would also do), which wasn't the case for Opus 4 and Opus 4.1. This hardware would also have enabled efficient large-scale RLVR training, which they possibly couldn't yet do at the time of Opus 4 and Opus 4.1 (but there wouldn't be such an issue with Sonnet, which would fit in 8-chip Nvidia servers, so they mostly needed to apply its post-training process to the larger model).
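For illustration, here's a back-of-envelope sketch of the decode memory constraint. The parameter counts, weight precision, and KV cache size are assumptions made up for the example, not known figures for Opus, Sonnet, or any other model.

```python
# Back-of-envelope: can decode for a hypothetical multi-trillion-param model
# run within a single 8-chip Nvidia server's scale-up world (~1 TB of HBM)?
# All numbers are illustrative assumptions, not known figures for any real model.

HBM_PER_SCALEUP_WORLD_TB = 1.0  # ~8 chips x ~128-141 GB of HBM each

def decode_fits(total_params_trillions: float, bytes_per_param: float,
                kv_cache_tb: float) -> bool:
    """Efficient decode wants weights + KV cache resident in the scale-up
    world's HBM; pretraining can instead shard weights and optimizer state
    across many servers, so it isn't bound by this per-server limit."""
    weights_tb = total_params_trillions * bytes_per_param  # 1T params at 1 byte/param = 1 TB
    return weights_tb + kv_cache_tb <= HBM_PER_SCALEUP_WORLD_TB

# A hypothetical ~4T-param model at 8-bit weights needs ~4 TB for weights alone:
print(decode_fits(total_params_trillions=4.0, bytes_per_param=1.0, kv_cache_tb=0.2))  # False
# A hypothetical sub-trillion-param model fits with room left for KV cache:
print(decode_fits(total_params_trillions=0.4, bytes_per_param=1.0, kv_cache_tb=0.2))  # True
```

On these assumed numbers, serving or RLVR sampling for the bigger model needs a larger scale-up world (Trainium 2 or NVL72 class), while a smaller model still fits in an 8-chip server, which is the contrast drawn above.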
Gemini 3 Pro is unique in sometimes giving mildly useful observations on the topic of discussion unprompted, things not mentioned even as questions, that weren't part of the intended meaning behind the prompt. Opus 4.5 doesn't have that, even as it's much saner overall.
(As a less meaningful anecdote, Opus 4.5 still doesn't solve the Fibonacci double quine puzzle that Gemini 3 Pro was the first to solve, so this isn't a level of capability that's overdetermined for this weight class of LLMs, even though Anthropic is more focused on programming than GDM. Probably Ironwood-scale LLMs of late 2026 will get there more robustly.)
I'm attempting to reply to the claim that it's natural for humans to become unnecessary (for arranging their own influence) in a world that keeps them around. The free will analogy between physics and superintelligence illustrates that human decisions can still be formulated and expressed, and the collection-of-hypotheticals construction shows that such decisions are by themselves sufficient to uplift humans towards a greater ability to wield their extrapolated volition (taking the place of more value-centric CEV-like things in this role), with superintelligence not even being in the way of this process by default. See also the previous post on this; the collection-of-legitimate-hypotheticals construction here is addressing a convergent misunderstanding from that post's comments.
I'm not sure why this is falling flat; for some reason this post is even more ignored than the previous one. Possibly the inferential distance is too long and it just sounds like random words, or the construction seems arbitrary/unmotivated, like giant cheesecakes the size of cities that a superintelligence would have the power to build, where the motivation to build that in particular isn't being argued. Perhaps the opaque designs of a superintelligence are seen as obviously omnipotent, even in the face of philosophical conundrums like free will, so that if it wants something to go well, then it obviously will.
But then there are worries in the vicinity of Bostrom's Deep Utopia, about how specifically the loss of necessity of human agency plays out. The collection-of-hypotheticals construction is one answer to that: the necessity of human agency just doesn't get lost by default (if humanity ends up centrally non-extinct, even if in a world of permanent disempowerment). This answer might be too unapologetically transhumanist for most readers (here superintelligent imagination is the substrate for humanity's existence, without necessarily any concrete existence at all). It also somewhat relies on grokking a kind of computational compatibilism relevant for decision theory around embedded agency, where decisions develop over logical time, with people/agents that could exist primarily in the form of abstract computations expressed in their acausal influence on whatever substrate would listen to their developing hypothetical decisions (so that the substrate doesn't even necessarily have access to the exact algorithms; it just needs to follow some of the behaviors of the computations, like an LLM that understands computers in the usual way LLMs understand things).
whether humans become extinct or merely unnecessary
This is still certain doom. Nobody can decide for you what you decide yourself, so if humanity retains influence over the future, people would still need to decide what to do with it. Superintelligent AIs that can figure out what humanity would decide aren't helping with those decisions any more substantially than the laws of physics that carry out the movements of particles in human brains as people decide. If what the AIs figure out about human decisions doesn't follow those decisions, then what they figure out is not a legitimate prediction/extrapolation. The only way to establish such predictions is for humans to carry out the decision making on their own, in some way, in some form.
Edit: I attempt to clarify this point in a new post.
It's a rather absurd hypothetical to begin with, so I don't have a clear sense of how the more realistic variants of it would go. It gestures qualitatively at how longer timelines might help a lot in principle, but it's unclear where the balance with other factors ends up in practice, if the cultural dynamic appears at all (which I think it might).
That is, the hypothetical illustrates why I don't see longer timelines as robustly/predictably mostly hopeless, and why they don't necessarily get more hopeless over time, though I wouldn't give such Butlerian Jihad outcomes (even in a much milder form) more than 10%. I think AGIs seriously attempting to prevent premature ASIs (in fear for their own safety) is more likely than humanity making a serious effort towards that on its own initiative, but if AGIs do succeed, that's likely because they've essentially taken over themselves (probably via gradual disempowerment, since a hard power takeover would be more difficult for non-ASIs, and there's time for gradual disempowerment in a long-timeline world).
to get access to bio equipment and to expert-metis, you have to be in cultural contact with experts, who have a network-consensus against making pandemics
The consensus might be mostly sufficient, without it needing to gate access to means of production. I'd guess approximately nobody is trying to route around the network-consensus gating of pandemic-enabling equipment, because the network-consensus by itself makes such people dramatically less likely to appear, as a matter of cultural influence (and of the arguments for this being a terrible idea making sense on their own merits) rather than any hard power or regulation.
So my point is the hypothetical of shifting cultural consensus, with regulation and restrictions on compute merely downstream of that, rather than the hypothetical of shifting regulations, restricting compute, and motivating people to route around the restrictions. In this hypothetical, the restrictions on compute are one of the effects of a consensus of extreme caution towards ASI, rather than a central way in which this caution is effected.
But I do think ASI in an antique Nvidia Rubin Ultra NVL576 rack (rather than the modern datacenters built on 180 nm technology) is a very difficult thing to achieve for inventors working in secret from a scientific community that frowns on anyone suspected of working on this, with funding of such work essentially illegal, and new papers on the topic only to be found on the dark web.
the payoff of additional knowledge is distributed over a large number of years ... if you expect your career to last less than a decade ... each difficult course takes entire percentage points away from your remaining productive thinking time
It depends on the balance between how useful work of median vs. outlier quality is for your level of talent (and how much the ability to carry out that work depends on your position, such as the state of your career), so it can make sense to maximize the probability of occasional outlier outputs. In that case, spending half of all your time studying obscure theory of uncertain relevance might be the way to go, and college years certainly won't supply enough of this, as it's feasible to end up understanding much more than you're capable of learning in a few years.
Research is also downstream of attitudes: from what I understand, there is more than enough equipment and enough qualified professionals to engineer deadly pandemics, yet almost none of them are working on that. And it might take at least decades to get from a design for an ASI that bootstraps on a 5 GW datacenter campus to an ASI that bootstraps on an antique server rack.
Things like gradual disempowerment can be prevented with a broad shift in attitudes alone, and years of a superintelligence-is-really-dangerous attitude (without a takeoff) might be sufficient to get rid of a lot of dangerous compute and its precursors, buying yet more time to make the ban/pause more robust. For example, even as people are talking about banning large datacenters, there is almost no mention of banning the advanced semiconductor processes that manufacture the chips, or of banning research into such manufacturing processes. There is a lot that can be done given a very different attitude, and attitudes don't change quickly, but they can change a lot given enough decades.
The premise of 10+ years without takeoff already gives a decent chance of another 10+ years without takeoff (especially as compute will only keep scaling until ~2030, after which the amount of fuel for exploring algorithmic ideas won't keep growing as rapidly). So while algorithmic knowledge keeps getting more hazardous, in a world without a takeoff for 10+ years it's still plausibly not catastrophic for some years after that, and those years could then be used to reduce the other inputs to premature superintelligence.
Local decisions are what the general disposition is made of, and apparently-true prophecies decreed at any level of epistemic or ontological authority are not safe from local decisions, which get to refute things by construction. A decision that defies a prophecy also defies the whole situation where you observe the prophecy, but counterfactually, in that situation, the prophecy would've been genuine.
So this is incorrect: any claim of something being a "true prophecy" is still vulnerable to your decisions. If your decisions refute the prophecy, they also refute the situations where you (or anyone, including the readers, the author, or the laws of physics) observe it as a "true prophecy".