I don't think I make the claim that a DSA is likely to be achieved by a human faction before AI takeover happens. My modal prediction (~58% as written in the post) for this whole process is that the AI takes over while the nations are trying to beat each other (or failing to coordinate).
In a world where the leading project has a large secret lead and has solved superalignment (an unlikely intersection), then yes, I think a DSA is achievable.
Maybe the thing you're claiming is that my opening paragraphs don't emphasize AI takeover enough to properly convey how strongly I expect it. I'm pretty sympathetic to this point.
One analogy for AGI adoption I prefer over "tech diffusion a la the computer" is "employee turnover."
Assume you have an AI system that can do everything any worker could do, including walking around in an office, reading social cues, and everything else an excellent human coworker does.
Then, barring regulation or strong taste-based preferences, any future hiring round will hire such a robot over a human. The question of when most of the company's employees are robots then just becomes the question of when most of the workforce naturally turns over through hiring and firing, because all new incoming employees will be robots.
Of course, in this world there wouldn't just be typical hiring rounds; there would probably be massive layoffs to replace humans with robots. But typical hiring rounds provide an upper bound on how long the process would take. If the only way for the company to "adopt" AGI is to hire human-shaped things, then the AGI will be human-shaped.
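As a rough illustration of that upper bound, here's a minimal sketch assuming a constant turnover rate (the 15% figure is made up purely for illustration):

```python
# Rough arithmetic for the turnover upper bound: if every new hire is a robot
# and a fixed fraction of roles turns over each year, how long until robots
# are the majority of the workforce? The turnover rate is a made-up
# illustration, not a measured figure.
import math

annual_turnover = 0.15  # hypothetical: 15% of roles refilled per year

# Fraction of the original human workforce remaining after t years is
# (1 - annual_turnover) ** t; solve for the t at which it drops below 50%.
years_to_robot_majority = math.log(0.5) / math.log(1 - annual_turnover)
print(f"Robots become the majority after ~{years_to_robot_majority:.1f} years")
# With 15% annual turnover this is ~4.3 years -- an upper bound, not a forecast.
```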
The turnover model is not what automation will actually look like; it's just an upper bound on how long it would take. In practice, ignoring regulation and x-risk, the time between ASI and 90% US unemployment would be more like 0-2 years, because a superintelligence could come up with very quick plans to automate the economy, and the incentives will be much stronger than in typical hiring/firing decisions.
Another consideration is takeoff speeds: TAI happening earlier would mean further progress is more bottlenecked by compute, and thus takeoff is slowed down. A slower takeoff gives humans more time to inform their decisions (but might also make things harder in other ways).
The base models seem to have topped out at a task length of a few minutes around 2023 (note on the plot that GPT-4o is only a little better than GPT-4). Reasoning models use search to do better.
Note that Claude 3.5 Sonnet (Old) and Claude 3.5 Sonnet (New) have a longer time horizon than 4o: 18 minutes and 28 minutes compared to 9 minutes (Figure 5 in Measuring AI Ability to Complete Long Tasks). GPT-4.5 also has a longer time horizon.
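For context on what these "time horizon" numbers mean: as I understand the METR methodology, a model's 50% time horizon is the task length at which its success rate falls to 50%, read off a logistic fit of success against log task length. A minimal sketch with invented data (the real figures come from the paper):

```python
# Illustrative sketch of reading off a 50% time horizon: fit a logistic curve
# of success vs. log task length, then find where predicted success crosses
# 50%. The task lengths and outcomes below are invented for illustration; see
# the METR paper for the actual methodology and data.
import numpy as np
from sklearn.linear_model import LogisticRegression

lengths_min = np.array([1, 2, 4, 8, 15, 30, 60, 120, 240])  # human task lengths
succeeded = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0])           # model outcomes

X = np.log(lengths_min).reshape(-1, 1)
fit = LogisticRegression().fit(X, succeeded)

# The logit is zero (P = 0.5) at x = -intercept / coef.
horizon_min = np.exp(-fit.intercept_[0] / fit.coef_[0, 0])
print(f"50% time horizon ≈ {horizon_min:.0f} minutes")
```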
Thanks for writing this.
Aside from maybe Nikola Jurkovic, nobody associated with AI 2027, as far as I can tell, is actually expecting things to go as fast as depicted.
I don't expect things to go this fast either - my median for AGI is in the second half of 2028, but the capabilities progression in AI 2027 is close to my modal timeline.
Note that the goal of "work on long-term research bets now so that a workforce of AI agents can automate them in a couple of years" implies somewhat different priorities than "work on long-term research bets to eventually have them pay off through human labor", notably:
(some of these push in opposite directions, e.g., engineering-heavy research outputs might be especially good for legibility)
I expect the trend to speed up before 2029 for a few reasons:
This has been one of the most important results for my personal timelines to date. It was a big part of the reason I recently updated from a ~3-year median to a ~4-year median (counting from 2022) for AI that can automate >95% of remote jobs, and why my distribution has become narrower overall (less probability on really long timelines).
Agreed, thanks! I've moved that discussion down to timelines and probabilities.