2024-12-12
My AI timelines
DISCLAIMER
This document is mainly aimed at LessWrong (LW) rationalists / effective altruists / adjacent people, since a lot of my work is culturally downstream of theirs, and a lot of my potential research collaborators exist in their communities. This doc will make less sense if you haven't encountered their writings before.
Most of this doc is guesswork rather than models I have a lot of confidence in. Small amounts of evidence could upend my entire view of these topics.
If you have evidence that my view is wrong, please tell me. Not having to spend any more time thinking about AI would improve my quality of life. I am being completely serious when I say I might thank you till the day I die if you persuade me either way. I can also pay you at least $1000 for convincing me, although we'll have to discuss the details if you really want to be paid.
The last time I read papers on this topic was early 2023, so it's possible I'm not up to speed on the latest research. Feel free to send me anything that's relevant.
I have ~15% probability humanity will invent artificial superintelligence (ASI) by 2030.
Conditional on ASI being invented by 2030, I expect ~30% probability that it will kill everyone on Earth soon after. In total, that's ~5% probability of humanity being killed by ASI by 2030.
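For concreteness, the ~5% figure is just the product of the two estimates above:

```python
# Spelling out the arithmetic behind the ~5% figure.
p_asi_by_2030 = 0.15       # P(ASI is invented by 2030)
p_doom_given_asi = 0.30    # P(everyone on Earth is killed soon after | ASI by 2030)

p_doom_by_2030 = p_asi_by_2030 * p_doom_given_asi
print(p_doom_by_2030)      # ~0.045, i.e. roughly 5%
```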
I have chosen not to work on this problem myself. This is downstream of ~15% not being large enough and of the problem not capturing my curiosity enough. I am not a utilitarian who blindly picks the biggest problem to solve, although I do like picking bigger problems over smaller ones. If you are working on the safety of AGI/ASI, I think you are doing important work and I applaud you for it.
Reasons for my beliefs
Mildly relevant rant on consensus-building: Both these probabilities seem to involve dealing with what I'd call deep priors. "Assume I take a random photo of Mars and pick a random pixel from that photo: what is the probability its RGB value has more Green than Blue?" Humans tend to agree more on prior probabilities when there's enough useful data around the problem to form a model of it. They tend to agree less when they're given a problem with very little obviously useful data and need to rely on a lifetime of almost-useless-but-not-completely-useless data instead. The "deepest" prior in this ontology is the universal prior.
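To make the Mars example concrete, here is a minimal sketch of how you'd answer the question if you somehow had pixel data (the `mars_pixels` dataset is hypothetical); the point of the example is that without such data, you're thrown back onto much deeper priors.

```python
# Hypothetical sketch: answering the Mars-pixel question *if* you had data.
# `mars_pixels` is an assumed list of (R, G, B) tuples sampled from real Mars photos.
import random

def estimate_green_gt_blue(mars_pixels, n_samples=10_000):
    sampled = random.choices(mars_pixels, k=n_samples)
    return sum(g > b for _, g, b in sampled) / n_samples
```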
~15% is lower than what many people in EA/LW communities assign, because I reject a lot of the specific models they use to forecast a higher likelihood of ASI.
Discontinuous progress in human history by Katja Grace is the closest thing I could find to work that tries to evaluate this kind of prior probability. I have not spent a lot of time searching, though. Convincing me either way will require publishing, or pointing me to, a lot more work of this type. Alternatively, you could provide me a gears-level model that explains why the LLM scaling laws empirically hold.
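For reference, the empirical scaling laws in question are fits roughly of the form sketched below (a Chinchilla-style parametric loss curve; the constants are placeholders, not fitted values). A gears-level model would explain why a form like this holds at all.

```python
# Rough functional form of the empirical LLM scaling laws (Chinchilla-style fit).
# N = parameter count, D = training tokens. Constants are placeholders, not fitted values.
def predicted_loss(N, D, E=1.7, A=400.0, alpha=0.34, B=400.0, beta=0.28):
    return E + A / N**alpha + B / D**beta
```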
~15% is higher than what many AI researchers assign, because I reject a lot of the specific reasons they give for why LLM scaling cannot possibly achieve ASI.
The ~30% probability of extinction, conditional on ASI being invented by 2030, is because I am more optimistic about boxing an ASI than some LW rationalists. I do believe misalignment happens by default with high probability.