What a compute-centric framework says about AI takeoff speeds
As part of my work for Open Philanthropy I've written a draft report on AI takeoff speeds: the question of how quickly AI capabilities might improve as we approach and surpass human-level AI. Will human-level AI be a bolt from the blue, or will we have AI that is nearly as capable many years earlier?

Most of the analysis is from the perspective of a compute-centric framework, inspired by the one used in the Bio Anchors report, in which AI capabilities increase continuously with more training compute and with work to develop better AI algorithms.

This post doesn't summarise the report. Instead I want to explain some of the high-level takeaways from the research, which I think apply even if you don't buy the compute-centric framework.

The framework

(h/t Dan Kokotajlo for writing most of this section)

This report accompanies and explains https://takeoffspeeds.com (h/t Epoch for building this!), a user-friendly quantitative model of AGI timelines and takeoff, which you can go play around with right now. (By AGI I mean "AI that can readily[1] perform 100% of cognitive tasks" as well as a human professional; AGI could be many AI systems working together, or one unified system.)

[Figure: Takeoff simulation with Tom's best-guess value for each parameter.]

The framework was inspired by and builds upon the previous "Bio Anchors" report. The "core" of the Bio Anchors report was a three-factor model for forecasting AGI timelines (a toy version is sketched in code below):

[Figure: Dan's visual representation of the Bio Anchors report.]

1. Compute to train AGI using 2020 algorithms. The first and most subjective factor is a probability distribution over training requirements (measured in FLOP) given today's ideas. It allows for some probability to be placed in the "no amount would be enough" bucket.
   1. The probability distribution is shown by the coloured blocks on the y-axis in the figure above.
2. Algorithmic progress. The second factor is the rate at which new ideas come along, lowering AGI training requirements. Bio Anchors models this
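To make the three-factor structure concrete, here is a minimal Monte Carlo sketch of a Bio-Anchors-style timelines calculation. Everything in it is an illustrative assumption rather than a value or formula from the report: the parameter values, the log-normal shape of the requirements distribution, and the treatment of the third factor (assumed here to be growth in the compute available for the largest training run).

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# All values below are illustrative assumptions, not the report's estimates.
P_NO_AMOUNT_ENOUGH = 0.10      # probability mass on "no amount of 2020-algorithm compute is enough"
MEDIAN_LOG10_FLOP = 36.0       # median training requirement with today's ideas, in log10(FLOP)
SPREAD_LOG10_FLOP = 3.0        # standard deviation of the requirement, in log10(FLOP)
ALGO_HALVING_YEARS = 2.5       # years for algorithmic progress to halve the requirement
LARGEST_RUN_LOG10_FLOP = 24.0  # log10(FLOP) of the largest training run in the base year
COMPUTE_DOUBLING_YEARS = 1.5   # doubling time of the largest affordable training run

def sample_agi_year(horizon_years: int = 80) -> float:
    """Return one sampled AGI arrival time (years after the base year), or inf if never."""
    if rng.random() < P_NO_AMOUNT_ENOUGH:
        return np.inf
    # Factor 1: training compute needed for AGI with 2020 algorithms (log-normal over FLOP).
    requirement = rng.normal(MEDIAN_LOG10_FLOP, SPREAD_LOG10_FLOP)
    for t in range(horizon_years + 1):
        # Factor 2: algorithmic progress steadily lowers the effective requirement.
        effective_requirement = requirement - (t / ALGO_HALVING_YEARS) * np.log10(2)
        # Third factor (assumed here): the largest affordable training run grows over time.
        available = LARGEST_RUN_LOG10_FLOP + (t / COMPUTE_DOUBLING_YEARS) * np.log10(2)
        if available >= effective_requirement:
            return float(t)
    return np.inf

samples = np.array([sample_agi_year() for _ in range(10_000)])
print("P(AGI within 20 years):", (samples <= 20).mean())
print("P(AGI within 40 years):", (samples <= 40).mean())
```

The point of the sketch is just the structure: sample a training requirement, let algorithmic progress lower it over time, and check when the compute you can actually bring to bear catches up.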
Thanks, I read the Phil Trammell critique and it helped me understand your position.
My summary is that points 1, 2, 3, 4, 6, and 9 were basically saying "AI might be misaligned and agentic, and that could be bad for humans" and "maybe the institutions of law / democracy / markets will break down".
I get that if you think these things are pretty likely, the analysis is less interesting and you want the assumptions flagged.
So overall I agree Phil should have a disclaimer like "I'm assuming we'll get a great solution to alignment and that the current institutions of law + markets survive", but I don't think he needs to list out something like 10 assumptions.