Meet inside The Shops at Waterloo Town Square: at 7:00 pm we'll congregate in the indoor seating area next to the Your Independent Grocer (the one with the trees sticking out of the middle of the benches - pic) for about 20 minutes, then head over to the amenity room of my nearby apartment. If you've been around a few times, feel free to meet us at the front door of the apartment at 7:30 instead.
Topic
This week we'll be discussing the AI 2027 scenario from the AI Futures Project. Per Scott:
The summary: we think that 2025 and 2026 will see gradually improving AI agents. In 2027, coding agents will finally be good enough to substantially boost AI R&D itself, causing an intelligence explosion that plows through the human level sometime in mid-2027 and reaches superintelligence by early 2028... If AI is misaligned, it could move against humans as early as 2030 (ie after it’s automated enough of the economy to survive without us). If it gets aligned successfully, then by default power concentrates in a double-digit number of tech oligarchs and US executive branch members; this group is too divided to be crushingly dictatorial, but its reign could still fairly be described as technofeudalism. Humanity starts colonizing space at the very end of the 2020s / early 2030s.
Do we really think things will move this fast? Sort of no - between the beginning of the project last summer and the present, Daniel’s median for the intelligence explosion shifted from 2027 to 2028... Other members of the team (including me) have medians later in the 2020s or early 2030s, and also think automation will progress more slowly. So maybe think of this as a vision of what an 80th percentile fast scenario looks like - not our precise median, but also not something we feel safe ruling out.
But even if things don’t go this fast (or if they go faster - another possibility we don’t rule out!) we’re also excited to be able to present a concrete scenario at all... we think this is a step up in terms of detail and ability to show receipts (see the Research section on the top for our model and justifications). We hope it will either make our 3-5 year timeline feel plausible, or at least get people talking specifics about why they disagree.
Readings
One of:
With optional supplemental readings:
Discussion Questions
To what extent do you find the scenario plausible? Which aspects do you find most credible, and which stretch your suspension of disbelief the most? What are your key cruxes?
How might the race for and advent of superintelligent AI reshape geopolitical competition beyond the US-China binary presented in this scenario? What roles might other nations, multinational corporations, or other actors play? Also consider that China has made public statements advocating for international governance frameworks for AI, and its officials have expressed concerns about uncontrolled AI development. Does it seem fair for the scenario to write these off?
If you were convinced that AI development could plausibly accelerate as rapidly as this scenario suggests (even if it's not the most likely outcome), what would you personally do differently over the next three years? What skills, resources, or positions would you prioritize?