Double Cruxing the AI Foom debate
This post is a response both to Paul Christiano's post and to the original foom debate. I am trying to find the exact circumstances and conditions that would point to a fast or a slow AI takeoff.

Epistemic status: I have changed my mind a few times about this. I currently think that both fast and slow takeoff scenarios are plausible, but I put more probability mass on slow takeoff under normal circumstances.

To be clear about the terminology: when we say fast takeoff, we mean an AGI that goes from a "seed AI" to a "world-optimizing AGI" with a decisive, unstoppable advantage in less than a week. That is, it can gather enough resources to improve itself to the point where its values come to dominate human ones in steering the future. This does not necessarily mean the world would end within that week if everything went badly, but rather that the process of human decline would become irreversible at that point. (From here.)

When we talk about slow takeoff, we mean a situation where the economic doubling time gradually falls below 15 years, potentially reaching 1-2 years (see the doubling-time arithmetic at the end of this section). Note that this "slow" scenario is still a fundamental transition of human society, on the order of the agricultural or industrial revolution. Most people in the broader world imagine significantly slower transitions than what the AI community considers "slow".

"Middle takeoff" is something in between the two scenarios.

There are several analogies that people use to favor slow takeoff, such as:

a) The economic model of gradually replacing humans in the self-improvement loop
b) The corporate model of self-improvement, or the lack thereof
c) Previous shifts in human society

There are several analogies that people use to favor fast takeoff, such as:

a) The development of nuclear weapons
b) The actual physical model of how nuclear weapons work
c) The development of human brains compared to evolution

There are also several complicating factors, such as:

a) The exact geo-political …
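To make the "slow" numbers concrete, here is the standard conversion between a doubling time $T_d$ and the implied annual growth rate $g$ (my own arithmetic, not part of either post):

```latex
g = 2^{1/T_d} - 1:
\qquad T_d = 15\ \text{yr} \;\Rightarrow\; g \approx 4.7\%,
\qquad T_d = 2\ \text{yr} \;\Rightarrow\; g \approx 41\%,
\qquad T_d = 1\ \text{yr} \;\Rightarrow\; g = 100\%.
```

For reference, today's roughly 3% world growth corresponds to a doubling time of about 23 years, so even the "slow" scenario is markedly faster than the present.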
It's Pasha Kamyshev, btw :) Main engagement is through:
1. Reading MIRI papers, especially the older agent foundations agenda papers
2. Following flashy developments in AI, such as the Dota and Go RL results, while being somewhat skeptical of the "random play" part of the whole thing (other parts are indeed impressive)
3. Various math textbooks: Category Theory for Programmers, Probability Theory: The Logic of Science, and others
4. Trying to implement certain theoretical constructs in code (quantilizers, different prediction market mechanisms); see the quantilizer sketch after this list
5. Statistical investigations into various claims of "algorithmic bias"
6. Conversations with various people in the community on the topic
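Since quantilizers come up in item 4, here is a minimal sketch of the idea, assuming the standard Monte Carlo formulation from Taylor's quantilizer paper; the function names and the toy base distribution are mine, not from any of the sources above:

```python
import random

def quantilize(base_sampler, utility, q=0.1, n=10_000, rng=random):
    """Monte Carlo q-quantilizer (after Taylor 2016).

    Draw n candidate actions from the base distribution, rank them by
    utility, and return one sampled uniformly from the top q fraction,
    rather than returning the single utility-maximizing action.
    """
    candidates = [base_sampler() for _ in range(n)]
    candidates.sort(key=utility, reverse=True)
    top = candidates[: max(1, int(q * n))]
    return rng.choice(top)

# Toy usage: actions are reals drawn from a "human-like" base
# distribution; utility rewards large actions. A maximizer would chase
# extreme outliers, while the quantilizer stays near typical behavior.
if __name__ == "__main__":
    print(quantilize(lambda: random.gauss(0.0, 1.0), utility=lambda a: a, q=0.05))
```

The design point is that sampling from the top q fraction of a human-like base distribution bounds how far the chosen action can stray from typical behavior, which an argmax over actions does not.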