I expect "slow takeoff," which we could operationalize as the economy doubling over some 4-year interval before it doubles over any 1-year interval. Lots of people in the AI safety community have strongly opposing views, and it seems like a really important and intriguing disagreement. I feel like I don't really understand the fast takeoff view.
(Below is a short post copied from Facebook. The link contains a more substantive discussion. See also: AI Impacts on the same topic.)
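To make that operationalization concrete, here is a minimal sketch in Python. The check and the annual GDP series are purely hypothetical illustrations (the growth numbers are made up), just to show what the definition asks of the data:

```python
# Minimal sketch of the "slow takeoff" operationalization above: slow takeoff
# holds if world output doubles over some 4-year interval before it ever
# doubles over a 1-year interval. The GDP figures below are hypothetical.

def first_doubling_end(gdp, window):
    """Index of the earliest year at which GDP has at least doubled
    relative to `window` years earlier, or None if that never happens."""
    for i in range(window, len(gdp)):
        if gdp[i] >= 2 * gdp[i - window]:
            return i
    return None

def is_slow_takeoff(gdp):
    """True if a 4-year doubling completes strictly before any 1-year doubling."""
    four_year = first_doubling_end(gdp, window=4)
    one_year = first_doubling_end(gdp, window=1)
    if four_year is None:
        return False          # output never doubles over 4 years at all
    return one_year is None or four_year < one_year

# Hypothetical annual GDP series with smoothly accelerating growth factors:
growth = [1.03, 1.05, 1.10, 1.18, 1.30, 1.50, 1.80, 2.20, 3.00]
gdp = [100.0]
for g in growth:
    gdp.append(gdp[-1] * g)

print(is_slow_takeoff(gdp))   # True: the 4-year doubling arrives first
```

On this toy series growth accelerates smoothly, so the first 4-year doubling completes a couple of years before the first 1-year doubling, which is the pattern I'm calling slow takeoff.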
I believe that the disagreement is mostly about what happens before we build powerful AGI. I think that weaker AI systems will already have radically transformed the world, while fast takeoff proponents seem to think there are factors that make weak AI systems radically less useful. This is...
I also take this approach to agent foundations, which is why I like to tie different agendas together. Studying AIXI is part of that because many other approaches can be described as "depart from AIXI in this way to solve this informally stated problem with AIXI."
I'm here from the future, trying to decide how much to believe in Gods of Straight Lines and how common they are, and I'm curious if you could say more about your argument here.