When I hear the hypothesis that world GDP doubles in 4 years before it doubles in 1 year, I imagine a curve that looks like this:
I don't think that's the right curve to imagine.
If AI is a perfect substitute for humans, then you would have (output) = (AI output) + (human output). If AI output triples every year, then the first time you will have a doubling of the economy in 1 year is when AI goes from 100% of human output to 300% of human output. Over the preceding 4 years AI will have grown from ~0% of human output to ~100% of human output, and on top of that you would have had human growth, so you would have had more than a doubling of the economy.
On the perfect substitutes model the question is roughly whether AI output is growing more or less than 3x per year.
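A minimal sketch of that arithmetic, with human output normalized to 1 and held fixed (my illustration; the numbers are purely for checking the claim):

```python
# Perfect-substitutes model: (output) = (human output) + (AI output),
# with AI output tripling every year. Numbers are purely illustrative.

GROWTH = 3.0   # AI output multiplier per year
human = 1.0    # normalize human output to 1 and hold it fixed for simplicity

# The first 1-year doubling of total output starts when
#   human + GROWTH * ai >= 2 * (human + ai),  i.e.  ai >= human / (GROWTH - 2).
ai_at_threshold = human / (GROWTH - 2)
print(ai_at_threshold)  # -> 1.0, i.e. AI output equals 100% of human output

# Total output at the start of that 1-year doubling vs. 4 years earlier:
output_now = human + ai_at_threshold
output_4y_ago = human + ai_at_threshold / GROWTH ** 4
print(output_now / output_4y_ago)  # -> ~1.98x; adding even modest human growth
                                   #    makes the preceding 4 years a full doubling
```

Raising the AI growth rate above 3x per year pulls that preceding-4-years factor below 2, and lowering it pushes the factor above 2, which is where the 3x threshold comes from.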
When I wrote this post I gave a 30% chance to fast takeoff according to the 1-year before 4-year operationalization. I would now give that more like a 40-50% chance. However, almost all of my fast takeoff probability is now concentrated on worlds that are quite close to the proposed boundary. My probability on scenarios like the "teleportation" discussed by Rob Bensinger here has continued to fall and is now <10%, though it depends on exactly how you operationalize them.
I think right now AGI economic output is growing more quickly than 3x/year. In reality there are a number of features that I think will push us to a significantly slower takeoff than this model would imply:
On the other hand there are complicating factors that push towards a faster takeoff:
Overall it seems fairly likely that there will be some ways of measuring output for which we have a fast takeoff and plausible ways for which we have a slow takeoff, basically depending on how you value AI R&D relative to other kinds of cognitive output. A natural way to do so is normalizing human cognitive output to be equal in different domains. I think that would be a fair though not maximally charitable operationalization of what I said; more charitable would be the valuations assigned by alien accountants looking at Earth and assessing net present values, and on that definition I think it's fairly unlikely that we get a fast takeoff.
I think probably the biggest question is how large AGI revenue gets prior to transformative AI. I would guess right now it is in the billions and maybe growing by 2-4x per year. If it gets up to tens of trillions then I think it is very likely you will have a slow takeoff according to this operationalization, but if it only gets up to tens or hundreds of billions then it will depend strongly on the valuation of speculative investments in AGI. (Right now total valuations of AGI as a technology are probably in the low hundreds of billions.)
But I do think it's a little harder to draw a plausible picture where AI progress shows up in GDP well before it becomes superintelligent.
I don't understand this. It seems extremely easy to imagine a world where AGI systems add trillions rather than billions of dollars of value well before becoming superintelligent. I feel like we can just list the potential applications to add up to trillions, and we can match that by extrapolating current growth rates for 5-10 years. I think the main reason to find slow takeoff hard to imagine is if you have a hard time imagining transformative AI in the 2030s rather than 2020s, but that's my modal outcome and so not very hard to imagine.
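To put rough numbers on that extrapolation (my arithmetic, not the commenter's; the $5B starting point is a made-up stand-in for "billions"):

```python
import math

# Extrapolate revenue growing 2-4x per year, per the ballpark figures above.
start_revenue = 5e9  # dollars per year of AGI revenue today (hypothetical)
target = 1e12        # $1 trillion per year

for growth in (2, 3, 4):
    years = math.log(target / start_revenue) / math.log(growth)
    print(f"At {growth}x/year: ~{years:.1f} years from $5B to $1T")
```

At 2x, 3x, and 4x per year this gives roughly 8, 5, and 4 years respectively, which lines up with the 5-10 year extrapolation above.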
Thanks for the reply. If I'm understanding correctly, leaving aside the various complications you bring up, are you describing a potential slow growth curve that (to a rough approximation) looks like:
This story sounds plausible to me, and it basically fits the slow-takeoff operationalization.
Complementarity between humans and AIs. I see plausible arguments for low complementarity owing to big advantages from full automation, but it seems pretty clear there will be some complementarity, i.e. that output will be larger than (AI output) + (human output). Today there is obviously massive complementarity. Even modest amounts of complementarity significantly slow down takeoff. I believe there is a significant chance (perhaps 30%?) that complementarity from horizon length alone is sufficient to drive an unambiguously slow takeoff.
This is a big crux, in that I believe complementarity is very low, low enough that in practice, it can be ignored.
And I think Amdahl's law severely suppresses complementarity. This is a crux, in that if I changed my mind about it, then I think slow takeoff is likely.
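To make the crux concrete, here is a small Amdahl-style illustration (mine, not either commenter's model): if some fraction of cognitive work still has to happen at human speed, that fraction caps the overall speedup no matter how fast the automated part gets.

```python
# Amdahl-style bottleneck: a fraction `bottleneck` of the work stays at human
# speed while the remaining (1 - bottleneck) is accelerated by `ai_speedup`.
# The overall speedup is capped at 1 / bottleneck. Numbers are illustrative.

def effective_speedup(ai_speedup: float, bottleneck: float) -> float:
    """Overall speedup when (1 - bottleneck) of the work is sped up by ai_speedup."""
    return 1.0 / (bottleneck + (1.0 - bottleneck) / ai_speedup)

for bottleneck in (0.0, 0.01, 0.05, 0.20):
    for ai_speedup in (10, 100, 1_000_000):
        print(f"bottleneck={bottleneck:4.0%}  AI speedup={ai_speedup:>9,}x  "
              f"overall={effective_speedup(ai_speedup, bottleneck):12,.1f}x")
```

On this toy model, "complementarity is low enough to ignore" roughly corresponds to the bottleneck fraction being near zero; even a 5% bottleneck caps the overall speedup at 20x.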
One common definition of a slow AGI takeoff is that world GDP doubles over some 4-year interval before it doubles over any 1-year interval.
(For example, this Metaculus question)
But this might not happen even if AGI develops slowly.
For illustration, divide the economy into the part driven by AI and the part driven by other stuff. I imagine a "slow" takeoff looking like this, where AI progress accelerates faster than the rest of the economy and eventually takes over:
But in this world, AI doesn't have a major effect on the economy until it's just about to reach the transformative level. It might be slow in terms of technological progress, but it's fast in terms of GDP.
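A toy version of that picture, with made-up parameters: the AI-driven part triples every year from a tiny base while the rest of the economy grows a few percent per year, and AI's share of GDP stays negligible until a few years before it dominates.

```python
# Two-part economy: a slowly growing "rest" plus a fast-growing AI-driven part.
# Parameters are made up purely for illustration.

REST_GROWTH = 1.03    # rest of the economy grows ~3% per year
AI_GROWTH = 3.0       # AI-driven output triples each year
rest, ai = 1.0, 1e-6  # AI starts at a millionth of the economy

for year in range(16):
    total = rest + ai
    print(f"year {year:2d}: AI share of GDP = {ai / total:9.4%}")
    rest *= REST_GROWTH
    ai *= AI_GROWTH
```

With these numbers AI is under 1% of GDP through year 8 but over half of GDP by year 13, which is the "doesn't show up in GDP until just before" shape.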
When I hear the hypothesis that world GDP doubles in 4 years before it doubles in 1 year, I imagine a curve that looks like this:
Which just doesn't really make sense.
I'm not saying a slow takeoff will definitely look fast. I'm not saying that believing in a slow economic takeoff requires drawing a silly graph like the second one above. But I do think it's a little harder to draw a plausible picture where AI progress shows up in GDP well before it becomes superintelligent.