Suppose AGI happens in 2035 or 2045. Will takeoff be faster, or slower, than if it happens in 2027?
Intuition for slower: In the models of takeoff that I've seen, longer timelines are correlated with slower takeoff, because they share a common cause: the inherent difficulty of training AGI. To put it more precisely: there are all these capability milestones we're interested in, such as superhuman coders, full AI R&D automation, AGI, ASI, etc., and there's an underlying question of how much compute, data, tinkering, etc. will be needed to get from milestone 1 to 2 to 3 to 4 and so on, and these quantities are probably all correlated (at least in our current epistemic state). Moreover, in the 2030s the rate of growth of inputs such as data, compute, etc. will have slowed, so all else equal the pace of takeoff should be slower.
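To make the correlational intuition concrete, here's a toy Monte Carlo sketch. All the parameter values are made up for illustration (this is not a calibrated model): a single latent "difficulty" factor drives both the time until AGI and the AGI-to-ASI gap, so conditioning on a later AGI arrival year shifts the expected takeoff duration upward.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # number of sampled "worlds"

# Shared latent "inherent difficulty" factor: in hard worlds, every
# milestone-to-milestone jump takes more compute/data/tinkering.
difficulty = rng.lognormal(mean=0.0, sigma=1.0, size=n)

# Years from ~2025 to AGI, and from AGI to ASI, both scale with the same
# difficulty factor plus independent noise. Illustrative numbers only,
# not calibrated forecasts.
years_to_agi = 2.0 * difficulty * rng.lognormal(0.0, 0.3, size=n)
takeoff_years = 1.0 * difficulty * rng.lognormal(0.0, 0.3, size=n)
agi_year = 2025 + years_to_agi

# Condition on when AGI arrives and look at the expected takeoff duration.
for lo, hi in [(2026, 2028), (2034, 2036), (2044, 2046)]:
    mask = (agi_year >= lo) & (agi_year < hi)
    print(f"AGI in [{lo}, {hi}): mean AGI-to-ASI gap ~ "
          f"{takeoff_years[mask].mean():.1f} years (n={mask.sum()})")
```

In this toy setup the later-AGI worlds have systematically longer AGI-to-ASI gaps, purely because arriving late is evidence that the shared difficulty factor is large; the causal "more compute lying around" effect from the next intuition isn't modeled here at all.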
Intuition for faster: That was all about correlation. Causally, it seems clear that longer timelines cause faster takeoff, because there's more compute lying around, more data available, more of everything. If you have (for example) just reached the full automation of AI R&D, and you are trying to pull off the next big paradigm shift that'll take you to ASI, you'll have orders of magnitude more compute and data to experiment with (and your automated AI researchers will be both more numerous and serially faster!) if it's 2035 instead of 2027. "So what?" the reply goes. "Correlation is what matters for predicting how fast takeoff will be in 2035 or 2045. Yes, you'll have +3 OOMs more resources with which to do the research, but (in expectation) the research will require (let's say) +6 OOMs more resources."

But I'm not fully satisfied with this reply. Apparent counterexample: consider the paradigm of brainlike AGI, in which the tech tree is (1) Figure out how the human brain works, (2) Use those principles to build an AI that has similar properties, i.e. similar data-efficient online learning blah blah blah, and (3) train that AI in some simulation environment si