Short timelines, slow takeoff vs. Long timelines, fast takeoff
Because chain-of-thought (CoT) in the current paradigm seems like great news for AI safety, some people seem to hold the following expectations:
Short timelines: CoT reduces risks, but shorter preparation time increases the odds of catastrophe.
Long timelines: the current paradigm is not enough; therefore, CoT may stop being relevant, which may increase the odds of catastrophe. We get more time to prepare (which is good), but we may also get a faster takeoff than the current paradigm suggests. A discontinuous takeoff could therefore introduce significantly more risk despite the longer timelines.
So, perhaps counterintuitively for some, you could have these two groups:
1. Slow (smooth, non-discontinuous) takeoff, low p(doom), takeoff happens in the next couple of years. [People newer to AI safety seem more likely to expect this, imo]
Vs.
2. Fast takeoff (discontinuous capability increase w.r.t. time), high p(doom), (actual) takeoff happens in 8–10 years. [Seems more common among the MIRI / traditional AI safety researcher cluster]
I’m not saying those are the only two groups, but I think the split speaks to how differently people are reading the current state of progress and safety.
As a result, I think it’s pretty important to gain better clarity on whether we expect the current paradigm to scale without fundamental changes, and, if not, to understand what would come after it and how it would change the risks.
That’s not to say we shouldn’t weigh short timelines more heavily because they are more immediate, but there are multiple terms to weigh here, not just time-to-takeoff.