For a long time, when I heard "slow takeoff", I assumed it meant "takeoff that takes longer calendar time than fast takeoff" (i.e., the distinction now more often referred to as "short timelines" vs. "long timelines"). I think Paul Christiano popularized the term, and it so happened that he expected both longer timelines and a smoother/more continuous takeoff.

I think it's at least somewhat confusing to use the term "slow" to mean "smooth/continuous", because that's not what "slow" usually means.

I think it's even more actively confusing because a "smooth/continuous" takeoff not only could be faster in calendar time, but I'd weakly expect it to be faster on average, since a smooth takeoff means that AI resources at a given time are feeding into more AI resources, whereas a sharp/discontinuous takeoff tends to mean "AI tech doesn't get seriously applied towards AI development until towards the end."

I don't think this is academic[1].

I think this has wasted a ton of time on LessWrong, with people arguing past each other, and if "slow/fast" is the terminology that policymakers hear as they start to tune into the AI situation, it will predictably confuse them, at least waste their time, and quite likely lead many of them to approach the situation through misleading strategic frames that conflate smoothness and timelines.

Way back in Arguments about fast takeoff, I argued that this was a bad term, and proposed that "smooth" and "sharp" takeoff were better terms. I'd also be fine with "hard" and "soft" takeoff. I think "hard/soft" has somewhat more historical use, and is maybe less likely to get misheard as "short", so maybe use those.[2]

I am annoyed that 7 years later people are still using "slow" to mean "maybe faster than 'fast'." This is stupid. Please stop. I think smooth/sharp and hard/soft are both fairly intuitive (at the very least, more intuitive than slow/fast, and people who are already familiar with the technical meaning of slow/fast will figure it out).

I would be fine with "continuous" and "discontinuous", but, realistically, I do not expect people to stick to those because they are too many syllables. 

Please, for the love of god, do not keep using a term that people will predictably misread as implying longer timelines. I expect this to have real-world consequences. If someone wants to operationalize a bet about it having significant real-world consequences I would bet money on it.

Curves
[Figure: the graph I posted in response to Arguments about fast takeoff]

  1. ^ A term that ironically means "pointlessly pedantic."

  2. ^ The last time I tried to write this post, 3 years ago, I got stuck on whether to argue for smooth/sharp or hard/soft, and then I didn't end up posting it at all, which I regret.

Comments

Okay, since I didn't successfully get buy-in for a particular term before writing this post, here's a poll to agree/disagree vote on. (I'm not including Fast/Slow as an option by default, but you can submit other options here, and if you really want to fight for preserving it, that seems fine.)


Smooth/Sharp takeoff

Predictable/Unpredictable takeoff

Long/short takeoff

Long duration/short duration takeoff

Continuous/Discontinuous takeoff

Soft/Hard Takeoff

Fast/Slow takeoff

Gradual/hard

Gradual/Abrupt

I don't love "smooth" vs "sharp" because these words don't naturally point at what seems to me to be the key concept: the duration from the first AI capable of being transformatively useful to the first system which is very qualitatively generally superhuman[1]. You can have a "smooth" takeoff driven by purely scaling things up where this duration is short or nonexistent.

I also care a lot about the duration from AIs which are capable enough to 3x R&D labor to AIs which are capable enough to strictly dominate (and thus obsolete) top human scientists but which aren't necessarily much smarter. (I also care some about the duration between a bunch of different milestones, and I'm not sure that my operationalizations of the milestones are the best ones.)

Paul originally operationalized this as seeing an economic doubling over 4 years prior to a doubling within a year, but I'd prefer for now to talk about the qualitative level of capabilities rather than also entangling questions about how AI will affect the world[2].
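
For calibration, here's a rough sketch of the annual growth rates those doubling times correspond to (the doubling thresholds are Paul's; the arithmetic below is just an illustrative gloss, not something from this thread):

```python
# Annualized growth rate implied by a doubling over a given number of years.
def annual_growth_rate(doubling_time_years: float) -> float:
    return 2 ** (1 / doubling_time_years) - 1

print(f"4-year doubling ~ {annual_growth_rate(4):.0%}/year")  # ~19%/year
print(f"1-year doubling = {annual_growth_rate(1):.0%}/year")  # 100%/year
```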

So, I'm tempted by "long duration" vs "short duration" takeoff, though this is pretty clumsy.


Really, there are a bunch of different distinctions we care about with respect to takeoff and the progress of AI capabilities:

  • As discussed above, the duration from the first transformatively useful AIs to AIs which are generally superhuman. (And from very useful AIs to top human scientist level AIs.)
  • The duration from huge impacts in the world from AI (e.g. much higher GDP growth) to very superhuman AIs. This is like the above, but also folding in economic effects and other effects on the world at large which could come apart from AI capabilities even if there is a long duration takeoff in terms of capabilities.
  • Software-only singularity. How much the singularity is downstream of AIs working on hardware (and energy) vs just software. (Or whether something well described as a singularity even happens.)
  • Smoothness of AI progress vs jumpiness. As in, is progress driven by a larger number of smaller innovations and/or continuous scale-ups, rather than being substantially driven by a small number of innovations and/or large phase changes that emerge with scale?
  • Predictability of AI progress. Even if AI progress is smooth in the sense of the prior bullet, it may not follow a very predictable trend if the rate of innovations or scaling varies a lot.
  • Tunability of AI capability. Is it possible to get a full sweep of models which continuously interpolates over a range of capabilities?[3]

Of course, these properties are quite correlated. For instance, if the relevant durations for the first bullet are very short, then I also don't expect economic impacts until AIs are much smarter. And, if the singularity requires AIs working on increasing available hardware (software only doesn't work or doesn't go very far), then you expect more economic impact and more delay.


  1. One could think that there will be no delay between these points, though I personally think this is unlikely. ↩︎

  2. In short timelines, with a software only intelligence explosion, and with relevant actors not intentionally slowing down, I think I don't expect huge global GDP growth (e.g. 25% annualized global GDP growth rate) prior to very superhuman AI. I'm not very confident in this, but I think both inference availability and takeoff duration point to this. ↩︎

  3. This is a very weak property, though I think some people are skeptical of this. ↩︎

I think "long duration" is way too many syllables, and I have similar problems with this naming scheme as with Fast/Slow, but if you were going to go with it, I think just saying "short takeoff" and "long takeoff" seems about as clear ("duration" comes implied, IMO).

I don't love "smooth" vs "sharp" because these words don't naturally point at what seems to me to be the key concept: the duration from the first AI capable of being transformatively useful to the first system which is very qualitatively generally superhuman[1]. You can have a "smooth" takeoff driven by purely scaling things up where this duration is short or nonexistent.

I'm not sure I buy the distinction mattering?

Here's a few worlds:

  1. Smooth takeoff to superintelligence via scaling the whole way, no RSI
  2. Smooth takeoff to superintelligence via a mix of scaling, algorithmic advance, RSI, etc
  3. smoothish looking takeoff via scaling (like we currently see) but then suddenly the shape of the curve changes dramatically due to RSI or similar
  4. smoothish looking takeoff via scaling like we see, and then RSI is the mechanism by which the curve continues, but not very quickly (maybe this implies the curve actively levels off S-curve style before eventually picking up again)
  5. alt-world where we weren't even seeing similar types of smoothly advancing AI, and then there's abrupt RSI takeoff in days or months
  6. alt-world where we weren't seeing similar smooth scaling AI, and then RSI is the thing that initiates our current level of growth

At least with the previous way I'd been thinking about things, for the worlds above that look smooth, I feel like "yep, that was a smooth takeoff."

Or, okay, I thought about it a bit more and maybe agree that "time between first transformatively-useful AI and superintelligence" is a key variable. But I also think that variable is captured by saying "smooth takeoff / long timelines" (which is approximately what people are currently saying?).

Hmm, I updated towards being less confident while thinking about this.

Long takeoff and short takeoff sound strange to me. Maybe because they are too close to long timelines and short timelines.

Yeah I think the similarity of takeoff and timelines is maybe the real problem.

Like, if ‘takeoff’ wasn’t two syllables starting with T, I might be happy with ‘short/long’ being the prefix for both.

But I also think that variable is captured by saying "smooth takeoff / long timelines" (which is approximately what people are currently saying?).

You can have a smooth and short takeoff with long timelines. E.g., imagine that scaling works all the way to ASI but requires a ton of bare-metal flop (e.g. 1e34), implying longer timelines, and that early transformative AI requires almost as much flop (e.g. 3e33), such that these events are only 1 year apart.
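
To spell out the arithmetic in that hypothetical (the ~3.5x/year effective-compute growth rate below is an illustrative assumption, not a figure from this thread):

```python
import math

# Illustrative thresholds from the hypothetical above, plus an assumed scaling rate.
flop_transformative = 3e33     # assumed flop for early transformative AI
flop_asi = 1e34                # assumed flop for ASI
compute_growth_per_year = 3.5  # assumed effective-compute growth factor per year

# Years needed to scale from the transformative-AI threshold to the ASI threshold.
gap_years = math.log(flop_asi / flop_transformative) / math.log(compute_growth_per_year)
print(f"takeoff duration ~ {gap_years:.1f} years")  # ~1.0 years
```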

I think we're pretty likely to see a smooth and short takeoff with ASI prior to 2029. Now, imagine that you were making this exact prediction, up through 2029, back in 2000. From the perspective of 2000, you are exactly predicting a smooth and short takeoff with long timelines!

So, I think this is actually a pretty natural prediction.

For instance, you get this prediction if you think that a scalable paradigm will be found in the future and will scale up to ASI, and that on this paradigm the delay between transformative AI and ASI will be short (either because the flop difference is small, or because flop scaling will be pretty rapid at the relevant point, since it is still pretty cheap, perhaps <$100 billion).

IMO, soft/smooth/gradual still convey the wrong impressions. They still sound like "slow takeoff": they sound like progress would be steady enough that normal people would have time to orient to what's happening, keep track, and exert control. As you're pointing out, that's not necessarily the case at all: from a normal person's perspective, this scenario may well look very sharp and abrupt.

The main difference in this classification seems to be whether AI progress occurs "externally", as part of economic and R&D ecosystems, or "internally", as part of an opaque self-improvement process within a (set of) AI system(s). (Though IMO there's a mostly smooth continuum of scenarios, and I don't know that there's a meaningful distinction/clustering at all.)

From this perspective, even continuous vs. discontinuous don't really cleave reality at the joints. The self-improvement is still "continuous" (or, more accurately, incremental) in the hard-takeoff/RSI case, from the AI's own perspective. It's just that ~nothing besides the AI itself is relevant to the process.

Just "external" vs. "internal" takeoff, maybe? "Economic" vs. "unilateral"?

I do agree with that, although I don't know that I feel the need to micromanage the implicature of the term that much. 

I think it's good to try to find terms that don't have misleading connotations, but also good not to fight too hard to control the exact political implications of a term, partly because there's not a clear cutoff between being clear and being actively manipulative (and it's not obvious to other people which you're being, especially if they disagree with you about the implications), and partly because there's a bit of a red queen race of trying to get terms into common parlance that benefit your agenda, and, like, let's just not.

Fast/slow just felt actively misleading.

I think the terms you propose here are interesting but a bit too opinionated about the mechanism involved. I'm not that confident those particular mechanisms will turn out to be decisive, and don't think the mechanism is actually that cruxy for what the term implies in terms of strategy.

If I did want to try to give it the connotations that actually feel right to me, I might say "rolling*" as the "smooth" option. I don't have a great "fast" one.

*although someone just said they found "rolling" unintuitive so shrug.

Assuming this is the important distinction, I like something like “isolated”/“integrated” better than either of those.

I think a problem with all the proposed terms is that they are all binaries, and one bit of information is far too little to characterize takeoff: 

  • One person's "slow" is >10 years, another's is >6 months. 
  • The beginning and end points are super unclear; some people might want to put the end point near the limits of intelligence, some might want to put the beginning point at >2x AI R&D speed, some at 10x, etc.
  • In general, a good description of takeoff should characterize capabilities at each point on the curve.  

So I don't really think that any of the binaries are all that useful for thinking or communicating about takeoff. I don't have a great ontology for thinking about takeoff myself to suggest instead, but in communication I generally just try to define a start and end point and then say quantitatively how long that period might take. One of the central ones I really care about is the time between wakeup and takeover capable AIs.

wakeup = "the first period in time when AIs are sufficiently capable that senior government people wake up to incoming AGI and ASI" 

takeover capable AIs = "the first time there is a set of AI systems that are coordinating together and could take over the world if they wanted to" 

The reason to think about this period is that (kind of by construction) it's the time when unprecedented government actions that matter could happen. And so, when planning for that sort of thing, this length of time really matters.

Of course, the start and end times I think about are both fairly vague. They also aren't purely a function of AI capabilities, and they depend on stuff like "who is in government" and "how capable our institutions are at fighting a rogue AGI". Also, many people believe that we will never get takeover capable AIs, even at superintelligence.

I was finding it a bit challenging to unpack what you're saying here. I think, after a reread, that you're using ‘slow’ and ‘fast’ in the way I would use ‘far away’ and ‘soon’ (i.e., referring to how far from the present it will occur). Is this read about correct?

If you’re trying to change the vocabulary you should have settled on an option.

I know, but last time I worried about that I ended up not writing the post at all, and it seemed better to make sure I published anything at all.

(edit: made a poll so as not to fully abdicate responsibility for this problem tho)