"Say we have intelligences that are narrowly human / superhuman on every task you can think of (which, for what it’s worth, I think will happen within 5-10 years). How long before we have self-replicating factories? Until foom? Until things are dangerously out of our control? Until GDP doubles within one year? In what order do these things happen?" (source)

When discussing Takeoff Speeds, I feel the debate often gets stuck in some kind of false dichotomy between Fast and Slow, where the crux seems to be about whether some self-improving AI would be able to foom without human assistance.

Instead, we could get a Moderate Takeoff (think months or years), where AI does not self-improve by itself. Rather, there would be a reinforcing feedback loop in which progress in AI makes AI increasingly useful for making further progress in AI, with humans in the loop at all times.

On top of that, things might just happen privately at some AI lab for a few months until the AI is able to foom by itself, which would look like foom to everyone outside that lab.

AI Helping Humans with AI

In Superintelligence, takeoff is defined as the period between AGI and superintelligence. In this post, I will be using as takeoff's starting point the first "AI Helping Humans with AI" (in a meaningful way), or AIHHAI for short, since it will arise before we get fully general intelligence and accelerate AI progress. Here are some examples of what I have in mind for "helping humans in a meaningful way":

  • GPT-N that you can prompt with "I am stuck with this transformer architecture trying to solve problem X". GPT-N would be AIHHAI if it answers along the lines of "In this arXiv article, they used trick Z to solve problems similar to X. Have you considered implementing it?", and using an implementation of Z would solve X >50% of the time.
  • Another example would be a code generation tool like Copilot making ML engineers substantially more productive at writing ML code. Predicting and measuring productivity is tricky, but it would involve something like engineers accepting code suggestions far more often, say 100x more than engineers using Copilot currently do. (One hypothetical way to operationalize this threshold is sketched below.)
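For concreteness, here is a minimal sketch of how such a threshold could be measured from usage logs: track how often suggestions are accepted unmodified, and how often a suggested fix actually resolves the problem the engineer was stuck on. The event fields and threshold values below are made-up assumptions for illustration, not any real Copilot or Codex API.

```python
# Hypothetical telemetry check for "AI meaningfully helping humans with AI".
# Event schema and thresholds are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SuggestionEvent:
    accepted: bool        # engineer kept the suggestion unmodified
    resolved_issue: bool  # the suggested change actually solved the problem

def is_aihhai(events: list[SuggestionEvent],
              min_accept_rate: float = 0.3,   # assumed threshold
              min_resolve_rate: float = 0.5,  # assumed threshold (the ">50%" above)
              ) -> bool:
    """Crude check: does the tool cross both thresholds on logged usage?"""
    if not events:
        return False
    accept_rate = sum(e.accepted for e in events) / len(events)
    resolve_rate = sum(e.resolved_issue for e in events) / len(events)
    return accept_rate >= min_accept_rate and resolve_rate >= min_resolve_rate
```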

(Sidenote: My general impression from people using Copilot is that they believe it's becoming increasingly useful and use it on a daily basis, though it rarely auto-completes the right line of code right away. Given that we got Codex/Copilot last year, and that Sam Altman hinted at some new Codex capabilities in his ACX Q&A[2], I think we will get some impressive release for Copilot/Codex sometime this year that most engineers will want to use. (Similar to how a lot of developers are used to using Visual Studio's suite, especially IntelliSense.) The model I have in mind for "AI helping humans with AI" could be this one, though it will probably require another 1-2 iterations.)

Moderate Takeoff

A Moderate Takeoff is defined[1] as "one that occurs over some intermediary temporal interval, such as months or years". For AIHHAI, we can distinguish two cases:

  1. AIHHAI is developed by some AI lab working on it privately. That lab has a lead compared to the other labs, since they are working more productively using AIHHAI. Thus, they might reach superintelligence first, without allowing enough time for the rest of the world to compete.
  2. AIHHAI is made public, or quickly (think months) reproduced by others publicly or privately. In any case, some AIHHAI model is eventually made public, and there is not only one group using AIHHAI--other companies are adopting the same strategy (multipolar scenario).

For the first case, you can think of OpenAI using a new version of Copilot internally, one that enables their team to quickly build another, even better version of Copilot without releasing the intermediate stages publicly. They would already have a couple of months of engineering lead time, and after using their latest version, the lead (in terms of how long it would take a competitor to catch up using publicly available tools) would keep increasing due to the compounding advantage.

For the second case, you could consider a similar scenario where they do a public release, or where other labs like Google Research build something equivalent a couple of months later. Even if it is not public, the thing might be so impressive that employees talk about it to their friends or anonymously, and the news eventually leaks. In that regime, many companies could be expanding code generation capabilities, possibly by scaling models and datasets aggressively. The AI race becomes an engineering race, though we might need more scientific breakthroughs for scaling laws to continue (for more on this, see the section "Why Code Generation might Plateau Instead" at the end).

It is unclear to me which case is more likely. On the one hand, the usefulness of AIHHAI would cause some rapid self-improvement of the system {humans developing the AI + AI}, and the pace might be so quick that the model would not have time to leak. On the other hand, results exciting enough to drive that pace also increase the probability that the news leaks and that other players start (or have already started) investing heavily in similar models.

Self-improving (Humans + AI)

One thing I have not seen discussed a lot is how the system "humans + AI" could have different takeoff speeds, where, for this hybrid system, takeoff would basically mean "going from {human + AIHHAI} to {human + superintelligent AI}".

Note that there are many such systems we could study, such as:

  • System 1. {All humans + all used resources, including ML models and science}
  • System 2. {All humans working on code generation + all resources they use}
  • System 3. {Employees working on Copilot-(K+1)/GPT-(N+1) + Copilot-K/GPT-N}

The importance of closed systems

Thinking about how "closed" or "small" a system is helps us to understand its kinetics, and also has some implications regarding AI races.

Indeed, a small system using only its own output as input could foom independently, without encountering major bottlenecks. Conversely, if your system requires a lot of insights from mathematics or engineering to overcome bottlenecks, foom becomes less likely. However, a smaller system with fewer humans also has fewer resources, labor-wise.

With regard to races, if your system does not require the output of other disciplines to make progress, you could keep it private for longer. (If the system required a lot of insights, publishing preliminary results about the system could prove necessary to get the outside world to publish research relevant to your system.) In practice:

  • System 1 is a closed system. Thinking about how fast it would improve basically brings us back to "when will GDP double in a year" territory. The abstraction is not precise enough to give insights about kinetics without basically studying macro-economics.
  • System 2 is not closed, since it uses insights from other disciplines in CS, math, and elsewhere as inputs. That said, research in code generation directly helps the humans doing work relevant to code generation (assuming they use it).
  • System 3 would also definitely need to take research and tools from elsewhere as inputs, though you could assume that, as N gets bigger, most of the insights on how to debug deep learning models would actually be fed into Copilot-N's training data via telemetry (or would be accessible via GPT-N's Q&A interface).

Among the systems presented above, System 3 could experience exponential self-improvement in complete stealth mode and is therefore worth studying in more detail.

Self-improving Code Generation

I am especially interested in System 3 (= "Employees working on Copilot-(K+1)/GPT-(N+1) + Copilot-K/GPT-N"), because progress in AIHHAI straightforwardly leads to productivity increases in developing AIHHAI.

Let's imagine that Copilot-N successfully auto-completes 1% of code lines, and that, for the sake of argument, people immediately press "Tab" to move to the next line in those cases. Setting aside the fact that the auto-completed parts would actually be the easiest parts of the developer's pre-existing workload, this would make developers ~1% more productive.

You would get a 1.01x multiplier on productivity, which would make development 1.01x faster, in particular the development of Copilot-(N+1), which would in turn auto-complete more lines than what we started with (say 2% instead of 1%), and so on.
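To make the compounding explicit, here is a toy simulation of that loop: each generation of the tool raises developer productivity a little, which shortens the development time of the next generation, which in turn auto-completes a larger fraction of lines. The starting rate, the effort per version, and the assumption that each version doubles the auto-complete rate are illustrative guesses, not estimates.

```python
# Toy model of the {developers + Copilot-N} feedback loop.
# All numbers are illustrative assumptions, not empirical estimates.

def simulate(generations=10,
             base_dev_years=1.0,       # effort to build the next version at 1x productivity
             autocomplete_rate=0.01,   # fraction of lines Copilot-N completes correctly
             gain_per_generation=2.0,  # assumed: each version doubles the auto-complete rate
             ):
    """Return (calendar_years_elapsed, autocomplete_rate) after each generation."""
    t = 0.0
    history = []
    for _ in range(generations):
        productivity = 1.0 + autocomplete_rate     # 1% auto-complete ~ 1.01x productivity
        t += base_dev_years / productivity         # the next version ships slightly sooner
        autocomplete_rate = min(1.0, autocomplete_rate * gain_per_generation)
        history.append((round(t, 2), autocomplete_rate))
    return history

if __name__ == "__main__":
    for years, rate in simulate():
        print(f"t = {years:5.2f} years, auto-complete rate = {rate:.1%}")
```

Under these toy numbers the time saved per generation stays small for quite a while; the loop only becomes dramatic if the productivity multiplier climbs well past 1.01x, which is what the rest of this section is about.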

Obviously, the Copilot we have right now is still pretty rudimentary. It is mostly useful for beginners to an API or language, not for doing cutting edge PyTorch development. And you could say that a lot of ML work is done outside of coding, like reading papers and building infrastructure. (More on this in my Codex Skeptic FAQ).

I agree that improvements in productivity from AI are currently marginal, though one should consider what those improvements might be for future versions, including things like a question-answering GPT-N helping to debug high-level problems. It is also important to keep in mind that engineers from many different fields are currently using Copilot regularly, and could benefit more from code generation than ML engineers do (think web programmers). Those engineers would in turn accelerate GDP growth, which would increase total investment in AI.

How that would actually lead to Foom

When we get from marginal improvements in code generation to a Q&A language model that helps you reconsider your transformer architecture, the productivity gains will start to be more substantial.

Assuming we are in the scenario where one company (think OpenAI) has access to increasingly better code generation tools (that no one else has access to), and possibly also some lead in terms of useful language models to debug their tools, they might get a bigger and bigger lead in how useful their AIHHAI is.

At some point, you would be able to ask more open-ended questions, solving harder and harder tasks, up to complex things like making money in financial markets or setting strategy for the entire company. In a matter of months, the company would achieve extraordinary economic output, re-investing everything into AIHHAI.

Eventually, the AIHHAI would be optimizing developer productivity over some time horizon, not just completing the next line of code. When something like planning is implemented (eg. expected reward maximization), the AIHHAI might just Foom by modifying its own code to generate code better.

Why Code Generation might Plateau Instead

As Kaplan mentions in his recent talk about the implications of Scaling Laws for Code Generation, current progress is bottlenecked by:

  1. Data available. If you remove duplicates, you have about 50B tokens of Python code on GitHub. In comparison, GPT-3 was trained on about 300B tokens. You could possibly do data augmentation or transfer learning to bypass this problem, though Kaplan also guesses that the AlphaCode researchers were bottlenecked by dataset size when scaling things up. On top of that, the Chinchilla paper shows that scaling data about as much as model size is also necessary for compute-optimal training.
  2. Writing longer programs. Assuming a constant error rate when writing your program token by token, you get an exponential decay in how likely the program is to solve the problem. (They tested this by asking a model to write longer programs doing essentially the same thing, and they got an exponentially worse "pass rate"; see the short sketch after this list.) Therefore, asking Codex to write very long programs might plateau even when scaling models, at least with our current methods. (Kaplan mentions that a workable method would probably involve doing what humans do, i.e. writing bad code and iterating until it works, instead of asking the model to write one long piece of code in a single shot.)
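To spell out the decay in point 2: if each generated token is independently correct with probability 1 − ε, a program of L tokens passes with probability roughly (1 − ε)^L, i.e. the pass rate falls off exponentially with program length. A minimal illustration (the per-token error rates below are assumed for the example, not taken from the talk):

```python
# Pass rate under a constant per-token error rate (illustrative assumption).
def pass_rate(per_token_error: float, num_tokens: int) -> float:
    """P(program is correct) if each token is independently correct w.p. 1 - per_token_error."""
    return (1.0 - per_token_error) ** num_tokens

for eps in (0.001, 0.01):
    for length in (10, 100, 1000):
        print(f"error/token = {eps:.3f}, length = {length:4d} tokens"
              f" -> pass rate ~ {pass_rate(eps, length):.3f}")
```

This is why the suggested workaround is iterative (write, run, repair) rather than sampling one long program in a single shot.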

Conclusion

  1. Moderate Takeoffs (think months) are a useful abstraction to think about scenarios between Foom and Slow Takeoffs (years, decades).
  2. When discussing Takeoff speed, it is worth noting that progress can be heterogeneous between what happens privately and publicly, especially as we get closer to superintelligence. This is especially true when considering humans will be using the AIs they developed to build AI even faster.
  3. More generally, discussion on Takeoff Speed has historically focused on whether an AI would be able to Foom, when in practice there will be an intermediate regime where the system {the humans building the AI + the AI} will self-improve, not the AI by itself.
  4. Even if this intermediate regime might, through compounding progress, lead to Foom, our current understanding of scaling laws suggests that we will soon be bottlenecked by dataset size and by limits on how long generated programs can usefully be.

(Acknowledgements: thanks to the edits suggested by Justis, facilitated by Ruby.)

  1. ^

    (Superintelligence, Bostrom) Chapter 4.

  2. ^

    Sam did a Q&A for an Astral Codex Ten meetup in September 2021. I will not be linking to the post doing a recap of what he said since it was taken down from LW.

    Sam's take on Codex was summarized in the post as: a) current codex is bad compared to what they will have next b) they are making fast progress c) Codex is <1y away from having a huge impact on developers.

Comments
Raemon:

GPT-N that you can prompt with "I am stuck with this transformer architecture trying to solve problem X". GPT-N would be AIHHAI if it answers along the lines of "In this arXiv article, they used trick Z to solve problems similar to X. Have you considered implementing it?", and using an implementation of Z would solve X >50% of the time.

I haven't finished reading the post, but I found it worthwhile for this quote alone. This is the first description I've read of how GPT-N could be transformative. (Upon reflection this was super obvious and I'm embarrassed I didn't think of it myself – I got so distracted with arguments about whether GPT could do anything 'original' that I missed this strategy)

I recently chatted with someone who described an older researcher who had encyclopedic knowledge of AI papers (ML and otherwise), and who had seen a ton of papers exploring various tools and tricks for solving ML problems. The older researcher had the experience of watching many younger researchers reinvent the same tricks over and over, and being like "that's cool, but, like, there's a better version of this idea published 20 years ago, which not only does what your algorithm does but also solves SubProblem X better than you".

So it was interesting to note that the world is a bit bottlenecked by only having so many 'living libraries' able to keep track of the deluge of research, and that this is something I'd expect GPT to be good enough at to meaningfully help with.

This was one of the central points in the CAIS technical report.

Thanks for the pointer. Any specific section / sub-section I should look into?

Section 1, section 10, and section 11 cover the scenario of R&D automation via AI/ML systems that drive more productive R&D automation, resulting in a positive feedback loop, without requiring the typical "self-improving agent" -- it's the R&D system (people + AI/ML products) as a whole that is self-improving, not the individual AI/ML systems.

I highly recommend reading the entire report though. It was released in 2019 and I think it was brushed aside a little bit too easily. The past 3 years have (in my mind) provided sufficient evidence of things that CAIS directly predicted would happen, e.g. all of the AI/ML systems we've developed recently that have reached super-human competency on tasks despite a lack of generalized learning or capabilities or "self-improvement" or other recognizably "general intelligence" / "agent"-like behavior.

In 2019, we did not have Copilot, or DALL-E, or DALL-E-2, or AlphaFold, or DeepMind Ithaca, or GPT-3 -- etc.

I talk about this a little bit in my comment here.

I think you're confounding two questions:

  1. Does AIHHAI accelerate AI?
  2. If I observe AIHHAI does this update my priors towards Fast/Slow Takeoff?

 

I think it's pretty clear that AIHHAI accelerates AI development (without Copilot, I would have to write all those lines myself).

 

However, I think that observing AIHHAI should actually update your priors towards Slow Takeoff (or at least Moderate Takeoff). One reason is that humans are inherently slower than machines, and as Amdahl reminds us, if something is composed of a slow part and a fast part, it cannot go faster than the slow part.

 

The other reason is that AIHHAI should cause you to lower your belief in a threshold effect. The original argument for Foom went something like "if computers can think like humans, and one thing humans can do is make better computers, then once computers are as smart as humans, computers will make even better computers... ergo foom." In other words, Foom relies on the belief that there is a critical threshold which leads to an intelligence explosion. However, in a world where we observe AIHHAI, this is direct evidence against such a critical threshold, since it is an example of a sub-human intelligence helping a human-level intelligence to advance AI.

 

The alternative model to Foom is something like this: "AI development is much like other economic growth, the more resources you have, the faster it goes."  AIHHAI is a specific example of such an economic input, where spending more of something helps us go faster.

Maybe, but couldn't it also mean that we just haven't reached the threshold yet? Some period of AIHHAI might be a necessary step, or a catalyst, toward that threshold; encountering AIHHAI doesn't by itself imply that no such foom threshold exists.

Well, I agree that if the two worlds I had in mind were 1) foom without real AI progress beforehand and 2) continuous progress, then seeing more continuous progress from increased investments should indeed update me towards 2).

The key parameter here is the substitutability between capital and labor: to what extent is human labor the bottleneck, versus capital? From different substitutability assumptions you can infer different growth trajectories. (For a paper / video on this, see the last paragraph here.)

The world in which DALL-E-2 happens and people start using GitHub Copilot looks to me like a world where human labour is substitutable by AI labour, which right now essentially means being part of the GitHub Copilot open beta, but in the future might look like capital (paying for the product, or investing in building the technology yourself). My intuition right now is that big companies are more bottlenecked by ML talent than by capital (cf. the "are we in AI overhang" post explaining how much more capital Google could invest in AI).

Yes, I definitely think that there is quite a bit of headroom in how much more capital businesses could be deploying. GPT-3 is ~$10M, whereas I think that businesses could probably do 2-3 OOM more spending if they wanted to (and a Manhattan project would be more like 4 OOM bigger / $100B).

TLW:

You would get a 1.01x multiplier on productivity, which would make development 1.01x faster, in particular the development of Copilot-(N+1),

...assuming that Copilot-(N+1) has <1.01x the development cost of Copilot-N. I'd be interested in arguments as to why this would be the case; most programming has diminishing returns where e.g. eking out additional performance from a program costs progressively more development time.

Some arguments for why that might be the case:

-- the more useful it is, the more people use it, the more telemetry data the model has access to

-- while scaling laws do not exhibit diminishing returns from scaling, most of the development time would be on things like infrastructure, data collection and training, rather than aiming for additional performance

-- the higher the performance, the more people get interested in the field and the more research there is publicly accessible to improve performance just by implementing what is in the literature. (Note: this argument does not apply to the scenario where one company makes a lot of progress without ever sharing any of it.)

TLW:

Interesting!

Could you please explain why your arguments don't apply to compilers?

The first two are about data, and as far as I know compilers do not use machine learning on data.

The third one could technically apply to compilers, though I think in ML there is a feedback loop of "impressive performance -> investments in scaling -> more research", whereas you cannot just throw more compute at a compiler to increase its performance (and the results are less mainstream, less of a public PR thing).

Kredan:

Slow Takeoffs (years, decades).

The optimal strategy will likely substantially change depending on whether takeoffs happen over years or decades so it might make sense to conceptually separate these time scales. 

when in practice there will be an intermediate regime where the system {the humans building the AI + the AI}

It seems that we are already in this {H,AI} regime, and probably have been for as long as the system {H,AI} has existed (although with different growth dynamics and with a growing influence of AI helping humans)?

The system {H,AI} is already self-improving, with humans using AI to make progress in AI, but one question is, for example, whether the system {H,AI} will keep improving at an accelerating rate until AI can foom autonomously, or whether we will see diminishing returns and then some re-acceleration.

More specifically, models of the interaction between AI labor (researchers + engineers) and AI capability tech (compute, models, datasets, environments) in growing AI capability tech are the sort of models that would be useful to make more crisp and to check empirically.

I agree that we are already in this regime. In the section "AI Helping Humans with AI" I tried to make it more precise at what threshold we would see a substantial change in how humans interact with AI to build more advanced AI systems. Essentially, it will be when most people use those tools most of the time (like on a daily basis) and observe substantial gains in productivity (like using some oracle to make a lot of progress on a problem they are stuck on, or Copilot auto-completing many of their lines of code without manual edits). The intuition for the threshold is that "most people would need to use" the tools.

Re diminishing returns: see my other comment. In summary, if you just consider one team building AIHHAI, they would get more data and research as input from the outside world, and they would get increases in productivity from using more capable AIHHAIs. Diminishing returns could happen if: 1) scaling laws for coding AI no longer hold, 2) we are not able to gather coding data (or do other tricks like data augmentation) at a high enough pace, 3) investments for some reason do not follow, or 4) there are hardware bottlenecks in building larger and larger infrastructure. For now I have only seen evidence for 2), and this seems like something that can be solved via transfer learning or new ML research.

Better modeling of those different interactions between AI labor and AI capability tech is definitely needed. For a high-level picture that mostly thinks about substitutability between capital and labor, applied to AI, I would recommend this paper (or video and slides). The equation that is closest to self-improving {H,AI} would be this one.
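(Since I cannot reproduce the paper's exact equation here, the generic CES production function below is just one standard way to write down this kind of capital-labor substitutability; treat it as an illustration rather than the paper's own formulation.)

$$Y = A\left[\alpha K^{\rho} + (1-\alpha)L^{\rho}\right]^{1/\rho}, \qquad \sigma = \frac{1}{1-\rho}$$

Here Y is output (progress in AI capabilities), K is AI capital (compute, models, tooling), L is human labor, and σ is the elasticity of substitution: the larger σ is, the more AI capital can substitute for human researchers, and the less the slow human part of {H,AI} bottlenecks the loop.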