When an AGI recursively self-improves, is it improving just its software, or is it improving the hardware too? Is it acquiring more hardware (e.g. by creating a botnet on the internet)? Is it making algorithmic improvements? Which improvements are responsible for the biggest order-of-magnitude increases in the AI's total power?

I'm going to offer a four-factor model of software performance. I bring this up because I'm personally skeptical about the possibility of FOOM. Modern machine learning is just software, and a great deal of effort has already gone into improving all four factors, so it's not obvious to me that there are many orders of magnitude left to be gained very quickly. Of course, it's possible that future AGI will be so exotic that this four-factor model doesn't apply. (Presumably such an AGI would run on application-specific hardware, such as neuromorphic hardware.) You don't have to use this model in your answer.

My Four Factor Model of Software Performance

  1. How performant are the most critical algorithms in the software, from a pure computer science perspective? (This is what big-O notation captures. "Critical" here means the part of the software where most of the running time is spent. A toy code sketch contrasting this factor with factor 2 follows the list.)

  2. How well optimized is the software for the hardware? (This is usually about memory and cache behaviour. There is also the question of which CPU instructions to use, which modern programmers usually leave to the compiler. In the old days, game devs would hand-write the most critical parts in assembly to maximize performance. Vectorization also falls in this category.)

  3. How well optimized is the hardware for single-threaded performance? (Modern CPUs have already hit a limit here, although significant improvements can still be made with application-specific hardware.)

  4. How much parallel processing is possible and available? (This is limited by the algorithms, the software architecture, and the hardware. In practice, parallelism delivers only a fraction of its theoretical benefit, because of the difficulty and complexity involved. Amdahl's Law puts a hard limit on the benefits of parallelism; the speedup formula is sketched after this list. There is also a speed-of-light limitation, but this only matters if the system is geographically distributed, i.e. a botnet.)
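
To make factors 1 and 2 concrete, here is a minimal, purely illustrative Python sketch (the function names and examples are my own inventions, not anything from an actual AI system). The first pair contrasts an O(n^2) algorithm with an O(n) one (factor 1); the second pair contrasts a plain Python loop with a vectorized NumPy call that makes better use of the hardware (factor 2).

```python
import numpy as np

# Toy illustration only (not from the post).

# Factor 1: algorithmic improvement, visible in big-O terms.
# Counting duplicate occurrences by scanning all earlier elements is O(n^2)...
def count_duplicates_quadratic(xs):
    return sum(1 for i, x in enumerate(xs) if x in xs[:i])

# ...while a single hash-based pass over the data is O(n).
def count_duplicates_linear(xs):
    seen, dupes = set(), 0
    for x in xs:
        if x in seen:
            dupes += 1
        seen.add(x)
    return dupes

# Factor 2: same algorithm, better use of the hardware.
# A Python-level loop pays interpreter overhead on every element...
def dot_product_loop(a, b):
    total = 0.0
    for x, y in zip(a, b):
        total += x * y
    return total

# ...while NumPy dispatches to vectorized, cache-friendly native code.
def dot_product_vectorized(a, b):
    return float(np.dot(a, b))
```

Both members of each pair compute the same answer; they differ only in how much work, or how hardware-friendly the work, is per element.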
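For factor 4, Amdahl's Law gives the ceiling explicitly: if a fraction p of the work can be parallelized across n workers, the overall speedup is S(n) = 1 / ((1 - p) + p/n), which approaches 1 / (1 - p) no matter how large n gets. A tiny sketch (my own illustration, with made-up numbers):

```python
def amdahl_speedup(p, n):
    """Overall speedup when a fraction p of the work is spread across n workers."""
    return 1.0 / ((1.0 - p) + p / n)

# Even if 95% of the work parallelizes perfectly, the speedup is capped near 20x:
print(amdahl_speedup(0.95, 16))         # ~9.1x
print(amdahl_speedup(0.95, 1_000_000))  # ~20x, approaching 1 / (1 - 0.95)
```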

Again, the question is: what is being improved during recursive self-improvement?


Answers

Thomas Kwa


I think in the FOOM story, most of the fast improvement is factor 1. The world's best mathematicians run on similar hardware to mediocre mathematicians, and can prove theorems thousands of times faster.

JBlack


When an AGI recursively self-improves, is it improving just its software, or is it improving the hardware too? Is it acquiring more hardware (e.g. by creating a botnet on the internet)? Is it making algorithmic improvements? Which improvements are responsible for the biggest order-of-magnitude increases in the AI's total power?

Any or all of the above. I do expect better software to be the first, and probably the most important, step. There are lots of possible scenarios, but the most dangerous seem to be those where the AI can greatly improve its software to make much better use of existing hardware.

We are almost certainly not using anywhere near the best possible algorithms for turning computing power into intelligent behaviour, probably by many orders of magnitude, in the sense that the same things could be achieved with vastly less computation on the same hardware.

This is an area where intelligence only slightly beyond the best human capability might enable enormous yet silent advances in capability, with very few physical constraints on how fast the transition can proceed. For such improvements, it may not even be meaningful to talk about "orders of magnitude": a new type of design might achieve things that were impossible with the previous structure, no matter how much extra compute we threw at it.

It also doesn't have to be self-improvement; that's just one of the stories that's easier to explain. A narrowly superintelligent tool AI that devises a better way for humans to design AIs could end up just as disastrous. Likewise a weak agent-like superintelligence that doesn't have self-preservation as a major goal, but is fine with designing a strong ASI that will supersede it.

Once an ASI is well past the capability of any human, what it can do is by definition not knowable. For an agent-like system with instrumental self-preservation, removing its dependence upon existing hardware seems very likely; there are many paths that even humans can devise that would achieve that. This step would probably be slower, but again it isn't knowable to us.

Creating more and better hardware also seems an obvious step, since we almost certainly have not designed the best possible hardware either. What form the better hardware takes is also not knowable, but there are plenty of candidates we know about, and certainly others we don't. We do know that even with existing types of computing hardware we are nowhere near the physical limits on total computing capability, only the economic ones. An extra ten orders of magnitude in computing capability seems like a reasonable lower bound on what could be achieved.

avturchin


It could happen on several levels; see my post about it.

Comments

0. How accurate is it? Factors 1-4 all seem to be about how fast we can get the answer that a given algorithm would produce, including by switching to a new algorithm that reaches the same answer. But it's also important, maybe primarily so, to get a better answer.

Yes, and that gets into another aspect of my skepticism about AI risk. More thinking is not necessarily better thinking.

EDIT: I just realized that I'm the one who is smuggling in the assumption that RSI refers to speed improvements. So I guess the deeper question is: where does more and/or better thinking come from? And, if we're talking about better, how does the AGI know what "better" is?

I didn't address it in my main response, but Amdahl's Law is far from a hard limit on anything. There are enormous numbers of cases where we thought there was a hard limit due to Amdahl's Law, but the part we believed was irreducibly serial turned out to be massively parallelizable with a different approach.
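
A concrete example of that pattern (my own illustration, not from the comment above): a running prefix sum looks irreducibly serial because each output depends on the previous one, yet the same result can be computed in O(log n) rounds in which every addition is independent of the others, i.e. parallelizable (a Hillis-Steele scan). A minimal Python sketch:

```python
def prefix_sum_serial(xs):
    """Each output depends on the previous one -- looks irreducibly serial."""
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

def prefix_sum_scan(xs):
    """Hillis-Steele scan: the same result in O(log n) rounds; within a round,
    every addition is independent and could run on a separate processor."""
    out = list(xs)
    shift = 1
    while shift < len(out):
        out = [out[i] + (out[i - shift] if i >= shift else 0)
               for i in range(len(out))]
        shift *= 2
    return out

assert prefix_sum_serial([1, 2, 3, 4]) == prefix_sum_scan([1, 2, 3, 4]) == [1, 3, 6, 10]
```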