
Hard Takeoff

Eliezer_Yudkowsky | 02 December 2008 08:44PM

Continuation of: Recursive Self-Improvement

Constant natural selection pressure, operating on the genes of the hominid line, produced improvement in brains over time that seems to have been, roughly, linear or accelerating; the operation of constant human brains on a pool of knowledge seems to have produced returns that are, very roughly, exponential or superexponential.  (Robin proposes that human progress is well-characterized as a series of exponential modes with diminishing doubling times.)

Recursive self-improvement - an AI rewriting its own cognitive algorithms - identifies the object level of the AI with a force acting on the metacognitive level; it "closes the loop" or "folds the graph in on itself".  E.g. the difference between returns on a constant investment in a bond, and reinvesting the returns into purchasing further bonds, is the difference between the equations y = f(t) = m*t, and dy/dt = f(y) = m*y whose solution is the compound interest exponential, y = e^(m*t).
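To make the step from the differential equation to its solution explicit (a standard separation-of-variables calculation; the symbol y_0 for the initial holding is my notation, not from the post):

```latex
\frac{dy}{dt} = m\,y
\;\Longrightarrow\;
\int \frac{dy}{y} = \int m\,dt
\;\Longrightarrow\;
\ln y = m\,t + C
\;\Longrightarrow\;
y(t) = y_0\, e^{m t}.
```

With no reinvestment the payout rate is a constant, so accumulated returns grow linearly as y = m*t; reinvesting makes the payout rate proportional to the current total, and that proportionality is exactly what produces the exponential.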

When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM.  An exactly right law of diminishing returns that lets the system fly through the soft takeoff keyhole is unlikely - far more unlikely than seeing such behavior in a system with a roughly-constant underlying optimizer, like evolution improving brains, or human brains improving technology.  Our present life is no good indicator of things to come.
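As a toy illustration of the "peter out or FOOM" dichotomy - this model and every parameter in it are my own assumptions, not anything from the post - suppose a capability level c improves each step by an amount proportional to c**alpha, where alpha stands in for the returns curve that the feedback loop folds in on itself. For alpha below 1, growth falls ever further behind an exponential; exactly at alpha = 1 you get the steady exponential that a soft takeoff would need; above 1 the trajectory runs away:

```python
# Toy model (my construction, not from the post): capability c improves each
# step by an amount proportional to c**alpha, where alpha is the returns curve
# that recursive self-improvement feeds back into itself.
def trajectory(alpha, rate=0.05, c0=1.0, steps=200):
    c = c0
    out = [c]
    for _ in range(steps):
        c = c + rate * c ** alpha
        if c > 1e12:          # treat runaway growth as "FOOM" and stop early
            out.append(float("inf"))
            break
        out.append(c)
    return out

labels = {0.5: "diminishing returns", 1.0: "knife-edge exponential", 1.5: "accelerating returns"}
for alpha in (0.5, 1.0, 1.5):
    traj = trajectory(alpha)
    print(f"alpha={alpha} ({labels[alpha]}): "
          f"value after {len(traj) - 1} steps = {traj[-1]:.3g}")
```

The knife-edge at alpha = 1 is the "keyhole": any fixed returns curve a little above it eventually explodes, and any fixed curve a little below it keeps slowing relative to an exponential.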

Or to try and compress it down to a slogan that fits on a T-Shirt - not that I'm saying this is a good idea - "Moore's Law is exponential now; it would be really odd if it stayed exponential with the improving computers doing the research."  I'm not saying you literally get dy/dt = e^y that goes to infinity after finite time - and hardware improvement is in some ways the least interesting factor here - but should we really see the same curve we do now?
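For completeness, here is why dy/dt = e^y does reach infinity after finite time (a standard calculation, included only to unpack the aside above; y_0 is my notation for the initial value):

```latex
\frac{dy}{dt} = e^{y}
\;\Longrightarrow\;
\int e^{-y}\,dy = \int dt
\;\Longrightarrow\;
-e^{-y} = t - e^{-y_0}
\;\Longrightarrow\;
y(t) = -\ln\!\left(e^{-y_0} - t\right),
```

which diverges as t approaches e^{-y_0}.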

RSI is the biggest, most interesting, hardest-to-analyze, sharpest break-with-the-past contributing to the notion of a "hard takeoff" aka "AI go FOOM", but it's nowhere near being the only such factor.  The advent of human intelligence was a discontinuity with the past even without RSI...

