Followup to: Life's Story Continues, Surprised by Brains, Cascades, Cycles, Insight, Recursion, Magic, Engelbart: Insufficiently Recursive, Total Nano Domination

I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability - "AI go FOOM".  Just to be clear on the claim, "fast" means on a timescale of weeks or hours rather than years or decades; and "FOOM" means way the hell smarter than anything else around, capable of delivering in short time periods technological advancements that would take humans decades, probably including full-scale molecular nanotechnology (that it gets by e.g. ordering custom proteins over the Internet with 72-hour turnaround time).  Not, "ooh, it's a little Einstein but it doesn't have any robot hands, how cute".

Most people who object to this scenario, object to the "fast" part. Robin Hanson objected to the "local" part.  I'll try to handle both, though not all in one shot today.

We are setting forth to analyze the developmental velocity of an Artificial Intelligence.  We'll break down this velocity into optimization slope, optimization resources, and optimization efficiency.  We'll need to understand cascades, cycles, insight and recursion; and we'll stratify our recursive levels into the metacognitive, cognitive, metaknowledge, knowledge, and object level.

Quick review:

  • "Optimization slope" is the goodness and number of opportunities in the volume of solution space you're currently exploring, on whatever your problem is;
  • "Optimization resources" is how much computing power, sensory bandwidth, trials, etc. you have available to explore opportunities;
  • "Optimization efficiency" is how well you use your resources.  This will be determined by the goodness of your current mind design - the point in mind design space that is your current self - along with its knowledge and metaknowledge (see below).

Optimizing yourself is a special case, but it's one we're about to spend a lot of time talking about.

By the time any mind solves some kind of actual problem, there's actually been a huge causal lattice of optimizations applied - for example, the human brain evolved, and then humans developed the idea of science, and then applied the idea of science to generate knowledge about gravity, and then you use this knowledge of gravity to finally design a damn bridge or something.

So I shall stratify this causality into levels - the boundaries being semi-arbitrary, but you've got to draw them somewhere:

  • "Metacognitive" is the optimization that builds the brain - in the case of a human, natural selection; in the case of an AI, either human programmers or, after some point, the AI itself.
  • "Cognitive", in humans, is the labor performed by your neural circuitry, algorithms that consume large amounts of computing power but are mostly opaque to you.  You know what you're seeing, but you don't know how the visual cortex works.  The Root of All Failure in AI is to underestimate those algorithms because you can't see them...  In an AI, the lines between procedural and declarative knowledge are theoretically blurred, but in practice it's often possible to distinguish cognitive algorithms and cognitive content.
  • "Metaknowledge":  Discoveries about how to discover, "Science" being an archetypal example, "Math" being another.  You can think of these as reflective cognitive content (knowledge about how to think).
  • "Knowledge":  Knowing how gravity works.
  • "Object level":  Specific actual problems like building a bridge or something.

I am arguing that an AI's developmental velocity will not be smooth; the following are some classes of phenomena that might lead to non-smoothness.  First, a couple of points that weren't raised earlier:

  • Roughness:  A search space can be naturally rough - have unevenly distributed slope. With constant optimization pressure, you could go through a long phase where improvements are easy, then hit a new volume of the search space where improvements are tough.  Or vice versa.  Call this factor roughness.
  • Resource overhangs:  Rather than resources growing incrementally by reinvestment, there's a big bucket o' resources behind a locked door, and once you unlock the door you can walk in and take them all.

And these other factors previously covered:

  • Cascades are when one development leads the way to another - for example, once you discover gravity, you might find it easier to understand a coiled spring.
  • Cycles are feedback loops where a process's output becomes its input on the next round.  As the classic example of a fission chain reaction illustrates, a cycle whose underlying processes are continuous may show qualitative changes of surface behavior - a threshold of criticality - the difference between each neutron leading to the emission of 0.9994 additional neutrons versus each neutron leading to the emission of 1.0006 additional neutrons.  k is the effective neutron multiplication factor and I will use it metaphorically (see the short numerical sketch after this list).
  • Insights are items of knowledge that tremendously decrease the cost of solving a wide range of problems - for example, once you have the calculus insight, a whole range of physics problems become a whole lot easier to solve.  Insights let you fly through, or teleport through, the solution space, rather than searching it by hand - that is, "insight" represents knowledge about the structure of the search space itself.

and finally,

  • Recursion is the sort of thing that happens when you hand the AI the object-level problem of "redesign your own cognitive algorithms".
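
Picking up the cycles item above: here is a minimal numerical sketch (my own illustration, using the post's example numbers) of how the same smooth underlying process gives qualitatively different surface behavior on either side of k = 1.

```python
# Toy illustration (not from the post) of the criticality threshold: the
# update rule is the same smooth multiplication in both cases, but k slightly
# below vs. slightly above 1 gives qualitatively different long-run behavior.

def neutron_population(k, generations=10_000, start=1_000_000.0):
    """Average neutron count after repeatedly multiplying by the factor k."""
    n = start
    for _ in range(generations):
        n *= k          # each neutron yields k neutrons in the next generation
    return n

print(neutron_population(0.9994))   # dwindles to roughly 2.5e3
print(neutron_population(1.0006))   # grows to roughly 4.0e8
```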

Suppose I go to an AI programmer and say, "Please write me a program that plays chess."  The programmer will tackle this using their existing knowledge and insight in the domain of chess and search trees; they will apply any metaknowledge they have about how to solve programming problems or AI problems; they will process this knowledge using the deep algorithms of their neural circuitry; and this neural circuitry will have been designed (or rather its wiring algorithm designed) by natural selection.

If you go to a sufficiently sophisticated AI - more sophisticated than any that currently exists - and say, "write me a chess-playing program", the same thing might happen:  The AI would use its knowledge, metaknowledge, and existing cognitive algorithms.  Only the AI's metacognitive level would be, not natural selection, but the object level of the programmer who wrote the AI, using their knowledge and insight etc.

Now suppose that instead you hand the AI the problem, "Write a better algorithm than X for storing, associating to, and retrieving memories".  At first glance this may appear to be just another object-level problem that the AI solves using its current knowledge, metaknowledge, and cognitive algorithms.  And indeed, in one sense it should be just another object-level problem.  But it so happens that the AI itself uses algorithm X to store associative memories, so if the AI can improve on this algorithm, it can rewrite its code to use the new algorithm X+1.

This means that the AI's metacognitive level - the optimization process responsible for structuring the AI's cognitive algorithms in the first place - has now collapsed to identity with the AI's object level.
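
As a cartoon of that collapse (my own sketch, with invented names; a real AI would of course be nothing this simple): the agent's memory-retrieval routine is an ordinary, modifiable part of its own state, so the output of one object-level task can be installed back into the machinery that will carry out the next one.

```python
# A cartoon of the "algorithm X -> X+1" story (purely illustrative).

def retrieve_v1(memories, key):
    # "Algorithm X": the retrieval routine the agent starts with.
    return memories[key] if key in memories else None

class Agent:
    def __init__(self, retrieve):
        self.retrieve = retrieve
        self.memories = {}

    def solve_object_level_task(self):
        # Object-level problem: "write a better retrieval algorithm than X."
        # The "improvement" is hard-coded here; a real agent would have to
        # search for it using its current knowledge and cognition.
        def retrieve_v2(memories, key):
            return memories.get(key)        # pretend this is the better X+1
        return retrieve_v2

    def self_modify(self):
        # The step where the metacognitive level collapses into the object
        # level: a task output becomes part of the agent itself.
        self.retrieve = self.solve_object_level_task()

agent = Agent(retrieve_v1)
agent.memories["gravity"] = "9.8 m/s^2"
agent.self_modify()
print(agent.retrieve(agent.memories, "gravity"))   # agent now runs on X+1
```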

For some odd reason, I run into a lot of people who vigorously deny that this phenomenon is at all novel; they say, "Oh, humanity is already self-improving, humanity is already going through a FOOM, humanity is already in a Singularity" etc. etc.

Now to me, it seems clear that - at this point in the game, in advance of the observation - it is pragmatically worth drawing a distinction between inventing agriculture and using that to support more professionalized inventors, versus directly rewriting your own source code in RAM.  Before you can even argue about whether the two phenomena are likely to be similar in practice, you need to accept that they are, in fact, two different things to be argued about.

And I do expect them to be very distinct in practice.  Inventing science is not rewriting your neural circuitry.  There is a tendency to completely overlook the power of brain algorithms, because they are invisible to introspection.  It took a long time historically for people to realize that there was such a thing as a cognitive algorithm that could underlie thinking.  And then, once you point out that cognitive algorithms exist, there is a tendency to tremendously underestimate them, because you don't know the specific details of how your hippocampus is storing memories well or poorly - you don't know how it could be improved, or what difference a slight degradation could make.  You can't draw detailed causal links between the wiring of your neural circuitry, and your performance on real-world problems.  All you can see is the knowledge and the metaknowledge, and that's where all your causal links go; that's all that's visibly important.

To see the brain circuitry vary, you've got to look at a chimpanzee, basically.  Which is not something that most humans spend a lot of time doing, because chimpanzees can't play our games.

You can also see the tremendous overlooked power of the brain circuitry by observing what happens when people set out to program what looks like "knowledge" into Good-Old-Fashioned AIs, semantic nets and such.  Roughly, nothing happens.  Well, research papers happen.  But no actual intelligence happens.  Without those opaque, overlooked, invisible brain algorithms, there is no real knowledge - only a tape recorder playing back human words.  If you have a small amount of fake knowledge, it doesn't do anything, and if you have a huge amount of fake knowledge programmed in at huge expense, it still doesn't do anything.

So the cognitive level - in humans, the level of neural circuitry and neural algorithms - is a level of tremendous but invisible power. The difficulty of penetrating this invisibility and creating a real cognitive level is what stops modern-day humans from creating AI.  (Not that an AI's cognitive level would be made of neurons or anything equivalent to neurons; it would just do cognitive labor on the same level of organization.  Planes don't flap their wings, but they have to produce lift somehow.)

Recursion that can rewrite the cognitive level is worth distinguishing.

But to some, having a term so narrow as to refer to an AI rewriting its own source code, and not to humans inventing farming, seems hardly open, hardly embracing, hardly communal; for we all know that to say two things are similar shows greater enlightenment than saying that they are different.  Or maybe it's as simple as identifying "recursive self-improvement" as a term with positive affective valence, so you figure out a way to apply that term to humanity, and then you get a nice dose of warm fuzzies.  Anyway.

So what happens when you start rewriting cognitive algorithms?

Well, we do have one well-known historical case of an optimization process writing cognitive algorithms to do further optimization; this is the case of natural selection, our alien god.

Natural selection seems to have produced a pretty smooth trajectory of more sophisticated brains over the course of hundreds of millions of years.  That gives us our first data point, with these characteristics:

  • Natural selection on sexual multicellular eukaryotic life can probably be treated as, to first order, an optimizer of roughly constant efficiency and constant resources.
  • Natural selection does not have anything akin to insights.  It does sometimes stumble over adaptations that prove to be surprisingly reusable outside the context for which they were adapted, but it doesn't fly through the search space like a human.  Natural selection is just searching the immediate neighborhood of its present point in the solution space, over and over and over.
  • Natural selection does have cascades; adaptations open up the way for further adaptations.

So - if you're navigating the search space via the ridiculously stupid and inefficient method of looking at the neighbors of the current point, without insight - with constant optimization pressure - then...

Well, I've heard it claimed that the evolution of biological brains has accelerated over time, and I've also heard that claim challenged. If there's actually been an acceleration, I would tend to attribute that to the "adaptations open up the way for further adaptations" phenomenon - the more brain genes you have, the more chances for a mutation to produce a new brain gene.  (Or, more complexly: the more organismal error-correcting mechanisms the brain has, the more likely a mutation is to produce something useful rather than fatal.)  In the case of hominids in particular over the last few million years, we may also have been experiencing accelerated selection on brain proteins, per se - which I would attribute to sexual selection, or brain variance accounting for a greater proportion of total fitness variance.

Anyway, what we definitely do not see under these conditions is logarithmic or decelerating progress.  It did not take ten times as long to go from H. erectus to H. sapiens as from H. habilis to H. erectus. Hominid evolution did not take eight hundred million years of additional time, after evolution immediately produced Australopithecus-level brains in just a few million years after the invention of neurons themselves.

And another, similar observation: human intelligence does not require a hundred times as much computing power as chimpanzee intelligence.  Human brains are merely three times too large, and our prefrontal cortices six times too large, for a primate with our body size.

Or again:  It does not seem to require 1000 times as many genes to build a human brain as to build a chimpanzee brain, even though human brains can build toys that are a thousand times as neat.

Why is this important?  Because it shows that with constant optimization pressure from natural selection and no intelligent insight, there were no diminishing returns to a search for better brain designs up to at least the human level.  There were probably accelerating returns (with a low acceleration factor).  There are no visible speedbumps, so far as I know.

But all this is to say only of natural selection, which is not recursive.

If you have an investment whose output is not coupled to its input - say, you have a bond, and the bond pays you a certain amount of interest every year, and you spend the interest every year - then this will tend to return you a linear amount of money over time.  After one year, you've received $10; after 2 years, $20; after 3 years, $30.

Now suppose you change the qualitative physics of the investment, by coupling the output pipe to the input pipe.  Whenever you get an interest payment, you invest it in more bonds.  Now your returns over time will follow the curve of compound interest, which is exponential.  (Please note:  Not all accelerating processes are smoothly exponential.  But this one happens to be.)

The first process grows at a rate that is linear over time; the second process grows at a rate that is linear in its cumulative return so far.
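
A quick arithmetic check of the two cases (the figures are mine, assuming for concreteness a $100 bond paying $10 a year, i.e. a 10% rate):

```python
# Linear vs. compound returns on the bond example above (assumed figures:
# $100 principal, $10/year coupon; purely illustrative).

principal, rate, years = 100.0, 0.10, 30

# Case 1: interest is spent each year -- cumulative interest grows linearly.
spent_total = [principal * rate * t for t in range(years + 1)]

# Case 2: interest is reinvested in more bonds -- holdings compound.
value, reinvested_total = principal, []
for t in range(years + 1):
    reinvested_total.append(value - principal)   # cumulative gain so far
    value *= (1 + rate)

print(spent_total[30])                  # 300.0
print(round(reinvested_total[30], 2))   # 1644.94
```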

The too-obvious mathematical idiom to describe the impact of recursion is replacing an equation

y = f(t)

with

dy/dt = f(y)

For example, in the case above, reinvesting our returns transformed the linearly growing

y = m*t

into

y' = m*y

whose solution (taking y(0) = 1) is the exponentially growing

y = e^(m*t)

Now... I do not think you can really solve equations like this to get anything like a description of a self-improving AI.

But it's the obvious reason why I don't expect the future to be a continuation of past trends.  The future contains a feedback loop that the past does not.

As a different Eliezer Yudkowsky wrote, very long ago:

"If computing power doubles every eighteen months, what happens when computers are doing the research?"

And this sounds horrifyingly naive to my present ears, because that's not really how it works at all - but still, it illustrates the idea of "the future contains a feedback loop that the past does not".

History up until this point was a long story about natural selection producing humans, and then, after humans hit a certain threshold, humans starting to rapidly produce knowledge and metaknowledge that could - among other things - feed more humans and support more of them in lives of professional specialization.

To a first approximation, natural selection held still during human cultural development.  Even if Gregory Clark's crazy ideas are crazy enough to be true - i.e., some human populations evolved lower discount rates and more industrious work habits over the course of just a few hundred years from 1200 to 1800 - that's just tweaking a few relatively small parameters; it is not the same as developing new complex adaptations with lots of interdependent parts.  It's not a chimp-human type gap.

So then, with human cognition remaining more or less constant, we found that knowledge feeds off knowledge with k > 1 - given a background of roughly constant cognitive algorithms at the human level.  We discovered major chunks of metaknowledge, like Science and the notion of Professional Specialization, that changed the exponents of our progress; having lots more humans around, due to e.g. the object-level innovation of farming, may also have played a role.  Progress in any one area tended to be choppy, with large insights leaping forward, followed by a lot of slow incremental development.

With history to date, we've got a series of integrals looking something like this:

Metacognitive = natural selection, optimization efficiency/resources roughly constant

Cognitive = Human intelligence = integral of evolutionary optimization velocity over a few hundred million years, then roughly constant over the last ten thousand years

Metaknowledge = Professional Specialization, Science, etc. = integral over cognition we did about procedures to follow in thinking, where metaknowledge can also feed on itself, there were major insights and cascades, etc.

Knowledge = all that actual science, engineering, and general knowledge accumulation we did = integral of cognition+metaknowledge(current knowledge) over time, where knowledge feeds upon itself in what seems to be a roughly exponential process

Object level = stuff we actually went out and did = integral of cognition+metaknowledge+knowledge(current solutions); over a short timescale this tends to be smoothly exponential to the degree that the people involved understand the idea of investments competing on the basis of interest rate, but over medium-range timescales the exponent varies, and on a long range the exponent seems to increase

If you were to summarize that in one breath, it would be, "with constant natural selection pushing on brains, progress was linear or mildly accelerating; with constant brains pushing on metaknowledge and knowledge and object-level progress feeding back to metaknowledge and optimization resources, progress was exponential or mildly superexponential".

Now fold back the object level so that it becomes the metacognitive level.

And note that we're doing this through a chain of differential equations, not just one; it's the final output at the object level, after all those integrals, that becomes the velocity of metacognition.

You should get...

...very fast progress?  Well, no, not necessarily.  You can also get nearly zero progress.

If you're a recursified optimizing compiler, you rewrite yourself just once, get a single boost in speed (like 50% or something), and then never improve yourself any further, ever again.

If you're EURISKO, you manage to modify some of your metaheuristics, and the metaheuristics work noticeably better, and they even manage to make a few further modifications to themselves, but then the whole process runs out of steam and flatlines.

It was human intelligence that produced these artifacts to begin with.  Their own optimization power is far short of human - so incredibly weak that, after they push themselves along a little, they can't push any further.  Worse, their optimization at any given level is characterized by a limited number of opportunities, which once used up are gone - extremely sharp diminishing returns.
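
To caricature the optimizing-compiler case in a few lines (my own toy, with invented names): self-improvement that draws on a fixed pool of opportunities gives a one-time boost and then stops, instead of compounding.

```python
# Toy model (illustrative only): an "optimizer" that removes a fixed set of
# inefficiencies.  Applying it to its own output finds nothing further to
# remove -- a one-time boost, not a compounding cascade.

REMOVABLE = {"dead_code", "redundant_loads"}

def optimize(program):
    # Represent a program, crudely, as the set of inefficiencies it contains.
    return program - REMOVABLE

program = {"dead_code", "redundant_loads", "poor_algorithmic_choices"}
once = optimize(program)
twice = optimize(once)
print(once, twice, once == twice)   # the second self-application gains nothing
```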

When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, it should either flatline or blow up.  You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.

The observed history of optimization to date makes this even more unlikely.  I don't see any reasonable way that you can have constant evolution produce human intelligence on the observed historical trajectory (linear or accelerating), and constant human intelligence produce science and technology on the observed historical trajectory (exponential or superexponential), and fold that in on itself, and get out something whose rate of progress is in any sense anthropomorphic.  From our perspective it should either flatline or FOOM.
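
As a crude numerical sketch of the "flatline or FOOM" intuition (my own toy, not the post's model): let capability y grow at a rate set by its own current level, dy/dt = y^p, and watch how sensitive the outcome is to the returns exponent p.

```python
# Toy sensitivity check (illustrative only): integrate dy/dt = y**p for a few
# values of the returns exponent p.  p > 1 runs away toward a finite-time
# singularity, p = 1 is plain exponential, p < 1 is far tamer -- small changes
# in the exponent change the qualitative character of the trajectory.

def simulate(p, y0=1.0, dt=0.001, t_max=20.0, cap=1e12):
    """Euler-integrate dy/dt = y**p and report where the trajectory ends up."""
    y, t = y0, 0.0
    while t < t_max:
        y += dt * y**p
        t += dt
        if y > cap:
            return f"p={p}: exceeded {cap:.0e} at t={t:.2f} (blow-up)"
    return f"p={p}: reached only y={y:.3g} by t={t_max} (tame)"

for p in (0.5, 1.0, 1.1):
    print(simulate(p))
```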

When you first build an AI, it's a baby - if it had to improve itself, it would almost immediately flatline.  So you push it along using your own cognition, metaknowledge, and knowledge - not getting any benefit of recursion in doing so, just the usual human idiom of knowledge feeding upon itself and insights cascading into insights.  Eventually the AI becomes sophisticated enough to start improving itself, not just small improvements, but improvements large enough to cascade into other improvements.  (Though right now, due to lack of human insight, what happens when modern researchers push on their AGI design is mainly nothing.)  And then you get what I. J. Good called an "intelligence explosion".

I even want to say that the functions and curves being such as to allow hitting the soft takeoff keyhole, is ruled out by observed history to date.  But there are small conceivable loopholes, like "maybe all the curves change drastically and completely as soon as we get past the part we know about in order to give us exactly the right anthropomorphic final outcome", or "maybe the trajectory for insightful optimization of intelligence has a law of diminishing returns where blind evolution gets accelerating returns".

There's other factors contributing to hard takeoff, like the existence of hardware overhang in the form of the poorly defended Internet and fast serial computers.  There's more than one possible species of AI we could see, given this whole analysis.  I haven't yet touched on the issue of localization (though the basic issue is obvious: the initial recursive cascade of an intelligence explosion can't race through human brains because human brains are not modifiable until the AI is already superintelligent).

But today's post is already too long, so I'd best continue tomorrow.

Post scriptum:  It occurred to me just after writing this that I'd been victim of a cached Kurzweil thought in speaking of the knowledge level as "exponential".  Object-level resources are exponential in human history because of physical cycles of reinvestment.  If you try defining knowledge as productivity per worker, I expect that's exponential too (or productivity growth would be unnoticeable by now as a component in economic progress).  I wouldn't be surprised to find that published journal articles are growing exponentially.  But I'm not quite sure that it makes sense to say humanity has learned as much since 1938 as in all earlier human history... though I'm quite willing to believe we produced more goods... then again we surely learned more since 1500 than in all the time before.  Anyway, human knowledge being "exponential" is a more complicated issue than I made it out to be.  But human object level is more clearly exponential or superexponential.

For some odd reason, I run into a lot of people who vigorously deny that this phenomenon is at all novel; they say, "Oh, humanity is already self-improving, humanity is already going through a FOOM, humanity is already in a Singularity" etc. etc.

People like me. I don't see much of a counter-argument in this post - or at least not a coherent one.

The future contains a feedback loop that the past does not.

It seems like a denial of sexual selection to me. Brainpower went into making new brains historically - via sexual selection. Feedback from the previous generation of brains into the next generation has taken place historically.

It's also a denial of cultural evolution - which is even more obvious. The past contains several decades of Moore's law - the product of a self-improving computer industry. It seems impossible to deny that computers help to design the next generation of computers. They are doing more of the work in each passing generation.

Feedback from the current generation of thinking equipment into the next one is an ancient phenomenon. Even directed mutations and intelligent design are quite old news now.

One of the few remaining dramatic future developments will occur when humans drop out of the loop - allowing it to iterate at more like full speed.

the initial recursive cascade of an intelligence explosion can't race through human brains because human brains are not modifiable until the AI is already superintelligent

I am extremely sceptical about whether we will see much modification of human brains by superintelligent agents. Once we have superintelligence, human brains will go out of fashion the way the horse-and-cart did.

Brains will not suddenly become an attractive platform for future development with the advent of superintelligence - rather they will become even more evidently obsolete junk.

The problem, as I see it, is that you can't take bits out of a running piece of software and replace them with other bits, and have them still work, unless said piece of software is trivial.

The likelihood that you could change the object retrieval mechanism of your AI and have it still be the "same" AI, or even a reasonably functional one, is very low, unless the entire system was deliberately written in an incredibly modular way. And incredibly modular systems are not efficient, which makes it unlikely that any early AI will be written in that manner.

The human brain is a mass of interconnecting systems, all tied together in a mish-mash of complexity. You couldn't upgrade any one part of it by finding a faster replacement for any one section of it. Attempting to perform brain surgery on yourself is going to be a slow, painstaking process, leaving you with far more dead AIs than live ones.

And, of course, as the AI picks the low-hanging fruit of improvements, it'll start running into harder problems, which it may well find itself needing more effort and more attempts to solve.

Which doesn't mean it isn't possible - it just means that it's going to be a slow takeoff, not a fast one.

And incredibly modular systems are not efficient, which makes it unlikely that any early AI will be written in that manner.

Whole-program compilation is all about collapsing modularity into an efficient spaghetti mess, once modularity has served its purpose with information-hiding and static checks.

The problem, as I see it, is that you can't take bits out of a running piece of software and replace them with other bits, and have them still work, unless said piece of software is trivial.

The capacity to do in-place updates of running software components dates back to at least the first LISP systems. Call it 1955? Modern day telephone switches and network routers are all built with the capability of doing hot upgrades, or they wouldn't be able to reach the level of uptime required (if you require 99.9999% uptime, going down for 30 seconds for an upgrade ruins your numbers for ten years). Additionally, those systems require that every component be independently crashable and restartable, for reliability purposes.
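
A minimal sketch of that point (mine, not the commenter's; real hot-upgrade systems such as telecom switches are far more elaborate): a running loop whose handler is rebound in place, without stopping the loop that uses it.

```python
# Toy hot-swap (illustrative): the running loop never stops, but the handler
# it dispatches through is replaced mid-run.

def handler_v1(request):
    return f"v1 handled request {request}"

def handler_v2(request):
    return f"v2 handled request {request}"

service = {"handler": handler_v1}       # indirection point for the live code

for i in range(6):
    print(service["handler"](i))
    if i == 2:
        service["handler"] = handler_v2   # "upgrade" applied while running
```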

The problem, as I see it, is that you can't take bits out of a running piece of software and replace them with other bits, and have them still work, unless said piece of software is trivial.

The AI could probably do a reboot if it needed to. For that matter, the computer you're writing this on probably has modular device drivers that can be replaced without the machine needing a reboot.

"Recursion that can rewrite the cognitive level is worth distinguishing."

Eliezer, would a human that modifies the genes that control how his brain is built qualify as the same class of recursion (but with a longer cycle-time), or is it not quite the same?

Andrew, we're not talking about the equivalent of a human studying neuroscience by groping in the dark. If an AI truly, truly groks every line of its own code, it can pretty much do what it wants with it. No need for trial and error when you have full visibility and understanding of every neuron in your head.

How, you ask? What do such recursive algorithms look like? Mere details; the code monkeys can worry about all that stuff!

Inventing science is not rewriting your neural circuitry. [...] To see the brain circuitry vary, you've got to look at a chimpanzee, basically.

You should be considering augmented humans as single cybernetic entities when you write like this.

Augmented humans are a bunch of sensors linked to a bunch of actuators via a load of computing power - it's just that some of the processing is done by machinery, not neurons.

Then there is substantial variation in the capabilities of the resulting entities - depending on to what extent they augment themselves.

Just looking at the bit with the neurons totally misses out the section where all the action and change is taking place!

To then claim that there's no self-improvement action happening yet is broadly correct - from that blinkered perspective.

However, the reality is that humans don't think with just their bare brains. They soak them in culture and then augment them with machinery. Consider those processes, and you should see the self-improvement that has taken place so far.

I think you have a tendency to overlook our lack of knowledge of how the brain works. You talk of constant brain circuitry, when people add new hippocampal cells through their life. We also expand the brain areas devoted to fingers if we are born blind and use braille.

We don't know how else the brain rewires itself. In some sense all knowledge is wiring in the brain... I mean what else is it going to be. This is all invisible to us, and may throw a spanner in the works of any intelligence trying to improve a moving target.

Depending on which abstractions you emphasize, you can describe a new thing as something completely new under the sun, or as yet another example of something familiar. So the issue is which abstractions make the most sense to use. We have seen cases before where growth via some growth channel opened up more growth channels, to further enable growth. So the question is how similar those situations are to this situation, where an AI getting smarter allows an AI to change its architecture in more and better ways. Which is another way of asking which abstractions are most relevant.

The rapidity of evolution from chimp to human is remarkable, but you can infer what you're trying to infer only if you believe evolution reliably produces steadily more intelligent creatures. It might be that conditions temporarily favored intelligence, leading to humans; our rapid rise is then explained by the anthropic principle, not by universal evolutionary dynamics.

Knowledge = all that actual science, engineering, and general knowledge accumulation we did = integral of cognition+metaknowledge(current knowledge) over time, where knowledge feeds upon itself in what seems to be a roughly exponential process

Knowledge feeds on itself only when it is continually spread out over new domains.  If you keep trying to learn more about the same domain - say, to cure cancer, or make faster computer chips - you get logarithmic returns, requiring an exponential increase in resources to maintain constant output.  (IIRC it has required exponentially-increasing capital investments to keep Moore's Law going; the money will run out before the science does.)  Rescher wrote about this in the 1970s and 1980s.

This is important because it says that, if an AI keeps trying to learn how to improve itself, it will get only logarithmic returns.
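
A one-line check of the relation as the commenter states it (my own illustration): if progress in a fixed domain goes as the logarithm of resources invested, then each further unit of progress requires the cumulative resources to grow by a constant factor.

```python
# If progress = log10(resources), then inverting gives resources = 10**progress:
# each additional unit of progress needs ten times the cumulative resources.

def resources_for(progress_units):
    return 10 ** progress_units

print([resources_for(k) for k in range(1, 5)])   # [10, 100, 1000, 10000]
```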

When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.

This is the most important and controversial claim, so I'd like to see it better-supported. I understand the intuition; but it is convincing as an intuition only if you suppose there are no negative feedback mechanisms anywhere in the whole process, which seems unlikely.

How clear is the distinction between knowledge and intelligence really? The whole genius of the digital computer is that programs are data. When a human observes someone else doing something, they can copy that action: data seems like programs there too.

And yet "cognitive" is listed as several levels above "knowledge" in the above post, and yesterday CYC was mocked as being not much better than a dictionary. Maybe cognition and knowledge are not so separate, but two perspectives on the same thing.

One difference between human cognition and human knowledge is that knowledge can be copied between humans and cognition cannot. That's not (necessarily) true for AIs.

I think it will be a more intuitive abstraction to talk about an AI that designs the next (at the object level) AI using itself as a template. None of us has conceptual experience with directly modifying our own brains. All of us have built something. We also have good natural language tools to deal with things like 'grandchildren' and 'inheriting the knowledge base'. Modifying code in place would be a more likely reality, but I don't think it's relevant to the argument, and just makes the whole thing harder to think about.

Yes we're talking about a self modifying AI design, but each iteration is in many ways a new entity.

This also puts the sexual selection as brains designing brains into perspective. A proto-woman could only choose among the men available to her, which were the result of natural selection "searching the immediate neighborhood of" the previous generation. She can nudge evolution, but she can't use her insight to make a substantially better mate. (I think this is 'metacognitive' vs. 'object level'.)

Even to the degree that there was a feedback loop, it didn't just have 'smart' as its goal. Brains designing brains doesn't get you foom if they're designing/selecting for 'funny' or 'apparently faithful but not really'.


"If computing power doubles every eighteen months, what happens when computers are doing the research?"

And this sounds horrifyingly naive to my present ears

TMOL was freaking brilliant. This post was awesome. It blew me away. Can't wait to see the follow up.

I know this whole comment was kind of vacuous, but yeah. Wow. I don't even have to tell you that you make more sense than virtually everyone on the planet.

I'd like to focus on the example offered: "Write a better algorithm than X for storing, associating to, and retrieving memories." Is this a well defined task?

Wouldn't we want to ask, better by what measure? Is there some well defined metric for this task? And what if it is better at storing, but worse at retrieving, or vice versa? What if it gets better quality answers but takes longer - is that an improvement or not? And what if the AI looks at the algorithm, works on it for a while, and admits in the end that it can't see a way to really make it better? I'm a professional software writer and there are not many standard algorithms where I'd expect to be able to come up with a significant improvement.

As I puzzle over these issues, I am confused whether this example is supposed to represent a task where we (or really, the AI, for I think we are to imagine that it is setting itself the task) would know exactly how to quantify "better"? Or is this associating-memory functionality supposed to be some of the deep dark hidden algorithmic depths of our mind, and the hard part will be figuring out how AI-enabling associative memory works in the first place? And then once we have done so, would improving it be a basically mechanical task which any moderately smart AI or human would certainly be able to accomplish? Or would it require a super-genius AI which can figure out improvements on almost any human-designed algorithm?

And does this example really lead to a recursive cycle of self-improvement, or is it like the optimizing compiler, which can speed up its database access operations but that doesn't make it fundamentally any smarter?

From a practical point of view, a "hard takeoff" would seem to be defined by self-improvement and expansion of control at a rate too fast for humans to cope with. As an example of this, it is often put forward as obvious that the AI would invent molecular nanotechnology in a matter of hours.

Yet there is no reason to think it's even possible to improve molecular simulation, required to search in molecular process-space, much beyond our current algorithms, which on any near-term hardware are nowhere near up to the task. The only explanation is that you are hypothesizing rather incredible increases in abilities such as this without any reason to even think that they are possible.

It's this sort of leap that makes the scenario difficult to believe. Too many miracles seem necessary.

Personally I can entertain the possibility of a "takeoff" (though it is no sure thing that it is possible), but the level of optimization required for a hard takeoff seems unreasonable. It is a lengthy process just to compile a large software project (a trivial transformation). There are limits to what a particular computer can do.

Hal: At some point, you've improved a computer program. You had to decide, somehow, what tradeoffs to make, on your own. We should assume that a superhuman AI will be at least as good at improving programs as we are.

I can't think of any programs of broad scope that I would call unimprovable. (The AI might not be able to improve this algorithm this iteration, but if it really declares itself perfectly optimized, I'd expect we would declare it broken. In fact that sounds like the EURISKO. An AGI should at least keep trying.)

Also: Any process that it knows how to do, that it has learned, it can implement in its own code, so it does not have to 'think things out' with its high-level thinking algorithms. This is repeatable for everything it learns. (We can't do the same thing to create an AI because we don't have access to our algorithms or really even our memories. If an AI can learn to recognize breeds of dogs, then it can trace its own thoughts to determine by what process it does that. Since the learning algorithm probably isn't perfectly optimized to learn how to recognize dogs, the learned process it is using is probably not perfectly efficient.)

The metacognitive level becoming part of the object level lets you turn knowledge and metaknowledge directly into cognitive improvements. For every piece of knowledge, including knowledge about how to program.

Phil: Anthropic pressures should by default be expected to be spread uniformly through our evolutionary history, accelerating the evolutionary and pre-evolutionary record of events leading to us, rather than merely accelerating the last stretch.

Exponential inputs into computer chip manufacture seem to produce exponential returns with a doubling time significantly less than that for the inputs, implying increasing returns per unit input, at least if one measures in terms of feature number.  Obviously returns are exponentially diminishing if one measures in time to finish some particular calculation.  Returns will more interestingly be diminishing per unit labor in terms of hardware design effort per unit of depth to which NP and exponential complexity class problems can be calculated, e.g. the number of moves ahead a chess program can look.  OTOH, it bizarrely appears to be the case that over a large range of chess ranks, human players seem to gain effective chess skill measured by chess rank with roughly linear training while chess programs gain it via exponential speed-up.

Society seems to in aggregate get constant zero returns on efforts to cure cancer, though one can't rule out exponential returns starting from zero. OTOH, this seems consistent with the general inefficacy of medicine in aggregate as shown by the Rand study, which doesn't overturn the individual impacts, as shown by FDA testing, of many individual medical procedures. Life expectancy in the US has grown linearly while GDP per capita has grown exponentially, but among nations in the modern world life expectancy clearly has a different relationship to income, not linear, not logarithmic, more plausibly asymptotic moving towards something in the early 80s.

I'm glad that you consider the claim about turning object level knowledge metacognitive to be the most important and controversial claim. This seems like a much more substantial and precise criticism of Eliezer's position than anything Robin has made so far. It would be very interesting to see you and Eliezer discuss evidence for or against sufficient negative feedback mechanisms - Eliezer's "just the right law of diminishing returns" - existing.

I'm sure you're aware of Schmidhuber's forays into this area with his Gödel Machine. Doesn't this blur the boundaries between the meta-cognitive and cognitive?

For the extended 2009 "Godel Machine" paper, see ref 7 here.

James Andrix: In fact that sounds like the EURISKO.

Could you elaborate? My understanding is that Eurisko never gave up, but Lenat got bored of babysitting it.

(Compound reply from Eliezer.)

Eliezer: When you fold a complicated, choppy, cascade-y chain of differential equations in on itself via recursion, it should either flatline or blow up. You would need exactly the right law of diminishing returns to fly through the extremely narrow soft takeoff keyhole.

Goetz: This is the most important and controversial claim, so I'd like to see it better-supported. I understand the intuition; but it is convincing as an intuition only if you suppose there are no negative feedback mechanisms anywhere in the whole process, which seems unlikely.

Can you give a plausible example of a negative feedback mechanism as such, apart from a law of diminishing returns that would be (nearly) ruled out by historical evidence already available?

I suspect that human economic growth would naturally tend to be faster and somewhat more superexponential, if it were not for the negative feedback mechanism of governments and bureaucracies with poor incentives, that both expand and hinder whenever times are sufficiently good that no one is objecting strongly enough to stop it; when "economic growth" is not the issue of top concern to everyone, all sorts of actions will be taken to hinder economic growth; when the company is not in immediate danger of collapsing, the bureaucracies will add on paperwork; and universities just go on adding paperwork indefinitely. So there are negative feedback mechanisms built into the human economic growth curve, but an AI wouldn't have them because they basically derive from us being stupid and having conflicting incentives.

What would be a plausible negative feedback mechanism - as apart from a law of diminishing returns? Why wouldn't the AI just stomp on the mechanism?

Hanson: Depending on which abstractions you emphasize, you can describe a new thing as something completely new under the sun, or as yet another example of something familiar. So the issue is which abstractions make the most sense to use. We have seen cases before where growth via some growth channel opened up more growth channels, to further enable growth. So the question is how similar those situations are to this situation, where an AI getting smarter allows an AI to change its architecture in more and better ways. Which is another way of asking which abstractions are most relevant.

Well, the whole post above is just putting specific details on that old claim, "Natural selection producing humans and humans producing technology can't be extrapolated to an AI insightfully modifying its low-level brain algorithms, because the latter case contains a feedback loop of an importantly different type; it's like trying to extrapolate a bird flying outside the atmosphere or extrapolating the temperature/compression law of a gas past the point where the gas becomes a black hole."

If you just pick an abstraction that isn't detailed enough to talk about the putative feedback loop, and then insist on extrapolating out the old trends from the absence of the feedback loop, I would consider this a weak response.

Pearson: I think you have a tendency to overlook our lack of knowledge of how the brain works. You talk of constant brain circuitry, when people add new hippocampal cells through their life. We also expand the brain areas devoted to fingers if we are born blind and use braille.

Pearson, "constant brains" means "brains with constant adaptation-algorithms, such as an adaptation-algorithm for rewiring via reinforcement" not "brains with constant synaptic networks". I think a bit of interpretive charity would have been in order here.

Finney: I'd like to focus on the example offered: "Write a better algorithm than X for storing, associating to, and retrieving memories." Is this a well defined task? Wouldn't we want to ask, better by what measure? Is there some well defined metric for this task?

Hal, if this is taking place inside a reasonably sophisticated Friendly AI, then I'd expect there to be something akin to an internal economy of the AI with expected utilons as the common unit of currency. So if the memory system is getting any computer time at all, the AI has beliefs about why it is good to remember things and what other cognitive tasks memory can contribute to. It's not just starting with an inscrutable piece of code that has no known purpose, and trying to "improve" it; it has an idea of what kind of labor the code is performing, and which other cognitive tasks that labor contributes to, and why. In the absence of such insight, it would indeed be more difficult for the AI to rewrite itself, and its development at that time would probably be dominated by human programmers pushing it along.

Ian C.: Eliezer, would a human that modifies the genes that control how his brain is built qualify as the same class of recursion (but with a longer cycle-time), or is it not quite the same?

Owing to our tremendous lack of insight into how genes affect brains, and owing to the messiness of the brain itself as a starting point, we would get relatively slow returns out of this kind of recursion even before taking into account the 18-year cycle time for the kids to grow up.

However, on a scale of returns from ordinary investment, the effect on society of the next generation being born with an average IQ of 140 (on the current scale) might be well-nigh inconceivable. It wouldn't be an intelligence explosion; it wouldn't be the kind of feedback loop I'm talking about - but as humans measure hugeness, it would be huge.

Reid: I'm sure you're aware of Schmidhuber's forays into this area with his Gödel Machine. Doesn't this blur the boundaries between the meta-cognitive and cognitive?

Schmidhuber's "Gödel Machine" is talking about a genuine recursion from object-level to metacognitive level, of the sort I described. However, this problem is somewhat more difficult than Schmidhuber seems to think it is, to put it mildly - but that would be part of the AIXI sequence, which I don't think I'll end up writing. Also, I think some of Schmidhuber's suggestions potentially hamper the system with a protected level.

Vassar: OTOH, it bizarrely appears to be the case that over a large range of chess ranks, human players seem to gain effective chess skill measured by chess rank with roughly linear training while chess programs gain it via exponential speed-up.

I expect that what you're looking at is a navigable search space that the humans are navigating and the AI is grasping through brute-force techniques - yes, Deep Blue wasn't literally brute force, but it was still navigating raw Chess rather than Regularity in Chess. If you're searching the raw tree, returns are logarithmic; the human process of grokking regularities seems to deliver linear returns over practice with a brain in good condition. However, with Moore's Law in play (exponential improvements delivered by human engineers) the AIs outran the brains.

Humans getting linear returns where dumb algorithms get logarithmic returns, seems to be a fairly standard phenomenon in my view - consider natural selection trying to go over a hump of required simultaneous changes, for example.

Tim Tyler: Brainpower went into making new brains historically - via sexual selection. Feedback from the previous generation of brains into the next generation has taken place historically.

If no one besides me thinks this claim is credible, I'll just go ahead and hold it up as an example of the kind of silliness I'm talking about, so that no one accuses me of attacking a strawman.

(Quick reductio: Imagine Jane Cavewoman falling in love with Johnny Caveman on the basis of a foresightful extrapolation of how Johnny's slightly mutated visual cortex, though not useful in its own right, will open up the way for further useful mutations, thus averting the unforesightful basis of natural selection... Sexual selection just applies greater selection pressure to particular characteristics; it doesn't change the stupid parts of evolution at all - in fact, it often makes evolution even more stupid by decoupling fitness from characteristics we would ordinarily think of as "fit" - and this is true even though brains are involved. Missing this and saying triumphantly, "See? We're recursive!" is an example of the overeager rush to apply nice labels that I was talking about earlier.)

Drucker: The problem, as I see it, is that you can't take bits out of a running piece of software and replace them with other bits, and have them still work, unless said piece of software is trivial... The human brain is a mass of interconnecting systems, all tied together in a mish-mash of complexity. You couldn't upgrade any one part of it by finding a faster replacement for any one section of it. Attempting to perform brain surgery on yourself is going to be a slow, painstaking process, leaving you with far more dead AIs than live ones.

As other commenters pointed out, plenty of software is written to enable modular upgrades. An AI with insight into its own algorithms and thought processes is not making changes by random testing like it was bloody evolution or something. A Friendly AI uses deterministic abstract reasoning in this case - I guess I'd have to write a post about how that works to make the point, though.

A poorly written AI might start out as the kind of mess you're describing, and of course, also lack the insight to make changes better than random; and in that case, would get much less mileage out of self-improvement, and probably stay inert.

Douglas: I was going by Eliezer's description. It wasn't smart enough to improve itself anymore [quickly?]. If nothing else, a Human level AI should be able to think more about the problem for a while, profile its thought process, and hardcode that process.

Eliezer: part of the AIXI sequence, which I don't think I'll end up writing.

Ahh, that's a shame, though fully understood. Don't suppose you (or anyone) can link to some literature about AIXI? Haven't been able to find anything comprehensive yet comprehensible to an amateur.

Tim Tyler: Brainpower went into making new brains historically - via sexual selection. Feedback from the previous generation of brains into the next generation has taken place historically.

Tim, Dawkins has a nice sequence in The Blind Watchmaker about a species of bird in which the female began selecting for big, lustrous tails. This led to birds with tails so big they could barely fly to escape predators. While selecting for intelligence in a partner is obviously plausible, I'd have to see very compelling evidence that it's leading to continuously smarter people, or even that it ever could. Possibly a loop of some description there, but K definitely < 1.

However!

Owing to our tremendous lack of insight into how genes affect brains

So what happens when we start to figure out what those genes do? and then start switching them on and off, and gaining more knowledge and insight into how the brain attacks problems? As we've read recently, natural selection increased brainpower (insight) massively through blind stumbling onto low-hanging fruit in a relatively small amount of time. Why would we suppose it reached any sort of limit - or at least a limit we couldn't surmount? The 18 years to maturity thing is pretty irrelevant here, as long as, say, 5% compound insight can be gained per generation. You're still looking at exponential increases, and you might only need a handful of generations before the FOOM itself switches medium.

Ben, that would indeed be the path of humanity's future, I expect, if not for those pesky computers - and the possibility of nanotech and uploading - and all the other interesting things that would happen long before another ten cycles of the flesh.

"I suspect that human economic growth would naturally tend to be faster and somewhat more superexponential, if it were not for the negative feedback mechanism of governments and bureaucracies with poor incentives, that both expand and hinder whenever times are sufficiently good that no one is objecting strongly enough to stop it; when "economic growth" is not the issue of top concern to everyone, all sorts of actions will be taken to hinder economic growth; when the company is not in immediate danger of collapsing, the bureaucracies will add on paperwork; and universities just go on adding paperwork indefinitely."

You may consider expanding this into a post - it is one of the most important insights for the shorter term future I have yet seen on OB (or most other blogs).

A number of people are objecting to Eliezer's claim that the process he is discussing is unique in its FOOM potential, proposing other processes that are similar. Then Eliezer says they aren't similar.

Whether they're similar enough depends on the analysis you want to do. If you want to glance at them and come up with yes or no answer regarding FOOM, then none of them are similar. A key difference is that these other things don't have continual halving of the time per generation. You can account for this when comparing results, but I haven't seen anyone do this.

But some things are similar enough that you can gain some insights into the AI FOOM potential by looking at them. Consider the growth of human societies. A human culture/civilization/government produces ideas, values, and resources used to rewrite itself. This is similar to the AI FOOM dynamics, except with constant and long generation times.

To a tribesman contemplating the forthcoming culture FOOM, it would look pretty simple: Culture is about ways for your tribe to get more land than other tribes.

As culture progressed, we developed all sorts of new goals for it that the tribesman couldn't have predicted.

Analogously, our discussion of the AI FOOM supposes that the AI will not discover new avenues to pursue other than intelligence, that soak up enough of the FOOM to slow down the intelligence part of the FOOM considerably. (Further analysis of this is difficult since we haven't agreed what "intelligence" is.)

Another lesson to learn from culture has to do with complexity. The tribesman, given some ideas of what technology and government would do, would suppose that it would solve all problems. But in fact, as cultures grow more capable, they are able to sustain more complexity; and so our problems get more and more complicated. The idea that human stupidity is holding us back, and AIs will burst into exponential territory once they shake free of these shackles:

I suspect that human economic growth would naturally tend to be faster and somewhat more superexponential, if it were not for the negative feedback mechanism of governments and bureaucracies with poor incentives, that both expand and hinder whenever times are sufficiently good that no one is objecting strongly enough to stop it
is like that tribesman thinking good government will solve all problems. Systems - societies, governments, AIs - expand to the limits of complexity that they can support; at those limits, actions have unintended consequences and agents have not quite enough intelligence to predict them or agree on them, and inefficiency and "stupidity" - relative stupidity - live on.

I'll respond to Eliezer's response to my response later today. Short answer: 1. Diminishing returns exist and are powerful. 2. This isn't something you can eyeball. If you want to say FOOM is probable, fine. If you want to say FOOM is almost inevitable, I want to see equations worked out with specific numbers. You won't convince me with handwaving, especially when other smart people are waving their hands and reaching different conclusions.

I wrote:

Analogously, our discussion of the AI FOOM supposes that the AI will not discover new avenues to pursue other than intelligence - avenues that would soak up enough of the FOOM to slow down the intelligence part of the FOOM considerably.
What I wish I'd said is: What percentage of the AI's efforts will go into algorithm, architecture, and hardware research?

At the start, probably a lot; so this issue may not be important wrt FOOM and humans.

One source of diminishing returns is upper limits on what is achievable. For instance, Shannon proved that there is an upper bound on the rate of error-free communication over a channel. No amount of intelligence can squeeze more error-free capacity out of a channel than this. There are also limits on what is learnable using just induction, even with unlimited resources and unlimited time (cf. "The Logic of Reliable Inquiry" by Kevin T. Kelly). These sorts of limits indicate that an AI cannot improve its meta-cognition exponentially forever. At some point, the improvements have to level off.
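
For concreteness, the Shannon bound in question - the standard Shannon-Hartley capacity of a band-limited channel with Gaussian noise - is:

    \[
    C \;=\; B \log_2\!\left(1 + \frac{S}{N}\right)
    \]

where C is the capacity in bits per second, B the bandwidth in hertz, and S/N the signal-to-noise ratio; no amount of cleverness on the sender's part pushes error-free throughput above C.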

Sexual selection is at the root of practically all the explanations for the origin of our large brains. To quote from Sue Blackmore:

An influential version of social theory is the ‘Machiavellian Intelligence’ hypothesis (Byrne and Whiten 1988; Whiten and Byrne 1997). Social interactions and relationships are not only complex but also constantly changing and therefore require fast parallel processing (Barton and Dunbar 1997). The similarity with Niccolò Machiavelli (1469–1527), the devious adviser of sixteenth-century Italian princes, is that much of social life is a question of outwitting others, plotting and scheming, entering into alliances and breaking them again. All this requires a lot of brain power to remember who is who, and who has done what to whom, as well as to think up ever more crafty wiles, and to double bluff the crafty wiles of your rivals – leading to a spiralling arms race. ‘Arms races’ are common in biology, as when predators evolve to run ever faster to catch their faster prey, or parasites evolve to outwit the immune systems of their hosts. The notion that some kind of spiralling or self-catalytic process is involved certainly suits what Christopher Wills (1993) calls ‘the runaway brain’, and this idea is common among theories that relate language evolution to brain size.

The idea that intelligence can play an important role in evolutionary change arises from the observation that intelligent agents are doing the selecting - in sexual selection. They get to use induction, deduction, analogies, prediction - the whole toolkit of intelligence - and the results are then reflected in the germ line of the next generation. Any idea that the loop between intelligence and brain design information has only closed recently - or has yet to close - is simply wrong. Human brains have been influencing human brain evolution for millions of years - in the same way that they have been influencing dog evolution for millions of years - by acting as the selective agent.

Brains don't just choose. They also create circumstances where there is a lot of information on which to base choices. The mating dance that brainy females often lead males on exposes the males' parasite load - and so their genetic quality - to selection. So: brains do not just select - they actively create opportunities for selection by intelligence to have a large influence.

If you look at female secondary sex characteristics, they are the products of mind, written back onto the body by evolution. Breasts, red lips, and so on are physical projections of mental analogy-making equipment.

Of course the other classic example of the mental turning into the genetic is the Baldwin effect - where learned, acquired characteristics find their way out of the minds in which they arose and back into the gene pool.

Evolution is no stranger to the action of intelligence - indeed, without millions of intelligent choices by our ancestors, the human race as we know it today would not exist.

John: Given any universe whose physics even resembles in character our current standard model, there will be limits to what you can do on fixed hardware and limits to how much hardware you can create in finite time.

But if those limits are far, far, far above the world we think of as normal, then I would consider the AI-go-FOOM prediction to have been confirmed. I.e., if an AI builds its own nanotech and runs off to disassemble Proxima Centauri, that is not infinite power but it is a whole lot of power and worthy of the name "superintelligence".

If you want to say FOOM is probable, fine. If you want to say FOOM is almost inevitable, I want to see equations worked out with specific numbers.

The explosion in computing capability is a historical phenomenon that has been going on for decades. For "specific numbers", for example, look at the well-documented growth of the computer industry since the 1950s. Yes, there are probably limits, but they seem far away - so far away, we are not even sure where they are, or even whether they exist.
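
As a rough back-of-the-envelope illustration of how far that growth has already run - using assumed round numbers (one density doubling roughly every two years since the late 1950s), not a precise history:

    # Back-of-the-envelope sketch (assumed round numbers, illustrative only):
    # roughly one transistor-density doubling every two years since ~1958.
    years = 2008 - 1958
    doublings = years / 2
    growth_factor = 2 ** doublings
    print(f"{doublings:.0f} doublings -> roughly {growth_factor:.0e}x growth")
    # ~25 doublings is a factor of tens of millions; whatever the eventual
    # physical limits are, the curve has not obviously hit them yet.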

"Sexual selection is at the root of practically all the explanations for the origin of our large brains."

Ooh, you triggered one of my cached rants.

Practically all of those explanations start by saying something like, "It's a great mystery how humans got so smart, since you don't need to be that smart to gather nuts and berries."

And that shows tremendous ignorance of how much intelligence is needed to be a hunter-gatherer. (Much more than is needed to be a modern city-dweller.) Most predators have a handful of ways of catching prey; primitive humans have thousands. Just enumerating different types of snares and traps used would bring us over 100.

The things that point towards sexual selection being involved are rapid growth, extreme growth, any sexual dimorphism in the organ involved, and its costliness.

The alternative to sexual selection is natural selection - the idea that big brains helped our ancestors to survive. This seems less plausible as a driver of a large brain - since mere survival is quite a bit easier than survival and successfully raising numerous babies.

"The explosion in computing capability is a historical phenomenon that has been going on for decades. For "specific numbers", for example, look at the well-documented growth of the computer industry since the 1950s. Yes, there are probably limits, but they seem far away - so far away, we are not even sure where they are, or even whether they exist."

The growth you are referring to has a hard upper limit: once transistors are measured in angstroms, they start playing by the rules of quantum mechanics. That is the hard upper limit of the computing growth you are referring to. As for quantum computing, which may or may not take us further, a lot of recent work casts doubt on its ability to solve many of our computing problems. There are plenty of other possible computing technologies; it is just not clear yet which one will emerge on top.

Pearson: I think you have a tendency to overlook our lack of knowledge of how the brain works. You talk of constant brain circuitry, yet people add new hippocampal cells throughout their lives, and the brain areas devoted to the fingers expand in people who are born blind and read braille.

Pearson, "constant brains" means "brains with constant adaptation-algorithms, such as an adaptation-algorithm for rewiring via reinforcement" not "brains with constant synaptic networks". I think a bit of interpretive charity would have been in order here. We don't know how deep the rabbit hole of adaptation goes. Are there constant adaptation-algortihms? Constant adaptation algorithms are not a prerequisite for an optimization process, evolution being the cannonical example. It gets by with a very changeable adaptation-algorithm embodied in the varieties of genetic transfer, reproduction rates etc. We have barely scratched the surface of adaption systems, assuming a constant adaptation-algorithm for intelligence is premature, as far as I am concerned.

Eliezer: If "AI goes FOOM" means that the AI achieves super-intelligence within a few weeks or hours, then it has to be at the meta-cognitive level or the resource-overhang level (taking over all existing computer cycles). You can't run off to Proxima Centauri in that time frame.

"For "specific numbers", for example, look at the well-documented growth of the computer industry since the 1950s."

You would need to show how to interpret those numbers applied to the AI foom.

I'd rather see a model for AI foom built from the ground up, and ranges of reasonable values posited, and validated in some way.

This is a lot of work, but after several years working on the problem, it's one that ought to have a preliminary answer.

Increasingly powerful machines result in increasingly powerful machine intelligence. We built most of those machines to augment human abilities, including - prominently - human intelligence. The faster they get, the smarter they are - since one component of intelligence is speed.

luzr:

" The faster they get the smarter they are - since one component of intelligence is speed."

I think this might be incorrect. Speed means that you can solve the problem faster, not that you can solve a more complex problem.

Intelligence tests are timed for a good reason. If you see intelligence as an optimisation process, it is obvious why speed matters - you can do more trials. There are some cases where it's optimisation / trials that's important - but plenty where it's just getting the answer quickly that matters.
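
A minimal sketch of the "more trials" point, using a hypothetical toy setup (plain random search on an arbitrary objective - nothing specific to any real AI):

    # Toy illustration: with a fixed search algorithm and a fixed wall-clock
    # budget, a faster machine gets more trials and tends to find a better optimum.
    import random

    def random_search(trials, seed=0):
        rng = random.Random(seed)
        best = float("-inf")
        for _ in range(trials):
            x = rng.uniform(-10.0, 10.0)
            best = max(best, -(x - 3.7) ** 2)  # arbitrary objective to maximise
        return best

    print(random_search(trials=100))     # "slow" machine: 100 evaluations
    print(random_search(trials=10_000))  # "fast" machine: 100x the evaluations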

Tim:

It seems to me you are being almost deliberately obtuse. Of course the brain has developed over the long course of evolution via sexual selection. The same process happens in parakeets. They have brains because they evolved brains. Some of that brain power went to making them better at catching prey, and some went to making them better mating-call singers, leading to larger brain size in the next generation. Humans just happen to be the brainiest of all creatures; but the mechanism is the same. You might as well argue that the "intelligence explosion" starts with the first prokaryote.

Ditto your argument about machines being used to improve machines. You know, cave dwellers used tools to improve their tools. Is that when the "intelligence explosion" began with recursion?

You are not even talking about the same topic that Eliezer is. I'm a low-ranking grunt in the rationalist army, and usually I just lurk here because I often feel out of my depth. But it's frustrating to see the thread get hijacked by an argument that has no real content.

The problem you are having is that all of your individual points are true as far as they go, but you're just restating agreed-upon facts using some borrowed terms to make it sound grand and important. I suggest you re-read the posts Eliezer linked to about the "virtue of narrowness" and "sounding wise versus being wise."

Now, back to my lurker's cave.

luzr:

"Intelligence tests are timed for a good reason. If you see intelligence as an optimisation process, it is obvious why speed matters - you can do more trials."

Intelligence tests are designed to measure the performance of the human brain.

Try this: a strong AI running on a 2 GHz CPU. You reduce it to 1 GHz, without changing anything else. Will that make it less intelligent? Slower, definitely.

Of course the brain has developed over the long course of evolution via sexual selection.

This is not obvious - and indeed for a long time it was not even a popular theory. Many people thought tool use - and the resulting survival advantages - was the important factor:

"Most traditional theories, including that of Charles Darwin, suggested some combination of tool use and hunting were the key selective pressures favoring big brains, but increasing evidence of hunting and tool use in other species such as chimpanzees indicates our ancestors were not unique in that regard," Flinn said. "The most exceptional of our mental gifts involves understanding what is going on in other people's minds by using skills such as empathy and self-awareness.

Still today, one viable theory is that the human brain developed once nutritional constraints were lifted - by a diet including meat and seafood that provided omega-rich fatty acids in abundance (see "The Driving Force") - and that theory makes little reference to sexual selection.

You might as well argue that the "intelligence explosion" starts with the first prokaryote.

I refer to that as the "technology explosion" (though strictly that began much earlier). The term "intelligence" is usually associated with organisms which have brains - and I do indeed argue that the explosion began with the origin of brains.

You know, cave dwellers used tools to improve their tools. Is that when the "intelligence explosion" began with recursion?

In my view, the intelligence explosion is best thought of as beginning with the origin of animal brains - since that is when we have evidence that brain size began increasing exponentially. So the answer to your question is "no": the intelligence explosion did not begin with cave dwellers. It is curious that you would ask such a wrong question if you had read my essay on the subject.

I am not the one using the "recursion" terminology - which is being used without saying clearly what is doing the recursing. My point is that if what is doing the recursing is "intelligence", then this is not a new trick for evolution - far from it. Intelligence has been intimately involved in the origin of the next generation for millions of years. So it needs to be specified what type of recursion is going on - else the whole point falls flat.

What is going to be new in the future? It's not intelligence - but it is intelligent design. Previously we only really had intelligent selection. Intelligent mutations were there too in principle - but they had to be time-consumingly transferred into the germ-line via the Baldwin effect.

Eliezer, your basic error regarding the singularity is the planning fallacy. And a lot of people are going to say "I told you so" sooner or later.

Unknown, it can't literally be a planning fallacy because I'm not giving either a plan (a series of exact events) or a timeframe. You perhaps meant to say "conjunction fallacy"? If so, you're still wrong. Listing a dozen different ways that things can exhibit sharp jumps is a disjunction, not a conjunction. "Too far" is a powerful accusation, but I don't think that either "planning fallacy" or "conjunction fallacy" should qualify here.

Because it shows that with constant optimization pressure from natural selection and no intelligent insight, there were no diminishing returns to a search for better brain designs up to at least the human level. There were probably accelerating returns (with a low acceleration factor). There are no visible speedbumps, so far as I know.

Were the brain designs better because they were more powerful or more intelligent?

That is, how many of the improvements were adding more resources to the brain (because more resources paid off in this evolutionary case), rather than adding more efficient programs/systems?

it's as simple as identifying "recursive self-improvement" as a term with positive affective valence, so you figure out a way to apply that term to humanity, and then you get a nice dose of warm fuzzies.

recursive self-improvement is possible and has been effortlessly achieved by many people.

"Intelligence tests are timed for a good reason. If you see intelligence as an optimisation process, it is obvious why speed matters - you can do more trials."

Regarding the differential equations math: isn't this pretty much exactly what Chalmers' "proportionality thesis" formalizes into? And additionally, since m is in the exponent, wouldn't that make two AI teams working with different amounts of hardware like investors investing their money at different interest rates?
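
For what it's worth, a minimal version of the model I take that to refer to - my reconstruction under the simplest proportionality assumption, not necessarily the equation from the post: if the rate of improvement is proportional both to current intelligence I and to a hardware multiplier m, then

    \[
    \frac{dI}{dt} = m\,I \quad\Longrightarrow\quad I(t) = I_0\,e^{m t}
    \]

so m does end up in the exponent, and two teams with different m diverge exactly the way two accounts compounding at different interest rates do.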