Summary
- There's a decent chance that the intelligence of a self-improving AGI will grow in a relatively smooth exponential or sub-exponential way, not super-exponentially or with large jump discontinuities.
- If this is the case, then an AGI whose effective intelligence matched that of the world's combined AI researchers would make AI progress at the rate they do, taking decades to double its own intelligence.
- The risk that the first successful AGI will quickly monopolize many industries, or quickly hack many of the computers connected to the internet, seems worth worrying about. In either case, the AGI would likely end up using the additional computing power it gained to self-modify so it was superintelligent.
- AI boxing could mitigate both of these risks greatly.
- If hard takeoff might not occur, it may be best to assume a soft takeoff and concentrate our resources on making it safe, given that the prospects for a safe hard takeoff look grim.
Takeoff models discussed in the Hanson-Yudkowsky debate
The supercritical nuclear chain reaction model
Yudkowsky alludes to this model repeatedly, starting in this post:
When a uranium atom splits, it releases neutrons - some right away, some after delay while byproducts decay further. Some neutrons escape the pile, some neutrons strike another uranium atom and cause an additional fission. The effective neutron multiplication factor, denoted k, is the average number of neutrons from a single fissioning uranium atom that cause another fission...
It might seem that a cycle, with the same thing happening over and over again, ought to exhibit continuous behavior. In one sense it does. But if you pile on one more uranium brick, or pull out the control rod another twelve inches, there's one hell of a big difference between k of 0.9994 and k of 1.0006.
I don't like this model much for the following reasons:
- The model doesn't offer much insight into the time scale over which an AI might self-improve. The "mean generation time" (time necessary for the next "generation" of neutrons to be released) of a nuclear chain reaction is short, and the doubling time for neutron activity in Fermi's experiment was just two minutes, but it hardly seems reasonable to generalize this to self-improving AIs.
- A flurry of insights that either dies out or expands exponentially doesn't seem like a very good description of how human minds work, and I don't think it would describe an AGI well either. Many people report that taking time to think about problems is key to their problem-solving process. It seems likely that an AGI unable to immediately generate insight into some problem would have a slower, more exhaustive "fallback" search process that would allow it to continue making progress. (Insight could also work via a search process in the first place--over the space of permutations in one's mental model, say.)
The "differential equations folded on themselves" model
This is another model Eliezer alludes to, albeit in a somewhat handwavey fashion:
When you fold a whole chain of differential equations in on itself like this, it should either peter out rapidly as improvements fail to yield further improvements, or else go FOOM.
It's not exactly clear to me what the "whole chain of differential equations" is supposed to refer to... there's only one differential equation in the preceding paragraph, and it's a standard exponential (which could be scary or not, depending on the multiplier in the exponent. Rabbit populations and bank account balances both grow exponentially in a way that's slow enough for humans to understand and control.)
Maybe he's referring to the levels he describes here: metacognitive, cognitive, metaknowledge, knowledge, and object. How might we parameterize this system?
Let's say c is our AGI's cognitive ability, dc/dt is the rate of change in our AGI's cognitive ability, m is our AGI's "metaknowledge" (about cognition and metaknowledge), and dm/dt is the rate of change in metaknowledge. What I've got in mind is:

dc/dt = p * c * m
dm/dt = q * c * m

where p and q are constants.
In other words, the rate of change of cognitive ability and the rate of change of metaknowledge are each directly proportional to both cognitive ability and metaknowledge.
I don't know much about analyzing systems of differential equations, so if you do, please comment! I put the above system into Wolfram Alpha, but I'm not exactly sure how to interpret the solution it provided. In any case, fooling around with this script suggests a sudden, extremely sharp takeoff for a variety of different test parameters.
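Here's a minimal sketch of the kind of experiment I mean (this is not the linked script; the parameters and initial conditions are arbitrary illustrative choices):

```python
# Forward-Euler integration of the coupled system
#
#     dc/dt = p * c * m
#     dm/dt = q * c * m
#
# All numbers here are made up; the point is the shape of the curve.

def simulate(p=0.01, q=0.01, c=1.0, m=1.0, dt=0.001, t_max=150.0, cap=1e9):
    """Integrate until t_max or until cognitive ability c exceeds cap."""
    t = 0.0
    while t < t_max and c < cap:
        dc = p * c * m * dt
        dm = q * c * m * dt
        c, m, t = c + dc, m + dm, t + dt
    return t, c, m

t, c, m = simulate()
# With these toy numbers c barely moves for most of the run, then blows past
# the cap over a very short interval -- the "extremely sharp takeoff"
# behavior described above.
print(f"stopped at t = {t:.2f} with c = {c:.3g}, m = {m:.3g}")
```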
The straight exponential model
To me, the "proportionality thesis" described by David Chalmers in his singularity paper, "increases in intelligence (or increases of a certain sort) always lead to proportionate increases in the capacity to design intelligent systems", suggests a single differential equation that looks like
where u represents the number of upgrades that have been made to an AGI's source code, and s is some constant. The solution to this differential equation is going to look like
where the constant c1 is determined by our initial conditions.
(In Recursive Self-Improvement, Eliezer calls this a "too-obvious mathematical idiom". I'm inclined to favor it for its obviousness, or at least use it as a jumping-off point for further analysis.)
Under this model, the constant s is pretty important... if u(t) was the amount of money in a bank account, s would be the rate of return it was receiving. The parameter s will effectively determine the "doubling time" of an AGI's intelligence. It matters a lot whether this "doubling time" is on the scale of minutes or years.
So what's going to determine s? Well, if the AGI's hardware is twice as fast, we'd expect it to come up with upgrades twice as fast. If the AGI had twice as much hardware, and it could parallelize the search for upgrades perfectly (which seems like a reasonable approximation to me), we'd expect the same thing. So let's decompose s and make it the product of two parameters: h representing the hardware available to the AGI, and r representing the ease of finding additional improvements. The AGI's intelligence will be on the order of u * h, i.e. the product of the AGI's software quality and hardware capability.
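Here's the model in toy code form, just to pin down the bookkeeping; h, r, and u0 are placeholder values, not estimates:

```python
# The straight exponential model with s decomposed into h * r.
import math

h = 1.0           # hardware available to the AGI (arbitrary units)
r = 0.05          # ease of finding additional improvements (assumed)
s = h * r         # effective "rate of return" on software quality
u0 = 1.0          # initial software quality

def software_quality(t):
    """u(t) = u0 * e^(s*t), the solution of du/dt = s * u."""
    return u0 * math.exp(s * t)

def intelligence(t):
    """Overall intelligence is modeled as software quality times hardware."""
    return software_quality(t) * h

doubling_time = math.log(2) / s
print(f"software quality doubles every {doubling_time:.1f} time units")
```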
Considerations affecting our choice of model
Diminishing returns
The consideration here is that the initial improvements implemented by an AGI will tend to be those that are especially easy to implement and/or especially fruitful to implement, with subsequent improvements tending to deliver less intelligence bang for the implementation buck. Chalmers calls this "perhaps the most serious structural obstacle" to the proportionality thesis.
To think about this consideration, one could imagine representing a given improvement as a pair of values (u, d). u represents a factor by which existing performance will be multiplied, e.g. if u is 1.1, then implementing this improvement will improve performance by a factor of 1.1. d represents the cognitive difficulty or amount of intellectual labor required to implement the improvement. If d is doubled, then at any given level of intelligence, implementing this improvement will take twice as long (because it will be harder to discover and/or harder to translate into code).
Now let's imagine ordering our improvements from highest to lowest u-to-d ratio, so we implement the improvements that deliver the greatest bang for the buck first.
Thus ordered, let's imagine separating groups of consecutive improvements into "tiers". Each tier's worth of improvements, when taken together, will represent a doubling of the AGI's software quality, i.e. the product of the u's in that tier will be roughly 2. For a steady doubling time, each tier's total difficulty will need to sum to approximately twice the difficulty of the tier before it. If tier difficulty tends to more than double, we're likely to see sub-exponential growth. If tier difficulty tends to less than double, we're likely to see super-exponential growth. If a single improvement delivers a more-than-2x improvement, it will span multiple "tiers".
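Here's a sketch of that bookkeeping in code. The improvement list is randomly generated, so it illustrates the mechanics rather than any claim about real software:

```python
# Each improvement is a (u, d) pair: a performance multiplier and a
# difficulty. Sort by bang-for-the-buck, then group consecutive improvements
# into tiers whose combined multiplier is roughly 2x.
import math
import random

random.seed(0)
improvements = [(1.0 + random.uniform(0.01, 0.3),   # u: performance multiplier
                 random.uniform(1.0, 50.0))         # d: difficulty
                for _ in range(200)]

# Highest u-to-d ratio first: the cheapest wins get implemented earliest.
improvements.sort(key=lambda ud: ud[0] / ud[1], reverse=True)

tiers, current_tier, log_gain = [], [], 0.0
for u, d in improvements:
    current_tier.append((u, d))
    log_gain += math.log(u)
    if log_gain >= math.log(2):       # this tier has doubled software quality
        tiers.append(current_tier)
        current_tier, log_gain = [], 0.0

for i, tier in enumerate(tiers):
    total_difficulty = sum(d for _, d in tier)
    print(f"tier {i}: {len(tier)} improvements, total difficulty {total_difficulty:.0f}")

# If total difficulty roughly doubles from tier to tier, growth stays roughly
# exponential; if it more than doubles we get sub-exponential growth, and if
# it less than doubles we get super-exponential growth.
```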
It seems to me that the quality of fruit available at each tier represents a kind of logical uncertainty, similar to asking whether an efficient algorithm exists for some task, and if so, how efficient.
On this diminishing returns consideration, Chalmers writes:
If anything, 10% increases in intelligence-related capacities are likely to lead all sorts of intellectual breakthroughs, leading to next-generation increases in intelligence that are significantly greater than 10%. Even among humans, relatively small differences in design capacities (say, the difference between Turing and an average human) seem to lead to large differences in the systems that are designed (say, the difference between a computer and nothing of importance).
Eliezer Yudkowsky's objection is similar:
...human intelligence does not require a hundred times as much computing power as chimpanzee intelligence. Human brains are merely three times too large, and our prefrontal cortices six times too large, for a primate with our body size.
Or again: It does not seem to require 1000 times as many genes to build a human brain as to build a chimpanzee brain, even though human brains can build toys that are a thousand times as neat.
Why is this important? Because it shows that with constant optimization pressure from natural selection and no intelligent insight, there were no diminishing returns to a search for better brain designs up to at least the human level. There were probably accelerating returns (with a low acceleration factor). There are no visible speedbumps, so far as I know.
First, hunter-gatherers can't design toys that are a thousand times as neat as the ones chimps design--they aren't programmed with the software modern humans get through education (some may be unable to count), and educating apes has produced interesting results.
Speaking as someone who's basically clueless about neuroscience, I can think of many different factors that might contribute to intelligence differences within the human race or between humans and other apes:
- Processing speed.
- Cubic centimeters of brain hardware devoted to abstract thinking. (Gifted technical thinkers often seem to suffer from poor social intuition--perhaps a result of reallocation of brain hardware from social to technical processing.)
- Average number of connections per neuron within that brain hardware.
- Average neuron density within that brain hardware. This author seems to think that the human brain's remarkableness comes largely from the fact that it's the largest primate brain, and primate brains maintain the same neuron density when enlarged while other types of brains don't. "If absolute brain size is the best predictor of cognitive abilities in a primate (13), and absolute brain size is proportional to number of neurons across primates (24, 26), our superior cognitive abilities might be accounted for simply by the total number of neurons in our brain, which, based on the similar scaling of neuronal densities in rodents, elephants, and cetaceans, we predict to be the largest of any animal on Earth (28)."
- Propensity to actually use your capacity for deliberate System 2 reasoning. Richard Feynman's second wife on why she divorced him: "He begins working calculus problems in his head as soon as he awakens. He did calculus while driving in his car, while sitting in the living room, and while lying in bed at night." (By the way, does anyone know of research that's been done on getting people to use System 2 more? Seems like it could be really low-hanging fruit for improving intellectual output. Sometimes I wonder if the reason intelligent people tend to like math is because they were reinforced for the behaviour of thinking abstractly as kids (via praise, good grades, etc.) while those not at the top of the class were not so reinforced.)
- Extended neuroplasticity into "childhood".
- Increased calories to think with due to the invention of cooking.
- And finally, mental algorithms ("software"). Which are probably at least somewhat important.
It seems to me like these factors (or ones like them) may multiply together to produce intelligence, i.e. the "intelligence equation", as it were, could be something like intelligence = processing_speed * cc_abstract_hardware * neuron_density * connections_per_neuron * propensity_for_abstraction * mental_algorithms. If the ancestral environment rewarded intelligence, we should expect all of these characteristics to be selected for, and this could explain the "low acceleration factor" in human intelligence increase. (Increasing your processing speed by a factor of 1.2 does more when you're already pretty smart, so all these sources of intelligence increase would feed into one another.)
In other words, it's not that clear what relevance the evolution of human intelligence has to the ease and quality of the upgrades at different "tiers" of software improvements, since evolution operates on many non-software factors, but a self-improving AI (properly boxed) can only improve its software.
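To see how much leverage a multiplicative story like this gives, here's a toy calculation; the factor names follow the hypothetical "intelligence equation" above, and the uniform 1.2x multipliers are made up:

```python
# Modest improvements to several independent factors compound into a large
# overall gain under a multiplicative model of intelligence.
factors = {
    "processing_speed": 1.2,
    "cc_abstract_hardware": 1.2,
    "neuron_density": 1.2,
    "connections_per_neuron": 1.2,
    "propensity_for_abstraction": 1.2,
    "mental_algorithms": 1.2,
}

intelligence_multiplier = 1.0
for factor, gain in factors.items():
    intelligence_multiplier *= gain

print(f"overall intelligence multiplier: {intelligence_multiplier:.2f}x")  # ~2.99x
```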
Bottlenecks
In the Hanson/Yudkowsky debate, Yudkowsky declares Douglas Engelbart's plan to radically bootstrap his team's productivity through improving their computer and software tools "insufficiently recursive". I agree with this assessment. Here's how I model this phenomenon.
When a programmer makes an improvement to their code, the work requires completing many subtasks:
- choosing a feature to add
- reminding themselves of how the relevant part of the code works and loading that information into their memory
- identifying ways to implement the feature
- evaluating different methods of implementing the feature according to simplicity, efficiency, and correctness
- coding their chosen implementation
- testing their chosen implementation, identifying bugs
- identifying the cause of a given bug
- figuring out how to fix the given bug
Each of those subtasks will consist of further subtasks like poking through their code, staring off into space, typing, and talking to their rubber duck.
Now the programmer improves their development environment so they can poke through their code slightly faster. But if poking through their code takes up only 5% of their development time, even an extremely large improvement in code-poking ability is not going to result in an especially large increase in their development speed... in the best case, where code-poking time is reduced to zero, the programmer will only work about 5% faster.
This is a reflection of Amdahl's Law-type thinking. The amount you can gain through speeding something up depends on how much it's slowing you down.
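The arithmetic behind that claim, for concreteness:

```python
# Amdahl's-Law arithmetic for the code-poking example above: if a subtask
# takes fraction p of total time and you speed it up by a factor of s, the
# overall speedup is 1 / ((1 - p) + p / s).
def overall_speedup(p, s):
    return 1.0 / ((1.0 - p) + p / s)

# Code-poking is 5% of development time in the example above.
print(overall_speedup(p=0.05, s=10))    # ~1.047x
print(overall_speedup(p=0.05, s=1e9))   # ~1.053x -- the ceiling, even with a
                                        # near-infinite code-poking speedup
```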
Relatedly, if intelligence is a complicated, heterogeneous process where computation is spread relatively evenly among many modules, then improving the performance of an AGI gets tougher, because upgrading an individual module does little to improve the performance of the system as a whole.
And to see orders-of-magnitude performance improvement in such a process, almost all of your AGI's components will need to be improved radically. If even a few prove troublesome, improving your AGI's thinking speed becomes difficult.
Case studies in technological development speed
Moore's Law
It has famously been noted that if the automotive industry had achieved similar improvements in performance [to the semiconductor industry] in the last 30 years, a Rolls-Royce would cost only $40 and could circle the globe eight times on one gallon of gas—with a top speed of 2.4 million miles per hour.
From this McKinsey report. So Moore's Law is an outlier where technological development is concerned. I suspect that making transistors smaller and faster doesn't require finding ways to improve dozens of heterogeneous components. And when you zoom out to view a computer system as a whole, other bottlenecks typically appear.
(It's also worth noting that research budgets in the semiconductor industry have risen greatly since its inception, but obviously not following the same curve that chip speeds have.)
Compiler technology
This paper on "Proebstig's Law" suggests that the end result of all the compiler research done between 1970 or so and 2001 was that a typical integer-intensive program was compiled to run 3.3 times faster, and a typical floating-point-intensive program was compiled to run 8.1 times faster. When it comes to making programs run quickly, it seems that software-level compiler improvements are swamped by hardware-level chip improvements--perhaps because, like an AGI, a compiler has to deal with a huge variety of different scenarios, so improving it in the average case is tough. (This represents supertask heterogeneity, rather than subtask heterogeneity, so it's a different objection than the one mentioned above.)
Database technology
According to two analyses (full paper for that second one), it seems that improvement in database performance benchmarks has largely been due to Moore's Law.
AI (so far)
Robin Hanson's blog post "AI Progress Estimate" was the best resource I could find on this.
Why smooth exponential growth implies soft takeoff
Let's suppose we consider all of the above, deciding that the exponential model is the best, and we agree with Robin Hanson that there are few deep, chunky, undiscovered AI insights.
Under the straight exponential model, if you recall, we had

du/dt = r * h * u

where u is the degree of software quality, h is the hardware availability, and r is a parameter representing the ease of finding additional upgrades. Our AGI's overall intelligence is given by u * h--the quality of the software times the amount of hardware.
Now we can solve for r by substituting in human intelligence for u * h, and substituting in the rate of human AI progress for du/dt. Another way of saying this is: When the AI is as smart as all the world's AI researchers working together, it will produce new AI insights at the rate that all the world's AI researchers working together produce new insights. At some point our AGI will be just as smart as the world's AI researchers, but we can hardly expect to start seeing super-fast AI progress at that point, because the world's AI researchers haven't produced super-fast AI progress.
Let's assume an AGI that's on par with the world AI research community is reached in 2080 (LW's median "singularity" estimate in 2011). We'll pretend AI research has only been going on since 2000, meaning 80 "standard research years" of progress have gone into the AGI's software. So at the moment our shiny new AGI is fired up, u = 80, and it's doing research at the rate of one "human AI research community year" per year, so du/dt = 1. That's an effective rate of return on AI software progress of 1 / 80 = 1.25%, giving a software quality doubling time of around 55 years.
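Spelled out in code (everything here just restates the assumptions above):

```python
# The back-of-the-envelope calculation above. The 2080 date and the
# "research since 2000" convention are assumptions from the text, not data.
import math

u = 80.0      # "standard research years" of progress embodied in the AGI's software
du_dt = 1.0   # it produces one research-community-year of progress per year
h = 1.0       # normalize hardware so that intelligence = u * h

r = du_dt / (h * u)                    # from du/dt = r * h * u  ->  r = 0.0125
doubling_time = math.log(2) / (r * h)  # ~55 years

print(f"r = {r:.4f} per year; software quality doubles in ~{doubling_time:.0f} years")
```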
You could also apply this kind of thinking to individual AI projects. For example, it's possible that at some point EURISKO was improving itself about as fast as Doug Lenat was improving it. You might be able to do a similar calculation to take a stab at EURISKO's insight level doubling time.
The importance of hardware
According to my model, you double your AGI's intelligence, and thereby the speed with which your AGI improves itself, by doubling the hardware available for your AGI. So if you had an AGI that was interesting, you could make it 4x as smart by giving it 4x the hardware. If an AGI that was 4x as smart could get you 4x as much money (through impressing investors, or playing the stock market, or monopolizing additional industries), that'd be a nice feedback loop. For maximum explosivity, put half your AGI's mind to the task of improving its software, and the other half to the task of making more money with which to buy more hardware.
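A toy simulation of that loop (all constants are invented; this illustrates the dynamic rather than forecasting anything):

```python
# Half the AGI's intelligence goes into software improvements, half into
# earning money that buys more hardware, and the two loops reinforce each
# other.
r = 0.0125        # ease of software improvement (per intelligence-year)
earn_rate = 0.01  # hardware gained per intelligence-year spent earning (assumed)
u, h = 80.0, 1.0  # initial software quality and hardware
dt = 0.1
t = 0.0

while t < 100 and u * h < 1e12:
    intelligence = u * h
    u += r * (0.5 * intelligence) * dt          # half the mind improves software
    h += earn_rate * (0.5 * intelligence) * dt  # half the mind buys hardware
    t += dt

print(f"t = {t:.1f} years: u = {u:.3g}, h = {h:.3g}, intelligence = {u * h:.3g}")
```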
But it seems pretty straightforward to prevent a non-superintelligent AI from gaining access to additional hardware with careful planning. (Note: One problem with AI boxing experiments thus far is that all of the AIs have been played by human beings. Human beings have innate understanding of human psychology and possess specialized capabilities for running emulations of one another. It seems pretty easy to prevent an AGI from acquiring such understanding. But there may exist box-breaking techniques that don't rely on understanding human psychology. Another note about boxing: FAI requires getting everything perfect, which is a conjunctive calculation. Given multiple safeguards, only one has to work for the box as a whole to work, which is a disjunctive calculation.)
AGI's impact on the economy
Is it possible that the first group to create a successful AGI might begin monopolizing different sections of the economy? Robin Hanson argues that technology insights typically leak between different companies, due to conferences and employee poaching. But we can't be confident these factors would affect the research an AGI does on itself. And if an AGI is still dumb enough that a significant portion of its software upgrades are coming from human researchers, it can hardly be considered superintelligent.
Given what looks like a winner-take-all dynamic, an important factor may be the number of serious AGI competitors. If there are only two, the #1 company may not wish to trade insights with the #2 company for fear of losing its lead. If there are more than two, the trailing companies might form an insight-trading alliance against the leader. If that alliance is significantly stronger than the leading company, perhaps the leading company would wish to join it.
But if AI is about getting lots of details right, as Hanson suggests, improvements may not even transfer between different AI architectures.
What should we do?
I've argued that soft takeoff is a strong possibility. Should that change our strategy as people concerned with x-risk?
If we are basically screwed in the event that hard takeoff is possible, it may be that preparing for a soft takeoff is a better use of resources on the margin. Shane Legg has proposed that people concerned with friendliness become investors in AGI projects so they can affect the outcome of any that seem to be succeeding.
Concluding thoughts
Expert forecasts are famously unreliable even in the relatively well-understood field of political forecasting. So given the number of unknowns involved in the emergence of smarter-than-human intelligence, it's hard to say much with certainty. Picture a few Greek scholars speculating on the industrial revolution.
I don't have a strong background in these topics, so I fully expect that the above essay will reveal my ignorance, which I'd appreciate your pointing out in the comments. This essay should be taken as an attempt to hack away at the edges, not come to definitive conclusions. As always, I reserve the right to change my mind about anything ;)
It won't be any smarter at all actually, it will just have more relative time.
Basically, if you take someone and give them 100 days to do something, they will have 100 times as much time as they would if it takes 1 day, but if the task is beyond their capabilities, it will remain beyond their capabilities. Running at 100x speed is only helpful for projects where mental time is the major factor - if you have to run experiments and wait for results, all you're really doing is decreasing the lag time between experiments, and even then only potentially.
It's not even as good as having 100 slaves work on a project (as someone else posited), because you're really just having ONE slave work on the project for 100 days; copying them 100 times likely won't help that issue.
This is one of the fundamental problems with the idea of the singularity in the first place; the truth is that designing more intelligent intelligences is probably HARDER than designing simpler ones, possibly by orders of magnitude, and it may not be scalar at all. If you look at rodent brains and human brains, there are numerous differences between them - scaling up a rodent brain to the same EQ as a human brain would NOT give you something as smart as a human, or even sapient.
You are very likely to see declining returns, not accelerating returns, which is exactly what we see in all other fields of technology - the higher you get, the harder it is to go further.
Moreover, it isn't even clear what a "superhuman" intelligence even means. We don't have any way of measuring intelligence absolutely that I am aware of - IQ is a statistical measure, as are standardized tests. We can't say that human A is twice as smart as human B, and without such a metric it may be difficult to determine just how much smarter anything is than a human in the first place. If four geniuses can work together and get the same result as a computer which takes 1000 times as much energy to do the same task, then the computer is inefficient no matter how smart it is.
This efficiency is ANOTHER major barrier as well - human brains run off of Cheerios, whereas any AI we build is going to be massively less efficient in terms of energy usage per cycle, at least for the foreseeable future.
Another question is whether there is some sort of effective cap on intelligence given energy, heat dissipation, proximity of processing centers, etc. Given that we're only going to see microchips 256 times as dense on a plane as we have presently available, and given the various issues with heat dissipation of 3D chips (not to mention expense), we may well run into some barriers here.
I was looking at some stuff last night and while people claim we may be able to model the brain using an exascale computer, I am actually rather skeptical after reading up on it - while 150 trillion connections between 86 billion neurons doesn't sound like that much on the exascale, we have a lot of other things, such as glial cells, which appear to play a role in intelligence, and it is not unlikely that their function is completely vital in a proper simulation. Indeed, our utter lack of understanding of how the human brain works is a major barrier to even thinking about how we can make something more intelligent than a human which is not a human - it's pretty much pure fantasy at this point. It may be that ridiculous parallelization with low latency is absolutely vital for sapience, and that could very well put a major crimp on silicon-based intelligences, due to their more linear nature, even with things like GPUs and multicore processors, because the human brain is sending out trillions of signals with each step.
Some possibilities for simulating the human brain could easily take 10^22 FLOPS or more, and given the limitations of transistor-based computing, that looks like it is about the level of supercomputer we'd have in 2030 or so - but I wouldn't expect much better than that beyond that point because the only way to make better processors at that point is going up or out, and to what extent we can continue doing that... well, we'll have to see, but it would very likely eat up even more power and I would have to question the ROI at some point. We DO need to figure out how intelligence works, if only because it might make enhancing humans easier - indeed, unless intelligence is highly computationally efficient, organic intelligences may well be the optimal solution from the standpoint of efficiency, and no sort of exponential takeoff is really possible, or even likely, with such.
In many fields of technology, we see sigmoid curves, where initial advancements lead to accelerating returns until it becomes difficult to move further ahead without running up against hard problems or fundamental limits, and returns diminish.
Making an artificial intelligence as capable as a human intelligence may be difficult, but that doesn't mean that if we reach t...