My problem with the focus on the idea of intelligence explosion is that it's too often presented as motivating the problem of FAI, but it really doesn't. It's a strategic consideration sitting alongside Hanson's Malthusian ems, killer biotech, and cognitive modification: one more thing that makes the problem urgent, but still one among many.
What ultimately matters is implementing humane value (which involves figuring out what that is). The specific manner in which we lose the ability to do so is immaterial. If intelligence explosion is close, humane value will lose control over the future quickly. If instead we change our nature through future cognitive modification tech, or by experimenting on uploads, then the grasp of humane value on the future will fail in an orderly manner, slowly but just as irrevocably yielding control to wherever the winds of value drift blow.
It's incorrect to predicate the importance or urgency of gaining FAI-grade understanding of humane value on the possibility of intelligence explosion. Other technologies that would allow value drift are for all practical purposes similarly close.
(That said, I do believe AGIs lead to intelligence explosions. This point is important for appreciating the impact and danger of AGI research, once the complexity of humane value is understood, and for seeing one form that the implementation of a hypothetical future theory of humane value could take.)
Too much theory, not enough empirical evidence. In theory, FAI is an urgent problem that demands most of our resources (Eliezer is on record saying that the only two legitimate occupations are working on FAI, and earning lots of money so you can donate it to other people working on FAI).
In practice, FAI is just another Pascal's mugging / Lifespan Dilemma / St. Petersburg paradox. From XiXiDu's blog:
...To be clear, extrapolations work and often are the best we can do. But since there are problems such as the above, that we perceive to be undesirable and that lead to absurd consequences, I think it is reasonable to ask for some upper and lower bounds regarding the use and scope of certain heuristics.
[...]
Taking into account considerations of vast utility or low probability quickly leads to chaos-theoretic considerations like the butterfly effect. As a computationally bounded and psychically unstable agent I am unable to cope with that. Consequently I see no other way than to neglect the moral impossibility of extreme uncertainty.
Until [various rationality puzzles] are resolved, or sufficiently established, I will continue to put vastly more weight on empirical evidence and my intuition...
(2) the claim that intelligence explosion is likely to occur within the next 150 years
A "scientific" prediction with a time-frame of several decades and no clear milestones along the way is equivalent to a wild guess. From 20 Predictions of the Future (We’re Still Waiting For):
Weather Control: In 1966, a radio documentary, 2000 AD, was aired as a forum for various media and science personalities to discuss what life might be like in the year 2000. The primary theme running through the show concerned a prediction that no one in the year 2000 would have to work more than a day or two a week, and our leisure time would go through the roof. With so much free time, you can imagine that we would not want our vacations or day trips ruined by nasty weather, and therefore we should quickly develop a way to control the weather, shaping it to our needs. Taking the lightning from the clouds or the wind from the tornadoes were among the predictions, yet they were careful to note that we might not take weather control too far because of political reasons. Unfortunately, we here in the 2000s still work full weeks, and we still get our picnics rained out from time to time.
One could argue that weather control is an easier problem than AGI (e.g. powerful enough equipment could "unwind" storms and/or redirect air masses).
I boldly claim that my criticisms are better than those of the Tech Luminaries:
Also see:
Crocker’s rules declared, because I expect this may agitate some people:
(1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization.
I accept (1) and (3). Where I depart somewhat from the LW consensus is that I don't believe anyone is going to accept the idea that the singularity (in its intelligence explosion form) should go ahead without some important intervening stages, and those stages are likely to last longer than 150 years.
CEV is a bad idea. I am sympathetic towards the mindset of the people who advocate it, but even I would be in the pitchfork-wielding gang if it looked like someone was actually going to implement it. Try to imagine that this was actually going to happen next year, rather than being a fun thing discussed on an internet forum – beware far mode bias. To quote Robin Hanson in a recent OB post:
...Immersing ourselves in images of things large in space, time, and social distance puts us into a "transcendent" far mode where positive feelings are strong, our basic ideals are more visible than practical...
This is like the kid version of this page. Where are the good opposing arguments? These are all terrible...
Something about this page bothers me - the responses are included right there with the criticisms. It just gives off the impression that a criticism isn't going to appear until lukeprog has a response to it, or that he is going to write the criticism in a way that makes it easy to respond to, or something.
Maybe it's just me. But if I wanted to write this page, I would try to put myself into the mind of the other side and produce the most convincing smackdown of the intelligence explosion concept that I could. I'd think about what the responses would be, but only so that I could get the obvious responses to those responses in first. In other words, aim for DH7.
The responses could be collected and put on another page, or included here when this page is a bit more mature. Does anyone think this approach would help?
Distinguish positive and negative criticisms: Those aimed at demonstrating the unlikelihood of an intelligence explosion and those aimed at merely undermining the arguments/evidence for the likelihood of an intelligence explosion (thus moving the posterior probability of the explosion closer to its prior probability).
Here is the most important negative criticism of the intelligence explosion: Possible harsh diminishing returns of intelligence amplification. Let f(x, y) measure the difficulty (perhaps in expected amount of time to complete development) for an intelligence of IQ x to engineer an intelligence of IQ y. The claim that intelligence explodes is roughly equivalent to the thesis that f(x, x+1) decreases relatively quickly. What is the evidence for this claim? I haven't seen a huge amount. Chalmers briefly discusses the issue in his article on the singularity and points to how amplifying a human being's intelligence from average to Alan Turing's level has the effect of amplifying his intelligence-engineering ability from more or less nil to being able to design a basic computer. But "nil" and "basic computer" are strictly stupider than "average human...
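To make the diminishing-returns worry concrete, here is a minimal toy sketch (my own illustration; the functional forms and numbers are assumptions, not estimates from any source). Whether the cumulative engineering time stays bounded depends entirely on how fast f(x, x+1) shrinks:

    # Toy model of the difficulty function f described above. f_step(x) stands in
    # for f(x, x + 1): the time an intelligence at level x needs to engineer one
    # at level x + 1. Total time to climb from `start` to `end` is the sum of steps.

    def total_time(f_step, start=100, end=1000):
        return sum(f_step(x) for x in range(start, end))

    # Explosive regime: each step is 10% cheaper than the last (geometric decay),
    # so the total time converges to about 10 units no matter how far the climb goes.
    explosive = lambda x: 0.9 ** (x - 100)

    # Harsh diminishing returns: each step costs twice as much as the last,
    # so progress effectively stalls after a handful of increments.
    stalling = lambda x: 2.0 ** (x - 100)

    print(total_time(explosive))  # about 10
    print(total_time(stalling))   # astronomically large

The empirical question the comment raises is which regime real intelligence engineering sits in; the toy model only shows that the conclusion is extremely sensitive to that choice.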
the claim that intelligence explosion is likely to occur within the next 150 years
"We have made little progress toward cross-domain intelligence."
That is, while human-AI comparisons are turning to the advantage of so-called AIs in an increasing number of narrow domains, the goal of cross-domain generalization of insight seems as elusive as ever, and there doesn't seem to be a hugely promising angle of attack (in the sense that you see AI researchers swarming to explore that angle).
Meat brains are badly constrained in ways that non-meat brains need not be.
Agreed; and there's an overbroad reading of this claim which I'm worried people encountering it (e.g. in the guise of Eliezer's argument about "the space of all possible minds") can inadvertently fall into: assuming that, just because we can't imagine them, there are no constraints that apply to any class of non-meat brains.
The movie that runs through our minds when we imagine "AGI recursive self-improvement" goes something like a version of Hollywood hacker movies, except with the AI in the role of the hacker. It's sitting at a desk wearing mirrorshades, and looking for the line in its own code that has the parameter for "number of working memory items". When it finds it, it goes "aha!" and suddenly becomes twice as powerful as before.
That is vivid, but probably not how it works. For instance, "number of working memory items" can be a functional description of the system without having an easily identifiable bit of the code where it's determined, in an AI's substrate just as well as in a human mind.
"Response: There are physical limits to how intelligent something can get, but they easily allow the intelligence required to transform the solar system."
How do you know this? I would like some more argument behind this response. In particular, what if some things are impossible? For instance, it might be true that cold fusion is unachievable, that we will never travel faster than the speed of light (even by cheating), and that nanotech suffers from some hard limits.
Science fiction becomes science fact
I grit my teeth whenever someone intentionally writes this cliché; "science fact" isn't a noun phrase one would use in any other context.
What phrase would you use to describe the failure to produce an AGI over the last 50 years? I suspect that 50 years from now we might be saying "Wow, that was hard. We've learnt a lot, specific kinds of problem solving work well, and computers are really fast now, but we still don't really know how to go about creating an AGI." In other words, the next 50 years might strongly resemble the last 50 from a very high-level view.
In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.
We are amassing vast computational capacities without yet understanding much of how to use them effectively, period. The Raspberry Pi exceeds the specs of a Cray-1, and costs a few hundred thousand times less. What will it be used for? Playing games, mostly. And probably not games that make us smarter.
A common criticism is that intelligence isn't defined, is poorly defined, cannot be defined, can only be defined relative to human beings, etc.
I guess you could lump general criticism of the computationalist approach to cognition in here (dynamicism, embodiment, ecological psychology, etc.). Perhaps intelligence explosion scenarios can be constructed for alternative approaches, but they seem far from obvious.
Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)
Even for someone who hasn't read the sequences, that sounds like a pretty huge "unless." If the two don't have exactly the same interests, why wouldn't their interests interfere?
"Humans will be able to keep up with AGI by using AGI's advancements themselves."
Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.
Why not? I can think of a couple possible explanations, but none that hold up to scrutiny. I'm probably missing something.
Humans can't alter their neural structure. This strikes me as unlikely. It is possible to cre...
"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."
Problem of scale. AGI would let us have 6 trillion general intelligences. Having, for every human that exists, a thousand intelligences that like working without pay "won't be a big deal"?
I think otherwise. And that's even assuming these AGIs are 'merely' "roughly human-equivalent".
There are two kinds of people here: those who think that an intelligence explosion is unlikely, and those who think it is uncontrollable.
I think it is likely AND controllable.
On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.
Wouldn't this make more sense on the wiki?
Another criticism that's worth mentioning is an observational issue rather than a modeling one: if AI were a major part of the Great Filter, then we'd see evidence of it in the sky from when AGIs started to control the space around them at a substantial fraction of c. This should discount the probability of AGIs undergoing intelligence explosions. How much it should do so is a function of how much of the Great Filter one thinks is behind us and how much one thinks is in front of us.
You are missing a train of argument which trumps all of these lines of criticism: the intelligence explosion is already upon us. Creating a modern microprocessor chip is a stupendously complex computational task that is far, far beyond the capabilities of any number of un-amplified humans, no matter how intelligent. There was a time when chips were simple enough that a single human could do all of this intellectual work, but we are already decades past that point.
Today new chips are created by modern-day superintelligences (corporations), which in turn are...
Computational complexity may place strong limits on how much recursive self-improvement can occur, especially in a software context. See e.g. this prior discussion and this ongoing one. In particular, if P is not equal to NP in a strong sense, this may place serious limits on software improvement.
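To make the scaling concern vivid (this is my own illustration, not drawn from the linked discussions): when no clever algorithm is known, the fallback is exhaustive search over candidate designs, and that cost doubles with every additional design bit. If key self-improvement subproblems are NP-hard and P is not equal to NP in a strong sense, something like this wall is what a self-improving system could run into once easy heuristics are exhausted.

    # Illustrative only: brute-force search over n-bit candidate "designs".
    from itertools import product

    def brute_force(score, n_bits):
        """Score all 2**n_bits candidates and return the best one."""
        return max(product([0, 1], repeat=n_bits), key=score)

    # Arbitrary toy objective; the point is the 2**n growth, not the objective.
    score = lambda bits: sum(bits) - 2 * (bits[0] ^ bits[-1])

    for n in (10, 15, 20):
        brute_force(score, n)  # each extra 5 bits multiplies the work by 32;
                               # 2**20 (about a million candidates) is already
                               # noticeably slow in pure Python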
The main problems that I see are the ones Eliezer described at the Singularity Summit: there are problems regarding AGI that we don't know how to solve even in principle (I'm not sure if this applies to AGI in general or only to Friendly AGI). So it might well be that we never solve these problems.
The most difficult part will be to ensure the friendliness of the AI. The biggest danger is someone else carelessly making an AGI that is not friendly.
I have a reason to believe it has less than a 50% chance of being possible. Does that count?
I figure that after the low-hanging fruit is taken care of, it simply becomes a question of whether a unit of additional intelligence is enough to add another unit of intelligence. If this feedback constant is less than one, intelligence growth stops. If it is greater than one, intelligence grows until the constant falls below one. The constant will vary somewhat with intelligence, and it would have to fall below one eventually. We have no way of knowing what the feedback constant is, so we...
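Here is a minimal sketch of that recurrence (the decay law for the feedback constant is purely an assumption for illustration): each gain in intelligence buys k further units, growth continues while k > 1, and stalls once k drops below 1.

    # Toy model of the feedback-constant argument above. k is assumed, purely
    # for illustration, to decay slowly as intelligence rises.

    def run(k0, decay=0.999, steps=50):
        intelligence, gain = 100.0, 1.0
        for _ in range(steps):
            k = k0 * decay ** (intelligence - 100)  # feedback constant at this level
            gain *= k                               # each unit of gain buys k units next round
            if gain < 1e-6:                         # growth has effectively stopped
                break
            intelligence += gain
        return intelligence

    print(run(k0=1.5))  # starts above 1: large growth, then stalls once k dips below 1
    print(run(k0=0.8))  # starts below 1: growth fizzles after a few points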
On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.
[Under construction.]
"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."
Example: "I see no reason to single out AI as a mould-breaking technology: we already have billions of humans." (Deutsch, The Beginning of Infinity, p. 456.)
Response: The advantages of mere digitality (speed, copyability, goal coordination) alone are transformative, and will increase the odds of rapid recursive self-improvement in intelligence. Meat brains are badly constrained in ways that non-meat brains need not be.
"Intelligence requires experience and learning, so there is a limit to the speed at which even a machine can improve its own intelligence."
Example: "If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time--an exponential increase in intelligence--then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn't a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have." (Hawkins, Tech Luminaries Address Singularity)
Response: Intelligence defined as optimization power doesn't necessarily need experience or learning from the external world. Even if it did, a superintelligent machine spread throughout the internet could gain experience and learning from billions of sub-agents all around the world simultaneously, while near-instantaneously propagating these updates to its other sub-agents.
"There are hard limits to how intelligent a machine can get."
Example: "The term 'singularity' applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence. Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run... Exponential growth requires the exponential consumption of resources (matter, energy, and time), and there are always limits to this." (Hawkins, Tech Luminaries Address Singularity)
Response: There are physical limits to how intelligent something can get, but they easily allow the intelligence required to transform the solar system.
"AGI won't be malevolent."
Example: "No intelligent machine will 'wake up' one day and say 'I think I will enslave my creators.'" (Hawkins, Tech Luminaries Address Singularity)
Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)
Response: True. But most runaway machine superintelligence designs would kill us inadvertently. "The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else."
"If intelligence explosion was possible, we would have seen it by now."
Example: "I don't believe in technological singularities. It's like extraterrestrial life--if it were there, we would have seen it by now." (Rodgers, Tech Luminaries Address Singularity)
Response: Not true.
"Humanity will destroy itself before AGI arrives."
Example: "the population will destroy itself before the technological singularity." (Bell, Tech Luminaries Address Singularity)
Response: This is plausible, though there are many reasons to think that AGI will arrive before other global catastrophic risks do.
"The Singularity belongs to the genre of science fiction."
Example: "The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived." (Pinker, Tech Luminaries Address Singularity)
Response: This is not an issue of literary genre, but of probability and prediction. Science fiction becomes science fact several times every year. In the case of technological singularity, there are good scientific and philosophical reasons to expect it.
"Intelligence isn't enough; a machine would also need to manipulate objects."
Example: "The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans." (Moore, Tech Luminaries Address Singularity)
Response: Robotics is making strong progress in addition to AI.
"Human intelligence or cognitive ability can never be achieved by a machine."
Example: "Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system-but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines." (Lucas, Minds, Machines and Goedel)
Example: "Instantiating a computer program is never by itself a sufficient condition of [human-liked] intentionality." (Searle, Minds, Brains, and Programs)
Response: "...nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain... As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on.... we can set aside these objections by stipulating that for the purposes of the argument, intelligence is to be measured wholly in terms of behaviour and behavioural dispositions, where behaviour is construed operationally in terms of the physical outputs that a system produces." (Chalmers, The Singularity: A Philosophical Analysis)
"It might make sense in theory, but where's the evidence?"
Example: "Too much theory, not enough empirical evidence." (MileyCyrus, LW comment)
Response: "Papers like How Long Before Superintelligence contain some of the relevant evidence, but it is old and incomplete. Upcoming works currently in progress by Nick Bostrom and by SIAI researchers contain additional argument and evidence, but even this is not enough. More researchers should be assessing the state of the evidence."
"Humans will be able to keep up with AGI by using AGI's advancements themselves."
Example: "...an essential part of what we mean by foom in the first place... is that it involves a small group accelerating in power away from the rest of the world. But the reason why that happened in human evolution is that genetic innovations mostly don't transfer across species. [But] human engineers carry out exactly this sort of technology transfer on a routine basis." (rwallace, The Curve of Capability)
Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.
"A discontinuous break with the past requires lopsided capabilities development."
Example: "a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided... [But] the lopsidedness is not occurring [in computers]. Obviously computer technology hasn't lagged in symbol processing - quite the contrary." (rwallace, The Curve of Capability)
Example: "Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing." (Katja Grace, How Far Can AI Jump?)
Response: It doesn't seem that symbol processing was the missing capability that made humans so powerful. Calculators have superior symbol processing, but have no power to rule the world. Also: many kinds of lopsidedness are occurring in computing technology that may allow a sudden discontinuous jump in AI abilities. In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.
"No small set of insights will lead to massive intelligence boost in AI."
Example: "...if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations." (Robin Hanson, Is the City-ularity Near?)
Example: "Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities... But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare." (Robin Hanson, The Betterness Explosion)
Response: An intelligence explosion doesn't require a breakthrough that improves all capabilities at once. Rather, it requires an AI capable of improving its intelligence in a variety of ways. Then it can use the advantages of mere digitality (speed, copyability, goal coordination, etc.) to improve its intelligence in dozens or thousands of ways relatively quickly.
To be added: