On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.
[Under construction.]
"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."
Example: "I see no reason to single out AI as a mould-breaking technology: we already have billions of humans." (Deutsch, The Beginning of Infinity, p. 456.)
Response: The advantages of mere digitality (speed, copyability, goal coordination) alone are transformative, and will increase the odds of rapid recursive self-improvement in intelligence. Meat brains are badly constrained in ways that non-meat brains need not be.
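As a rough, order-of-magnitude sketch (the figures below are standard approximations, not precise measurements), the hardware speed gap alone is enormous:

```latex
% Approximate serial-speed comparison: biological neurons vs. digital hardware
\text{neuron peak firing rate} \sim 10^{2}\ \mathrm{Hz}, \qquad
\text{modern processor clock} \sim 10^{9}\ \mathrm{Hz}
\;\Rightarrow\; \text{serial-speed ratio} \sim 10^{7}

% Approximate signal-propagation comparison
\text{axonal conduction} \sim 10^{2}\ \mathrm{m/s}, \qquad
\text{electronic signals} \sim 10^{8}\ \mathrm{m/s}
\;\Rightarrow\; \text{propagation ratio} \sim 10^{6}
```

And unlike a brain, a digital mind can be copied onto additional hardware at roughly the cost of the hardware itself.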
"Intelligence requires experience and learning, so there is a limit to the speed at which even a machine can improve its own intelligence."
Example: "If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time--an exponential increase in intelligence--then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn't a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have." (Hawkins, Tech Luminaries Address Singularity)
Response: Intelligence defined as optimization power doesn't necessarily need experience or learning from the external world. Even if it did, a superintelligent machine spread throughout the internet could gain experience and learning from billions of sub-agents all around the world simultaneously, while near-instantaneously propagating these updates to its other sub-agents.
"There are hard limits to how intelligent a machine can get."
Example: "The term 'singularity' applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence. Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run... Exponential growth requires the exponential consumption of resources (matter, energy, and time), and there are always limits to this." (Hawkins, Tech Luminaries Address Singularity)
Response: There are physical limits to how intelligent something can get, but they easily allow the intelligence required to transform the solar system.
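As a back-of-envelope illustration (a sketch assuming Landauer's principle and a brain-sized power budget of about 20 W; all figures are order-of-magnitude approximations), thermodynamics alone leaves vast headroom above biology:

```latex
% Landauer limit: minimum energy to erase one bit at room temperature (T ~ 300 K)
E_{\min} = k_B T \ln 2
         \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
         \approx 2.9\times10^{-21}\ \mathrm{J\ per\ bit}

% Ceiling on irreversible bit operations for a ~20 W power budget
20\ \mathrm{W} \;/\; 2.9\times10^{-21}\ \mathrm{J/bit} \;\approx\; 7\times10^{21}\ \text{bit operations per second}

% For scale, the Sun radiates roughly 3.8\times10^{26}\ \mathrm{W}
```

Common estimates of the brain's effective computation fall several orders of magnitude below this ceiling, and solar-system-scale energy budgets dwarf a 20 W brain, so the relevant physical limits are nowhere near binding.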
"AGI won't be malevolent."
Example: "No intelligent machine will 'wake up' one day and say 'I think I will enslave my creators.'" (Hawkins, Tech Luminaries Address Singularity)
Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)
Response: True. But most runaway machine superintelligence designs would kill us inadvertently: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk)
"If intelligence explosion was possible, we would have seen it by now."
Example: "I don't believe in technological singularities. It's like extraterrestrial life--if it were there, we would have seen it by now." (Rodgers, Tech Luminaries Address Singularity)
Response: The analogy fails. An intelligence explosion cannot begin until someone builds a machine capable of improving its own intelligence, and no one has yet done so; the absence of a technology we have not yet built is not evidence that it is impossible.
"Humanity will destroy itself before AGI arrives."
Example: "the population will destroy itself before the technological singularity." (Bell, Tech Luminaries Address Singularity)
Response: This is plausible, though there are many reasons to think that AGI will arrive before other global catastrophic risks do.
"The Singularity belongs to the genre of science fiction."
Example: "The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived." (Pinker, Tech Luminaries Address Singularity)
Response: This is not an issue of literary genre but of probability and prediction. Science fiction becomes science fact several times every year. In the case of the technological singularity, there are good scientific and philosophical reasons to expect it.
"Intelligence isn't enough; a machine would also need to manipulate objects."
Example: "The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans." (Moore, Tech Luminaries Address Singularity)
Response: Robotics is making strong progress alongside AI.
"Human intelligence or cognitive ability can never be achieved by a machine."
Example: "Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system-but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines." (Lucas, Minds, Machines and Goedel)
Example: "Instantiating a computer program is never by itself a sufficient condition of [human-liked] intentionality." (Searle, Minds, Brains, and Programs)
Response: "...nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain... As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on.... we can set aside these objections by stipulating that for the purposes of the argument, intelligence is to be measured wholly in terms of behaviour and behavioural dispositions, where behaviour is construed operationally in terms of the physical outputs that a system produces." (Chalmers, The Singularity: A Philosophical Analysis)
"It might make sense in theory, but where's the evidence?"
Example: "Too much theory, not enough empirical evidence." (MileyCyrus, LW comment)
Response: "Papers like How Long Before Superintelligence contain some of the relevant evidence, but it is old and incomplete. Upcoming works currently in progress by Nick Bostrom and by SIAI researchers contain additional argument and evidence, but even this is not enough. More researchers should be assessing the state of the evidence."
"Humans will be able to keep up with AGI by using AGI's advancements themselves."
Example: "...an essential part of what we mean by foom in the first place... is that it involves a small group accelerating in power away from the rest of the world. But the reason why that happened in human evolution is that genetic innovations mostly don't transfer across species. [But] human engineers carry out exactly this sort of technology transfer on a routine basis." (rwallace, The Curve of Capability)
Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.
"A discontinuous break with the past requires lopsided capabilities development."
Example: "a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided... [But] the lopsidedness is not occurring [in computers]. Obviously computer technology hasn't lagged in symbol processing - quite the contrary." (rwallace, The Curve of Capability)
Example: "Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing." (Katja Grace, How Far Can AI Jump?)
Response: It doesn't seem that symbol processing was the missing capability that made humans so powerful. Calculators have superior symbol processing, but have no power to rule the world. Also: many kinds of lopsidedness are occurring in computing technology that may allow a sudden discontinuous jump in AI abilities. In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.
"No small set of insights will lead to massive intelligence boost in AI."
Example: "...if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations." (Robin Hanson, Is the City-ularity Near?)
Example: "Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities... But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare." (Robin Hanson, The Betterness Explosion)
Response: An intelligence explosion doesn't require a breakthrough that improves all capabilities at once. Rather, it requires an AI capable of improving its intelligence in a variety of ways. Then it can use the advantages of mere digitality (speed, copyability, goal coordination, etc.) to improve its intelligence in dozens or thousands of ways relatively quickly.
To be added:
- Massimo Pigliucci on Chalmers' Singularity talk
- XiXiDu on intelligence explosion as a disjunctive or conjunctive event, on intelligence explosion as a low-priority global risk, on basic AI drives
- Diminishing returns from intelligence amplification
My problem with the focus on the idea of intelligence explosion is that it's too often presented as the motivation for the problem of FAI, when it really isn't. It is one strategic consideration sitting alongside Hanson's Malthusian ems, killer biotech, and cognitive modification: one more thing that makes the problem urgent, but still one among many.
What ultimately matters is implementing humane value (which involves figuring out what that is). The specific manner in which we lose the ability to do so is immaterial. If intelligence explosion is close, humane value will lose control over the future quickly. If instead we change our nature through future cognitive modification tech, or by experimenting on uploads, then the grasp of humane value on the future will fail in an orderly manner, slowly but just as irrevocably yielding control to wherever the winds of value drift blow.
It's incorrect to predicate the importance or urgency of gaining an FAI-grade understanding of humane value on the possibility of intelligence explosion. Other technologies that would allow value drift are, for all practical purposes, similarly close.
(That said, I do believe AGIs lead to intelligence explosions. This point is important for appreciating the impact and danger of AGI research, if the complexity of humane value is understood, and for seeing one form that the implementation of a hypothetical future theory of humane value could take.)
This is a good point. I think there's one reason to give special attention to the intelligence explosion concept, though: it's part of the proposed solution as well as one of the possible problems.
The two main ideas here are:
- that a recursively self-improving AI is one of the most serious risks we face, and
- that a recursively self-improving AI built to implement humane value is also the core of the proposed solution.
These ideas seem to be central to the utility-maximizing FAI concept.