On this page I will collect criticisms of (1) the claim that intelligence explosion is plausible, (2) the claim that intelligence explosion is likely to occur within the next 150 years, and (3) the claim that intelligence explosion would have a massive impact on civilization. Please suggest your own, citing the original source when possible.
[Under construction.]
"AGI won't be a big deal; we already have 6 billion general intelligences on Earth."
Example: "I see no reason to single out AI as a mould-breaking technology: we already have billions of humans." (Deutsch, The Beginning of Infinity, p. 456.)
Response: The advantages of mere digitality (speed, copyability, goal coordination) alone are transformative, and will increase the odds of rapid recursive self-improvement in intelligence. Meat brains are badly constrained in ways that non-meat brains need not be.
"Intelligence requires experience and learning, so there is a limit to the speed at which even a machine can improve its own intelligence."
Example: "If you define the singularity as a point in time when intelligent machines are designing intelligent machines in such a way that machines get extremely intelligent in a short period of time--an exponential increase in intelligence--then it will never happen. Intelligence is largely defined by experience and training, not just by brain size or algorithms. It isn't a matter of writing software. Intelligent machines, like humans, will need to be trained in particular domains of expertise. This takes time and deliberate attention to the kind of knowledge you want the machine to have." (Hawkins, Tech Luminaries Address Singularity)
Response: Intelligence defined as optimization power doesn't necessarily need experience or learning from the external world. Even if it did, a superintelligent machine spread throughout the internet could gain experience and learning from billions of sub-agents all around the world simultaneously, while near-instantaneously propagating these updates to its other sub-agents.
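A minimal toy sketch of the mechanism this response appeals to (everything here, including the SubAgent class and the invented estimation task, is hypothetical illustration, not drawn from the cited sources): many identical digital copies gather experience in parallel, their updates are pooled, and the result is broadcast back to every copy at once.

```python
import random

# Toy sketch (hypothetical, for illustration only): many identical digital
# "sub-agents" gather experience in parallel; their updates are pooled and
# then broadcast back to every copy at once.

class SubAgent:
    """One copy of a shared learner; here it merely estimates a hidden value."""
    def __init__(self, params):
        self.params = dict(params)  # each copy starts from the shared parameters

    def gather_experience(self, environment, n=100):
        # Each copy collects its own observations independently of the others.
        return [environment() for _ in range(n)]

    def propose_update(self, observations):
        # Summarise local experience as a proposed parameter value (a simple mean).
        return sum(observations) / len(observations)


def environment():
    # Stand-in for "the external world": noisy observations of a hidden quantity.
    return 42.0 + random.gauss(0, 5)


shared_params = {"estimate": 0.0}
agents = [SubAgent(shared_params) for _ in range(1000)]  # many copies, one shared mind

# Every copy learns in parallel; the pooled update is pushed to all copies at once.
updates = [agent.propose_update(agent.gather_experience(environment)) for agent in agents]
shared_params["estimate"] = sum(updates) / len(updates)
for agent in agents:
    agent.params = dict(shared_params)  # near-instant propagation of what any copy learned

print(round(shared_params["estimate"], 1))  # close to 42.0 after one pooled round
```

The toy says nothing about superintelligence itself; it only illustrates that experience-gathering parallelizes across digital copies in a way it cannot across biological brains.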
"There are hard limits to how intelligent a machine can get."
Example: "The term 'singularity' applied to intelligent machines refers to the idea that when intelligent machines can design intelligent machines smarter than themselves, it will cause an exponential growth in machine intelligence leading to a singularity of infinite (or at least extremely large) intelligence. Belief in this idea is based on a naive understanding of what intelligence is. As an analogy, imagine we had a computer that could design new computers (chips, systems, and software) faster than itself. Would such a computer lead to infinitely fast computers or even computers that were faster than anything humans could ever build? No. It might accelerate the rate of improvements for a while, but in the end there are limits to how big and fast computers can run... Exponential growth requires the exponential consumption of resources (matter, energy, and time), and there are always limits to this." (Hawkins, Tech Luminaries Address Singularity)
Response: There are physical limits on how intelligent a system can become, but those limits lie far above the human level: they easily allow the intelligence required to transform the solar system.
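A rough back-of-envelope sketch, using standard textbook figures rather than anything from the sources above, suggests how much headroom those physical limits leave. Landauer's principle bounds the energy cost of one irreversible bit operation at $k_B T \ln 2$ (with $k_B$ the Boltzmann constant and $T$ the temperature); the 20 W power figure and the $\sim 10^{16}$ ops/s brain estimate below are common estimates, not claims from the cited sources.

```latex
% Back-of-envelope sketch (standard estimates, not from the cited sources).
% Landauer bound on the energy of one irreversible bit operation at T = 300 K:
\begin{align*}
E_{\min} &= k_B T \ln 2
          \approx (1.38\times10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
          \approx 2.9\times10^{-21}\,\mathrm{J}. \\
\intertext{A 20 W budget (roughly a human brain's power draw) therefore permits up to}
\frac{20\,\mathrm{W}}{2.9\times10^{-21}\,\mathrm{J/bit}} &\approx 7\times10^{21}\ \text{bit operations per second},
\end{align*}
% several orders of magnitude above common $\sim 10^{16}$ ops/s estimates for the brain.
```

The exact numbers matter less than the conclusion: the physical ceiling on computation, even under very conservative assumptions, sits vastly above biological performance.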
"AGI won't be malevolent."
Example: "No intelligent machine will 'wake up' one day and say 'I think I will enslave my creators.'" (Hawkins, Tech Luminaries Address Singularity)
Example: "...it's more likely than not in my view that the two species will comfortably and more or less peacefully coexist--unless human interests start to interfere with those of the machines." (Casti, Tech Luminaries Address Singularity)
Response: True. But most runaway machine superintelligence designs would kill us inadvertently, not malevolently. "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." (Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk)
"If intelligence explosion was possible, we would have seen it by now."
Example: "I don't believe in technological singularities. It's like extraterrestrial life--if it were there, we would have seen it by now." (Rodgers, Tech Luminaries Address Singularity)
Response: The analogy fails. An intelligence explosion cannot begin until we first build a machine with roughly human-level general intelligence, and we have not yet done so. Its absence to date is therefore exactly what we should expect, and tells us little about whether it will occur once that precondition is met.
"Humanity will destroy itself before AGI arrives."
Example: "the population will destroy itself before the technological singularity." (Bell, Tech Luminaries Address Singularity)
Response: This is plausible, though there are many reasons to think that AGI will arrive before other global catastrophic risks do.
"The Singularity belongs to the genre of science fiction."
Example: "The fact that you can visualize a future in your imagination is not evidence that it is likely or even possible. Look at domed cities, jet-pack commuting, underwater cities, mile-high buildings, and nuclear-powered automobiles--all staples of futuristic fantasies when I was a child that have never arrived." (Pinker, Tech Luminaries Address Singularity)
Response: This is not an issue of literary genre, but of probability and prediction. Science fiction becomes science fact several times every year. In the case of the technological singularity, there are good scientific and philosophical reasons to expect it.
"Intelligence isn't enough; a machine would also need to manipulate objects."
Example: "The development of humans, what evolution has come up with, involves a lot more than just the intellectual capability. You can manipulate your fingers and other parts of your body. I don't see how machines are going to overcome that overall gap, to reach that level of complexity, even if we get them so they're intellectually more capable than humans." (Moore, Tech Luminaries Address Singularity)
Response: Robotics is making strong progress in addition to AI.
"Human intelligence or cognitive ability can never be achieved by a machine."
Example: "Goedel's theorem must apply to cybernetical machines, because it is of the essence of being a machine, that it should be a concrete instantiation of a formal system. It follows that given any machine which is consistent and capable of doing simple arithmetic, there is a formula which it is incapable of producing as being true---i.e., the formula is unprovable-in-the-system-but which we can see to be true. It follows that no machine can be a complete or adequate model of the mind, that minds are essentially different from machines." (Lucas, Minds, Machines and Goedel)
Example: "Instantiating a computer program is never by itself a sufficient condition of [human-liked] intentionality." (Searle, Minds, Brains, and Programs)
Response: "...nothing in the singularity idea requires that an AI be a classical computational system or even that it be a computational system at all. For example, Penrose (like Lucas) holds that the brain is not an algorithmic system in the ordinary sense, but he allows that it is a mechanical system that relies on certain nonalgorithmic quantum processes. Dreyfus holds that the brain is not a rule-following symbolic system, but he allows that it may nevertheless be a mechanical system that relies on subsymbolic processes (for example, connectionist processes). If so, then these arguments give us no reason to deny that we can build artificial systems that exploit the relevant nonalgorithmic quantum processes, or the relevant subsymbolic processes, and that thereby allow us to simulate the human brain... As for the Searle and Block objections, these rely on the thesis that even if a system duplicates our behaviour, it might be missing important ‘internal’ aspects of mentality: consciousness, understanding, intentionality, and so on.... we can set aside these objections by stipulating that for the purposes of the argument, intelligence is to be measured wholly in terms of behaviour and behavioural dispositions, where behaviour is construed operationally in terms of the physical outputs that a system produces." (Chalmers, The Singularity: A Philosophical Analysis)
"It might make sense in theory, but where's the evidence?"
Example: "Too much theory, not enough empirical evidence." (MileyCyrus, LW comment)
Response: "Papers like How Long Before Superintelligence contain some of the relevant evidence, but it is old and incomplete. Upcoming works currently in progress by Nick Bostrom and by SIAI researchers contain additional argument and evidence, but even this is not enough. More researchers should be assessing the state of the evidence."
"Humans will be able to keep up with AGI by using AGI's advancements themselves."
Example: "...an essential part of what we mean by foom in the first place... is that it involves a small group accelerating in power away from the rest of the world. But the reason why that happened in human evolution is that genetic innovations mostly don't transfer across species. [But] human engineers carry out exactly this sort of technology transfer on a routine basis." (rwallace, The Curve of Capability)
Response: Human engineers cannot take a powerful algorithm from AI and implement it in their own neurobiology. Moreover, once an AGI is improving its own intelligence, it's not clear that it would share the 'secrets' of these improvements with humans.
"A discontinuous break with the past requires lopsided capabilities development."
Example: "a chimpanzee could make an almost discontinuous jump to human level intelligence because it wasn't developing across the board. It was filling in a missing capability - symbolic intelligence - in an otherwise already very highly developed system. In other words, its starting point was staggeringly lopsided... [But] the lopsidedness is not occurring [in computers]. Obviously computer technology hasn't lagged in symbol processing - quite the contrary." (rwallace, The Curve of Capability)
Example: "Some species, such as humans, have mostly taken over the worlds of other species. The seeming reason for this is that there was virtually no sharing of the relevant information between species. In human society there is a lot of information sharing." (Katja Grace, How Far Can AI Jump?)
Response: It doesn't seem that symbol processing was the missing capability that made humans so powerful. Calculators have superior symbol processing, but have no power to rule the world. Also: many kinds of lopsidedness are occurring in computing technology that may allow a sudden discontinuous jump in AI abilities. In particular, we are amassing vast computational capacities without yet understanding the algorithmic keys to general intelligence.
"No small set of insights will lead to massive intelligence boost in AI."
Example: "...if there were a super mind theory that allowed vast mental efficiency gains all at once, but there isn’t. Minds are vast complex structures full of parts that depend intricately on each other, much like the citizens of a city. Minds, like cities, best improve gradually, because you just never know enough to manage a vast redesign of something with such complex inter-dependent adaptations." (Robin Hanson, Is the City-ularity Near?)
Example: "Now if you artificially hobble something so as to simultaneously reduce many of its capacities, then when you take away that limitation you may simultaneously improve a great many of its capabilities... But beyond removing artificial restrictions, it is very hard to simultaneously improve many diverse capacities. Theories that help you improve capabilities are usually focused on a relatively narrow range of abilities – very general and useful theories are quite rare." (Robin Hanson, The Betterness Explosion)
Response: An intelligence explosion doesn't require a breakthrough that improves all capabilities at once. Rather, it requires an AI capable of improving its intelligence in a variety of ways. Then it can use the advantages of mere digitality (speed, copyability, goal coordination, etc.) to improve its intelligence in dozens or thousands of ways relatively quickly.
To be added:
- Massimo Pigliucci on Chalmers' Singularity talk
- XiXiDu on intelligence explosion as a disjunctive or conjunctive event, on intelligence explosion as a low-priority global risk, on basic AI drives
- Diminishing returns from intelligence amplification
Crocker’s rules declared, because I expect this may agitate some people:
I accept (1) and (3). Where I depart somewhat from the LW consensus is in believing that no one is going to accept the idea that the singularity (in its intelligence explosion form) should go ahead without some important intervening stages, stages that are likely to last for longer than 150 years.
CEV is a bad idea. I am sympathetic towards the mindset of the people who advocate it, but even I would be in the pitchfork-wielding gang if it looked like someone was actually going to implement it. Try to imagine that this was actually going to happen next year, rather than being a fun thing discussed on an internet forum – beware far mode bias. To quote Robin Hanson in a recent OB post:
I don’t trust fallible human programmers to implement soundly “knowing more”, “thinking faster” and “growing up together”, and deal with the problems of “muddle”, “spread” and “distance”. The idea of a “last judge” as a safety measure seems like a sticking plaster on a gaping wound. Neither do I accept that including all of humanity is anything other than misplaced idealism. Some people seem to think that even a faulty CEV initial dynamic magically corrects itself into a good one; that might happen, but not with nearly a high enough probability.
Another problem that has scarcely been discussed: what happens if, as Eliezer's CEV document suggests might happen, the thing shuts itself down or the last judge decides it isn't safe? And what if the same thing happens the second time we try it, too?
But the problem remains that a superintelligence needs a full set of human values in order to be safe, and I don't see any tenable proposal for implementing this apart from CEV; I therefore conclude that building a recursively improving superintelligence is basically just unsafe, given present human competence levels. Given that, arguing that we cannot prevent a (positive or negative) singularity indefinitely simply because we are likely to obtain the means to bring one about is like arguing that we cannot prevent a nuclear extinction event indefinitely simply because we possess nuclear technology. If FAI is an "impossible" challenge and NAI (No AI) is merely very difficult, there is something to recommend NAI.
That isn't to say that I disapprove of what Eliezer et al. are doing. The singularity is definitely an extremely important thing to be discussing. I just think that the end product is likely to be widespread recognition of the peril of playing around with AI, and this (along with appropriately severe action taken to reduce the peril) is just as much a solution to Yudkowsky's fear that a bunch of above-average AI scientists can "learn from each other's partial successes and accumulate hacks as a community" as is trying to beat them to the punch by rushing to create a positive singularity.
Although this may be unfair, there is probably some truth to the idea that people who devote their lives to studying AI and the intelligence explosion are likely to be biased towards solutions on which their work achieves something really positive, rather than merely acting as a warning. That is not to pre-judge the issue, but merely to recommend that a little more skepticism than usual is due.
On the other hand, there is another tenable approach to the singularity that is less widely recognised here. Wei Dai's posts here and here seem very sensible to me; he suggests that intelligence enhancement should take priority over FAI research:
He quotes Eliezer as having said this (from pages 31-35 here):
Wei Dai points out that it is worth distinguishing the ease of creating uFAI from the ease of creating FAI, rather than lumping the two together as "AI".
I also think that the difference in outcomes between “deliberately not working on Friendly AI” and “treating unsupervised AI work as a terrible crime” are worth distinguishing.
This depends on the probability one assigns to CEV working. My probability that it would work given present human competence levels is low, and my probability that anyone would actually let it happen is very low.
The benefit of intelligence enhancement is that changes can be as unhurried and incremental as one likes (assuming that, thanks to stringent security measures, the risk of someone building uFAI is not considered imminent); CEV, by contrast, is more of a leap of faith.
Seconded - I'd like to see some material from lukeprog or somebody else at SI addressing these kinds of concerns. A "Criticisms of CEV" page maybe?
[Edit: just to clarify, I wasn't seconding the part about the pitchforks and I'm not sure that either IA or an AGI ban is an obviously better strategy. But I agree with everything else here]