I've been working on a new draft of the intelligence explosion analysis I'm co-authoring with Anna Salamon. I've incorporated much of the feedback LWers have given me, and will now present snippets of the new draft for further feedback. Please ignore the formatting issues caused by moving the text from Google Docs to Less Wrong.
_____
Intelligence Explosion: Evidence and Import
Anna Salamon
Luke Muehlhauser
The best answer to the question, “Will computers ever be as smart as humans?” is probably “Yes, but only briefly.”
Vernor Vinge
Humans may create human-level artificial intelligence in this century.1 Shortly thereafter, we may see an “intelligence explosion” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole.
How likely is this, and what should we do about it? Others have discussed these questions previously (Turing 1950; Good 1965; Von Neumann 1966; Solomonoff 1985; Vinge 1993; Bostrom 2003; Yudkowsky 2008; Chalmers 2010), but no brief, systematic review of the relevant issues has been published. In this chapter we aim to provide such a review.
Why study intelligence explosion?
As Chalmers (2010) notes, the singularity is of great practical interest:
If there is a singularity, it will be one of the most important events in the history of the planet. An intelligence explosion has enormous potential benefits: a cure for all known diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet...
The singularity is also a challenging scientific and philosophical topic. Under the spectre of intelligence explosion, long-standing philosophical puzzles about values, other minds, and personal identity become, as Chalmers puts it, "life-or-death questions that may confront us in coming decades or centuries." In science, the development of AI will require progress in several of mankind's grandest scientific projects, including reverse-engineering the brain (Schierwagen 2011) and developing artificial minds (Nilsson 2010), while the development of AI safety mechanisms may require progress on the confinement problem in computer science (Lampson 1973; Yampolskiy 2011) and the cognitive science of human values (Muehlhauser and Helm, this volume). The creation of AI would also revolutionize the scientific method, as most science would be done by intelligent machines (Sparkes et al. 2010).
Such questions are complicated, the future is uncertain, and our chapter is brief. Our aim, then, is not to provide detailed arguments but only to sketch the issues involved, pointing the reader to authors who have analyzed each component in more detail. We believe these matters are important, and our discussion of them must be permitted to begin at a low level because there is no other place to lay the first stones.
What we will (not) argue
"Technological singularity" has come to mean many things (Sandberg, this volume), including: accelerating technological change (Kurzweil 2005), a limit in our ability to predict the future (Vinge 1993), and the topic we will discuss: an intelligence explosion leading to the creation of machine superintelligence (Yudkowsky 1996). Because the singularity is associated with such a variety of views and arguments, we must clarify what this chapter will and will not argue.
First, we will not tell detailed stories about the future. In doing so, we would likely commit the “if and then” fallacy, by which an improbable conditional becomes a supposed actual (Nordmann 2007). For example, we will not assume the continuation of Moore’s law, nor that hardware trajectories determine software progress, nor that technological trends will be exponential rather than logistic (see Modis, this volume), nor indeed that progress will accelerate rather than decelerate (see Plebe and Perconti, this volume). Instead, we will examine convergent outcomes that — like the evolution of eyes or the emergence of markets — can come about through many different paths and can gather momentum once they begin. Humans tend to underestimate the likelihood of such convergent outcomes (Tversky and Kahneman 1974), and we believe intelligence explosion is one of them.
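To see why we treat trend extrapolation with caution, consider that exponential and logistic curves are nearly indistinguishable in their early phases. Below is a minimal illustrative sketch (our own, with arbitrary growth parameters, not data from any cited source):

```python
import math

def exponential(t, x0=1.0, r=0.1):
    """Unbounded exponential growth: x(t) = x0 * e^(r*t)."""
    return x0 * math.exp(r * t)

def logistic(t, x0=1.0, r=0.1, k=1000.0):
    """Logistic growth toward carrying capacity k; starts out ~exponential."""
    return k / (1 + ((k - x0) / x0) * math.exp(-r * t))

# Early on the two trajectories nearly coincide; they diverge only as the
# logistic curve approaches its ceiling, which is why early data alone
# cannot tell us which regime a technological trend is in.
for t in (0, 20, 40, 60, 80):
    print(f"t={t:2d}  exponential={exponential(t):8.1f}  logistic={logistic(t):8.1f}")
```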
Second, we will not assume that human intelligence is realized by a classical computation system, nor that intelligent machines will have internal mental properties like consciousness or "understanding." Such factors are mostly irrelevant to the occurrence of a singularity, so objections to these claims (Lucas 1961; Dreyfus 1972; Searle 1980; Block 1981; Penrose 1994; Van Gelder and Port 1995) are not objections to the singularity.
What, then, will we argue? First, we suggest there is a significant probability we will create human-level AI (hereafter, "AI") within a century. Second, we suggest that AI is likely to lead rather quickly to machine superintelligence. Finally, we discuss the possible consequences of machine superintelligence and consider which actions we can take now to shape our future.
From here to AI
Our first step is to survey the evidence concerning whether we should expect the creation of AI within a century.
By "AI," we refer to "systems which match or exceed the cognitive performance of humans in virtually all domains of interest" (Shulman & Bostrom 2011). On this definition, IBM's Jeopardy!-playing Watson computer is not an AI but merely a "narrow AI" because it can only solve a narrow set of problems. Drop Watson in a pond or ask it to do original science, and it is helpless. Imagine instead a machine that can invent new technologies, manipulate humans with acquired social skills, and otherwise learn to navigate new environments as needed.
There are many types of AI. To name just three:
- The code of a transparent AI is written explicitly by, and largely understood by, its programmers.2
- An opaque AI is not transparent to its creators. For example, it could be, like the human brain, a messy ensemble of cognitive modules. In an AI, these modules might be written by different teams for different purposes, using different languages and approaches.
- A whole brain emulation (WBE) is a computer emulation of the brain structures required to functionally reproduce human thought and perhaps consciousness (Sandberg and Bostrom 2008). We need not understand the detailed mechanisms of general intelligence to reproduce a brain functionally on a computing substrate.
Whole brain emulation uses the software for intelligence that evolution already invented in humans, while other forms of AI ("de novo AI") require inventing intelligence anew, to varying degrees.
When should we expect AI? Unfortunately, expert elicitation methods have not proven useful for long-term forecasting,3 and prediction markets have not yet been tested much for technological forecasting (Williams 2011), so our analysis must allow for a wide range of outcomes. We will first consider how difficult the problem seems to be, and then which inputs toward solving the problem — and which "speed bumps" — we can expect in the next century.
How hard is whole brain emulation?
Because whole brain emulation will rely mostly on scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an "engineering" problem, and the time of its arrival may thus be more predictable than for other kinds of AI.
Several authors have discussed the difficulty of WBE in detail (Sandberg and Bostrom 2008; de Garis et al. 2010; Modha et al. 2011; Cattell and Parker 2011). In short: The difficulty of WBE depends on many factors, and in particular on the resolution of emulation required for successful WBE. For example, proteome-resolution emulation will require more resources and technological development than emulation at the resolution of the brain's neural network. In perhaps the most likely scenario,
WBE on the neuronal/synaptic level requires relatively modest increases in microscopy resolution, a less trivial development of automation for scanning and image processing, a research push at the problem of inferring functional properties of neurons and synapses, and relatively business-as-usual development of computational neuroscience models and computer hardware.4
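To make the resolution issue concrete, here is a back-of-envelope sketch of why emulation resolution dominates the resource estimate (our own rough illustration; the neuron and synapse counts are standard order-of-magnitude figures, and the per-event operation count is an assumption, not a figure from Sandberg and Bostrom):

```python
# Order-of-magnitude compute estimate for neuronal/synaptic-level WBE.
# Every constant below is a coarse assumption, for illustration only.
NEURONS = 1e11               # ~10^11 neurons in a human brain
SYNAPSES_PER_NEURON = 1e3    # ~10^3-10^4 synapses per neuron
AVG_FIRING_RATE_HZ = 10      # rough average spike rate
OPS_PER_SYNAPTIC_EVENT = 10  # assumed operations per synaptic update

synapses = NEURONS * SYNAPSES_PER_NEURON
ops_per_second = synapses * AVG_FIRING_RATE_HZ * OPS_PER_SYNAPTIC_EVENT
print(f"~{ops_per_second:.0e} ops/sec at synaptic resolution")

# Proteome-resolution emulation would track many state variables per
# synapse at much finer timesteps, multiplying this figure by several
# orders of magnitude.
```

Shifting any one assumption moves the answer by orders of magnitude, which is why the required resolution, more than any single technology, dominates estimates of WBE's difficulty.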
How hard is de novo AI?
There is a vast space of possible mind designs for de novo AI; talking about "non-human intelligence" is like talking about "non-platypus animals" (Dennett 1997; Pennachin and Goertzel 2007; Yudkowsky 2008).
We do not know what it takes to build de novo AI. Because of this, we do not know what groundwork will be needed to understand general intelligence, nor how long it may take to get there.
Worse, it’s easy to think we do know. Studies show that except for weather forecasters (Murphy and Winkler 1984), nearly all of us give inaccurate probability estimates when we try, and in particular we are overconfident (Lichtenstein, Fischhoff, and Phillips 1982; Griffin and Tversky 1992; Yates et al. 2002). Experts, too, often do little better than chance (Tetlock 2005), and are outperformed by crude computer algorithms (Grove and Meehl 1996; Grove et al. 2000; Tetlock 2005). So if you have a gut feeling about when digital intelligence will arrive, it is probably wrong.
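Calibration, in the sense meant here, can be checked mechanically. A minimal sketch (ours, not from the cited studies) using the standard Brier score, the measure on which weather forecasters do unusually well:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes.
    Lower is better; always guessing 50% scores 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecasts: an overconfident judge (probabilities near 0 or 1)
# is punished more than a modest one when events don't cooperate.
outcomes      = [1, 0, 1, 1, 0]
overconfident = [0.99, 0.90, 0.95, 0.99, 0.80]
modest        = [0.70, 0.30, 0.70, 0.70, 0.40]
print(brier_score(overconfident, outcomes))  # ~0.29 (worse)
print(brier_score(modest, outcomes))         # ~0.10 (better)
```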
But uncertainty is not a “get out of prediction free” card. You either will or will not save for retirement, encourage WBE development, or support AI risk reduction. The outcomes of these choices will depend, among other things, on whether AI is created in the near future. Should you plan as though there are 50/50 odds of achieving AI in the next 30 years? Are you 99% confident we won't create AI in the next 30 years? Or is your estimate somewhere in between?
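One way to see why uncertainty is not exculpatory: whichever plan you choose has an expected payoff that depends directly on your probability estimate, so acting at all commits you to something like an estimate. A toy sketch (ours; the payoff numbers are arbitrary placeholders, not claims from this chapter):

```python
# Toy expected-value comparison of plans under AI-timeline uncertainty.
# Payoffs are arbitrary illustrative units.
def expected_value(p_ai_30y, payoff_if_ai, payoff_if_no_ai):
    return p_ai_30y * payoff_if_ai + (1 - p_ai_30y) * payoff_if_no_ai

plans = {
    "save for retirement":  {"if_ai": 0.0, "if_no_ai": 10.0},
    "support AI risk work": {"if_ai": 10.0, "if_no_ai": 2.0},
}

for p in (0.01, 0.50):  # the 99%-confident skeptic vs. 50/50 odds
    for name, payoffs in plans.items():
        ev = expected_value(p, payoffs["if_ai"], payoffs["if_no_ai"])
        print(f"P(AI within 30y) = {p:.2f}  {name}: EV = {ev:.2f}")
```

The ranking of plans flips between the two probability assignments, which is the point: your implicit estimate does decision-relevant work whether or not you state it.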
If we can't use our intuitions for prediction or defer to experts, how might we estimate the time until AI? We consider several strategies below.
[end of snippet]
Notes
1 Bainbridge 2006; Baum, Goertzel, and Goertzel 2011; Bostrom 2003; Legg 2008; Sandberg and Bostrom 2011.
3 Armstrong 1985; Woudenberg 1991; Rowe and Wright 2001. But see Anderson and Anderson-Parente (2011).
4 Sandberg and Bostrom (2008), p. 83.
References
Modha et al. (2011). "Cognitive Computing." Communications of the ACM.
Cattell and Parker (2011). "Challenges for Brain Emulation."
Schierwagen (2011). "Reverse Engineering for Biologically Inspired Cognitive Architectures: A Critical Analysis."
Floreano and Mattiussi (2008). Bio-Inspired Artificial Intelligence.
de Garis et al. (2010). "A World Survey of Artificial Brain Projects, Part I."
Nilsson (2010). The Quest for Artificial Intelligence.
Sparkes et al. (2010). "Towards Robot Scientists for Autonomous Scientific Discovery."
Turing (1950). "Computing Machinery and Intelligence."
Good (1965). "Speculations Concerning the First Ultraintelligent Machine."
Von Neumann (1966). Theory of Self-Reproducing Automata.
Solomonoff (1985). "The Time Scale of Artificial Intelligence."
Vinge (1993). "The Coming Technological Singularity."
Bostrom (2003). "Ethical Issues in Advanced Artificial Intelligence."
Yampolskiy (2011). "Leakproofing the Singularity."
Lampson (1973). "A Note on the Confinement Problem."
Yudkowsky (2008). "Artificial Intelligence as a Positive and Negative Factor in Global Risk."
Chalmers (2010). "The Singularity: A Philosophical Analysis."
Kurzweil (2005). The Singularity Is Near.
Yudkowsky (1996). "Staring into the Singularity."
Nordmann (2007). "If and Then: A Critique of Speculative NanoEthics."
Tversky and Kahneman (1974). "Judgment under Uncertainty: Heuristics and Biases."
Lucas (1961). "Minds, Machines and Gödel."
Dreyfus (1972). What Computers Can't Do.
Searle (1980). "Minds, Brains, and Programs."
Block (1981). "Psychologism and Behaviorism."
Penrose (1994). Shadows of the Mind.
Van Gelder and Port (1995). "It's About Time: An Overview of the Dynamical Approach to Cognition."
Shulman and Bostrom (2011). "How Hard Is Artificial Intelligence?"
Sandberg and Bostrom (2008). Whole Brain Emulation: A Roadmap.
Williams (2011). Prediction Markets: Theory and Applications.
Dennett (1997). Kinds of Minds.
Pennachin and Goertzel (2007). "An Overview of Contemporary Approaches to AGI."
Murphy and Winkler (1984). "Probability Forecasting in Meteorology."
Lichtenstein, Fischhoff, and Phillips (1982). "Calibration of Probabilities: The State of the Art to 1980."
Griffin and Tversky (1992). "The Weighing of Evidence and the Determinants of Confidence."
Grove and Meehl (1996). "Comparative Efficiency of Informal..."
Grove et al. (2000). "Clinical versus Mechanical Prediction: A Meta-Analysis."
Yates, Lee, Sieck, Choi, and Price (2002). "Probability Judgment across Cultures."
Tetlock (2005). Expert Political Judgment.
Bainbridge (2006). "Managing Nano-Bio-Info-Cogno Innovations: Converging Technologies..."
Baum, Goertzel, and Goertzel (2011). "How Long Until Human-Level AI?"
Legg (2008). Machine Super Intelligence.
Sandberg and Bostrom (2011). "Machine Intelligence Survey."
Sutton and Barto (1998). Reinforcement Learning: An Introduction.
Hutter (2004). Universal Artificial Intelligence.
Schmidhuber (2007). "Gödel Machines."
Dewey (2011). "Learning What to Value."
Armstrong (1985). Long-Range Forecasting: From Crystal Ball to Computer, 2nd ed.
Woudenberg (1991). "An Evaluation of Delphi."
Rowe and Wright (2001). "Expert Opinions in Forecasting."
Anderson and Anderson-Parente (2011). "A Case Study of Long-Term Delphi Accuracy."
Muehlhauser and Helm (this volume). "The Singularity and Machine Ethics."
Sandberg (this volume). "Models of Technological Singularity."
Modis (this volume). "There Will Be No Singularity."
Plebe and Perconti (this volume). "The Slowdown Hypothesis."
My feedback, much of which has been incorporated into the latest draft:
Obvious ref here to Yudkowsky's schools essay; better would be an academic link. Has his typology appeared anywhere academic yet?
p. 4
Nitpick: that sounds really weird to my ears, although I am not sure it is actually ungrammatical. I suggested deleting 'of our predictions' entirely.
Nootropics are not very impressive; I strongly suggest changing the metaphor to something involving backhoes, which is fair: as AI software is developed in various dimensions, it can be applied to the task of further output (recursively, per the later section 'Accelerated science').
Well hey, maybe my chip fab/bomber suggestion is not so useless after all.
Should mention that, as things stand, there aren't any known algorithms that would directly speed up an AI. Look at https://en.wikipedia.org/wiki/Quantum_algorithm - none of the speedups on concrete algorithms are really useful. Shor's algorithm would be useful to a rogue AI cracking security; Grover's algorithm is a nice speedup, but database lookup/list search is well optimized classically, and it's hard to imagine a quantum computer with enough qubits to search a useful list; likewise for quantum counting or element counting. Quantum simulation seems like the one exception.
Worth mentioning Kryder's law and projections outpacing Moore's law: http://www.dssc.ece.cmu.edu/research/pdfs/After_Hard_Drives.pdf Kryder, Mark H., and Chang Soo Kim (October 2009). "After Hard Drives - What Comes Next?" IEEE Transactions on Magnetics 45 (10). doi:10.1109/TMAG.2009.2024163
I tried to find the first one, but couldn't find that exact one. What I did find was
(No academic access, couldn't find it jailbroken.) Which is as good a claim, I think.
I don't think the latter really needs much referencing - just say he won a Fields Medal and move on.
Counter-argument: http://www.ncbi.nlm.nih.gov/pubmed/21826061 http://blogs.discovermagazine.com/gnxp/2011/08/half-the-variation-in-i-q-is-due-to-genes/ Intelligence is highly heritable, but it's spread over so many alleles and interactions that embryo selection gets only a few points, and one would have to edit half the genome to get to, say, 2 standard deviations above the norm.
Lots of potential citations here. All of Schmidhuber's Gödel machine papers come to mind, as does classic AI using Lisp and Smalltalk to generate and run code (e.g., Automated Mathematician and Eurisko).
May I suggest a Churchill quote I recently found? The final line, specifically:
"So now the Admiralty wireless whispers through the ether to the tall masts of ships, and captains pace their decks absorbed in thought. It is nothing. It is less than nothing. It is too foolish, too fantastic to be thought of in the twentieth century. Or is it fire and murder leaping out of the darkness at our throats, torpedoes ripping the bellies of half-awakened ships, a sunrise on a vanished naval supremacy, and an island well-guarded hitherto, at last defenceless? No, it is nothing. No one would do such things. Civilization has climbed above such perils. The interdependence of nations in trade and traffic, the sense of public law, the Hague Convention, Liberal principles, the Labour Party, high finance, Christian charity, common sense have rendered such nightmares impossible. Are you quite sure? It would be a pity to be wrong. Such a mistake could only be made once—once for all."
--Winston Churchill, 1923, recalling the possibility of war between France and Germany after the Agadir Crisis of 1911, in The World Crisis, vol. 1, 1911-1914, pp. 48-49