Humans may create human-level artificial intelligence in this century.1
This sentence says so little. "Many experts believe humans may create human-level artificial intelligence as early as this century."
Shortly thereafter, we may see an “intelligence explosion” — a chain of events by which human-level AI
Consider defining "AI" here, at its first usage.
As Chalmers (2010) notes, the singularity is of great practical interest:
This is the first time in this snippet that a singularity is mentioned, so it should be defined (unless the entire book is about the subject, etc.). Also, "a singularity" sounds better than "the singularity."
"As Chalmers (2010) notes, a singularity caused by an intelligence explosion would be of great practical interest:
"the singularity is of great practical interest," is dark arts, "the singularity would be of great practical interest," "'the singularity' is of great practical interest."
long-standing philosophical puzzles
This is dark arts. "Puzzles" implicitly have solutions, not that I don't believe these things do.
"In science, the development of AI will require progress in several of mankind's grandest scientific projects, including reverse-engineering the brain (Schierwagen 2011)"
No one knows what science doesn't know. Maybe it won't. This is well put: "while the development of AI safety mechanisms may require progress..."
"In science, experts believe the development of AI would require progress in several of mankind's grandest scientific projects, including reverse-engineering the brain (Schierwagen 2011)"
and developing artificial minds (Nilsson 2010)
This seems like a tautology. What is meant specifically here?
and the cognitive science of human values
or the cognitive science of human values
most science would be done
This is a complicated, unspecific claim. Would it be correct to say "most design of scientific experiments," or something specific?
Our aim, then, is not to provide detailed arguments but only to sketch the issues involved, pointing the reader to authors who have analyzed each component in more detail.
Unwieldy.
our discussion of them must be permitted to begin
"our discussion of them begins"
Because the singularity is associated
"Because 'the singularity' is associated"
In doing so,
"Were we to do so,"
For example
It seems like "example" refers back to the fallacy, as if you were giving an example of committing the fallacy.
"Therefore, among other things"
rather than decelerate
By not saying "rather than decelerate or remain unchanged," or similar, you weakly imply you are assuming it will decelerate. It's not clear that if this is so, it is to invoke LCPW on yourself.
is one of them.
"is one such convergent outcome."
are not objections to the singularity.
"a singularity."
human-level AI (hereafter, "AI")
"human-level general artificial intelligence (hereafter, 'AI')"
consider which actions we can take now to shape our future.
I don't like this because of its literal meaninglessness. I can see why others might not mind. Most actions shape the relevant future in some way; what you really mean is "consider which actions we should take now to shape our future," or "consider how actions taken now would shape our future."
concerning whether we should expect the creation of AI within a century.
"concerning when we should expect the first creation of AI."
is not an AI but merely a "narrow AI"
"is not an AI but merely a 'narrow artificial intelligence,'" recall "human-level AI (hereafter, 'AI')"
By "AI," we refer to
There should probably be only one defining sentence. At the very least this should be changed to "By 'human-level artificial intelligence,' we refer to," since you already defined AI in terms of human-level artificial intelligence.
There are many types of AI.
"There are many different proposed methods of creating AIs."
Because whole brain emulation will rely mostly on scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an "engineering" problem, and thus more predictable than other kinds of AI.
Only the date after which it can be expected to have occurred without new technologies is much more predictable. The earliest it can happen isn't too much more so.
"Because whole brain emulation seems possible by scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an "engineering" problem, and thus the latest it can be expected to have occurred by may be more predictable than the latest other kinds of AI can be expected to have occurred by."
We do not know what it takes to build de novo AI. Because of this, we do not know what groundwork will be needed to understand general intelligence
This is only true for transparent AI. Recall we may create opaque AI before understanding general intelligence; if we did, we still might not know what groundwork would be needed.
overconfident of our predictions
"overconfident in" is more common and sounds slightly better.
So if you have a gut feeling about when digital intelligence will arrive, it is probably wrong.
I don't like this sentence. Perhaps:
"In line with these studies, experts in fields related to AI disagree about when it might first be created [citations]. So if you have a strong opinion, even an expert and informed one, about when digital intelligence will or won't arrive before, bear in mind that most people with similar expertise and conviction are wrong."
Should you plan as though...confident...Or is your estimate
These are not parallel. "Estimate" and "confident" are usually used to mean a raw, system 1 mental output.
You either will or will not...encourage WBE development...The outcomes of these choices will depend, among other things, on whether AI is created in the near future.
It is confusing to discuss the outcome of encouraging something as depending on whether or not it is created. Perhaps something like: "The future value of your current choices will depend, among other things, on whether or not AI is created in the near future. In the face of uncertainty, you must choose whether or not to encourage WBE development, support AI risk reduction, etc. Even personal decisions, such as how much to save for retirement, depend on what you expect the future will look like."
If we can't use our intuitions for prediction or defer to experts how might one estimate the time until AI?
You went from "we" to "one" within that sentence. ~~~~~~~~~~~~~~~
Why the focus on this century? Is this century a theme uniting the book this will be a chapter in?
My feedback, much of which has been incorporated into the latest draft:
Moreover, because the "singularity" term is popularly associated with several claims and approaches we will not defend, we must explain what we are not claiming.
Obvious ref here to Yudkowsky's schools essay; better would be an academic link. Has his typology appeared anywhere academic yet?
pg 4
overconfident of our predictions
Nitpick: that sounds really weird to my ears, although I am not sure it is actually ungrammatical. I suggested deleting 'of our predictions' entirely.
Estimating progress in scientific research output. Imagine a man digging a ten-kilometer ditch. If he digs 100 meters in one day, you might predict the ditch will be finished in 100 days. But what if 20 more diggers join him, and they are all given amphetamines? Now the ditch might not take so long.
Nootropics are not very impressive; strongly suggest changing the metaphor to something involving backhoes, which is fair: as AI software is developed in various dimensions, it can be applied to the task of further output (recursively, per the later section 'Accelerated science').
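To make the compounding point concrete, here is a toy Python sketch (all numbers made up): a fixed-rate digger versus one whose daily capacity grows a little each day, standing in for output being reinvested in better tools.

def days_to_finish(total_m=10_000, rate_m_per_day=100, growth_per_day=0.0):
    """Days until the ditch is done if daily capacity compounds at growth_per_day."""
    dug, days = 0.0, 0
    while dug < total_m:
        dug += rate_m_per_day
        rate_m_per_day *= 1 + growth_per_day
        days += 1
    return days

print(days_to_finish())                     # fixed rate: 100 days
print(days_to_finish(growth_per_day=0.10))  # capacity compounds 10%/day: 26 days

Even a modest compounding rate moves the finish date far more than adding a fixed number of diggers would.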
Several factors may decelerate our progress toward the first creation of AI. For example:...Global totalitarianism. [paragraph]
Well hey, maybe my chip fab/bomber suggestion is not so useless after all.
Quantum computing may also emerge during this period. Early worries that quantum computing may not be feasible have been overcome, but it is hard to predict whether quantum computing will contribute significantly to the development of digital intelligence because progress in quantum computing depends heavily on unpredictable insights in quantum algorithms (Rieffel and Polak 2011).
Should mention that, as things stand, there aren't any known quantum algorithms that would directly speed up an AI. Look at https://en.wikipedia.org/wiki/Quantum_algorithm: none of the speedups on concrete algorithms are really useful. Shor's algorithm would be useful to a rogue AI cracking security; Grover's algorithm is a nice speedup, but database lookup/list search is well optimized classically, it's hard to imagine a quantum computer with enough qubits to search a useful list, and the same goes for quantum counting and element counting. Quantum simulation seems like the one exception.
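For concreteness, a back-of-the-envelope comparison of query counts for unstructured search (this is arithmetic, not a simulation, and assumes an idealized fault-tolerant machine): the quadratic Grover gain is real but modest next to what classical optimization and sheer qubit requirements imply.

import math

# ~N/2 classical queries on average vs. ~(pi/4)*sqrt(N) Grover iterations
for N in (10**6, 10**9, 10**12):
    classical = N / 2
    grover = (math.pi / 4) * math.sqrt(N)
    print(f"N={N:.0e}  classical~{classical:.1e}  Grover~{grover:.1e}  speedup~{classical / grover:,.0f}x")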
Datasets are expected to increase greatly in the coming decades.
Worth mentioning Kryder's law and projections outpacing Moore's law: http://www.dssc.ece.cmu.edu/research/pdfs/After_Hard_Drives.pdf Kryder, Mark H.; Chang Soo Kim (October 2009). "After Hard Drives - What Comes Next?" (PDF). IEEE Transactions on Magnetics 45 (10). doi:10.1109/TMAG.2009.2024163
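As a quick illustration of what a sustained growth rate implies (using the ~40%/yr areal-density figure projected in Kryder & Kim 2009; the exact rate is of course an assumption):

rate = 0.40  # assumed annual growth in storage density
for years in (10, 20):
    print(f"{years} years at {rate:.0%}/yr -> ~{(1 + rate) ** years:,.0f}x")
# ~29x in 10 years, ~837x in 20; doubling time about 2.1 years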
At age 8, Terrence Tao scored 760 on the math SAT, one of only [2?3?] children ever to do this at such an age; he later went on to [have a lot of impact on math].
I tried to find the first one, but couldn't find that exact one. What I did find was
... INTRODUCTION Terence Tao is the oldest of three children. ... Terry was the first eight year old ever to score 760 (out of a possible 800) on the SAT-M. Only 1% of college bound 17 and 18 year olds in the United States attain a score of 750 or more. ... http://www.springerlink.com/content/gr62714555714348/
(No academic access, couldn't find it jailbroken.) Which is as good a claim, I think.
I don't think the latter really needs much referencing - just say he won a Fields Medal and move on.
will soon make it feasible to compare the characteristics of an entire population of adults with those adults’ full genomes, and, thereby, to unravel the heritable components of intelligence, diligence, and other contributors to scientific achievement.
Counter-argument: http://www.ncbi.nlm.nih.gov/pubmed/21826061 http://blogs.discovermagazine.com/gnxp/2011/08/half-the-variation-in-i-q-is-due-to-genes/ Intelligence is highly heritable, but it's spread over so many alleles and interactions that embryo selection gets only a few points, and one would have to edit half the genome to get to, say, 2 standard deviations above the norm.
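A toy Monte Carlo makes the "only a few points" claim concrete. It assumes a polygenic score explaining ~5% of IQ variance (an assumed figure) and ignores the fact that embryos are siblings, which if anything overstates the gain:

import random, statistics

def selection_gain(n_embryos=10, r2=0.05, sd_iq=15, trials=20_000):
    """Mean IQ (relative to a population mean of 0) of the embryo with the best polygenic score."""
    picked = []
    for _ in range(trials):
        embryos = []
        for _ in range(n_embryos):
            score = random.gauss(0, 1)  # standardized polygenic score
            iq = sd_iq * ((r2 ** 0.5) * score + ((1 - r2) ** 0.5) * random.gauss(0, 1))
            embryos.append((score, iq))
        best_score, best_iq = max(embryos)  # pick the embryo with the highest score
        picked.append(best_iq)
    return statistics.mean(picked)

print(round(selection_gain(), 1))  # roughly +5 IQ points even under these generous assumptions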
The digital intelligence could conceivably edit its own code while it is running, but it could also create a new intelligence that runs independently.
Lots of potential citations here. All of Schmidhuber's Gödel machine papers come to mind, as does classic AI work using Lisp and Smalltalk to generate and run code (e.g. Automated Mathematician and Eurisko).
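For a flavor of the "edit its own code while running" option, a minimal (and very un-Gödel-machine-like) Python sketch: the program generates candidate replacements for one of its own functions, exec()s them, and keeps whichever scores best on a fixed benchmark.

def score(fn):
    # benchmark: how well fn approximates squaring on a few test points (0 is perfect)
    return -sum((fn(x) - x * x) ** 2 for x in range(10))

current_src = "def f(x):\n    return x + x"   # deliberately poor starting program
namespace = {}
exec(current_src, namespace)
best_fn, best_score = namespace["f"], score(namespace["f"])

for candidate_src in ("def f(x):\n    return x * x",
                      "def f(x):\n    return 2 * x + 1"):
    ns = {}
    exec(candidate_src, ns)
    if score(ns["f"]) > best_score:
        best_fn, best_score, current_src = ns["f"], score(ns["f"]), candidate_src

print(current_src)  # the program has replaced its own f with the better variant

Creating "a new intelligence that runs independently" would amount to writing the improved source out and launching it as a separate process instead of rebinding it in place.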
[concluding paragraph]
May I suggest a Churchill quote I recently found? The final line, specifically:
"So now the Admiralty wireless whispers through the ether to the tall masts of ships, and captains pace their decks absorbed in thought. It is nothing. It is less than nothing. It is too foolish, too fantastic to be thought of in the twentieth century. Or is it fire and murder leaping out of the darkness at our throats, torpedoes ripping the bellies of half-awakened ships, a sunrise on a vanished naval supremacy, and an island well-guarded hitherto, at last defenceless? No, it is nothing. No one would do such things. Civilization has climbed above such perils. The interdependence of nations in trade and traffic, the sense of public law, the Hague Convention, Liberal principles, the Labour Party, high finance, Christian charity, common sense have rendered such nightmares impossible. Are you quite sure? It would be a pity to be wrong. Such a mistake could only be made once—once for all."
--Winston Churchill, 1923, recalling the possibility of war between France and Germany after the Agadir Crisis of 1911, in The World Crisis, vol. 1, 1911-1914, pp. 48-49
This feels like a major improvement over the previous version. (Also, you revealed where this will be published, which makes this article even more awesome.)
Didn't mean to keep it a secret. For everyone else, this is being written for The Singularity Hypothesis from Springer's Frontiers Collection.
Second, we will not assume that human intelligence is realized by a classical computation system, nor that intelligent machines will have internal mental properties like consciousness or "understanding." Such factors are mostly irrelevant to the occurrence of a singularity, so objections to these claims (Lucas 1961; Dreyfus 1972; Searle 1980; Block 1981; Penrose 1994; Van Gelder and Port 1995) are not objections to the singularity.
Zing.
There are many types of AI. To name just three:
Why these three? I mean, whole brain emulation AIs look like a pretty strict subset of 'opaque' AIs.
Williams 2011 prediction markets theory and applications
Does that cover the long-term prediction problems with prediction markets like no interest on deposits?
The creation of AI would also revolutionize scientific method, as most science would be done by intelligent machines (Sparkes et al. 2010).
typo
prediction markets have not yet been tested much for technological forecasting (Williams 2011)
And a priori, they would seem unlikely to be reliable given that technological events would affect how valuable the payments are. Trivial example: I would bet against the development of UFAI at any odds, despite not being at all close to sure that UFAI will not be developed.
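The no-interest problem can be put in numbers (rates here are assumptions for illustration): a contract paying $1 if some event happens in 20 years, with the stake locked up at 0% interest, has to be cheaper than just holding cash at an outside rate before buying it is rational, even for a sure thing.

def breakeven_price(years=20, outside_rate=0.04):
    # highest price at which a certain $1-in-`years` contract beats cash compounding at outside_rate
    return 1 / (1 + outside_rate) ** years

print(round(breakeven_price(), 3))  # ~0.456: a 'certain' 20-year event can rationally trade below 46 cents

So a long-dated contract's price reflects discounting and opportunity cost at least as much as probability.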
We believe these matters are important, and our discussion of them must be permitted to begin at a low level because there is no other place to lay the first stones.
This sounded odd/unnecessary/out of place to me. I think it flows just fine if you omit this sentence.
I've been writing a new draft of the intelligence explosion analysis I'm writing with Anna Salamon. I've incorporated much of the feedback LWers have given me, and will now present snippets of the new draft for feedback. Please ignore the formatting issues caused by moving the text from Google Docs to Less Wrong.
_____
Intelligence Explosion: Evidence and Import
Anna Salamon
Luke Muehlhauser
The best answer to the question, "Will computers ever be as smart as humans?" is probably “Yes, but only briefly."
Vernor Vinge
Humans may create human-level artificial intelligence in this century.1 Shortly thereafter, we may see an “intelligence explosion” — a chain of events by which human-level AI leads, fairly rapidly, to intelligent systems whose capabilities far surpass those of biological humanity as a whole.
How likely is this, and what should we do about it? Others have discussed these questions previously (Turing 1950; Good 1965; Von Neumann 1966; Solomonoff 1985; Vinge 1993; Bostrom 2003; Yudkowsky 2008; Chalmers 2010), but no brief, systematic review of the relevant issues has been published. In this chapter we aim to provide such a review.
Why study intelligence explosion?
As Chalmers (2010) notes, the singularity is of great practical interest:
If there is a singularity, it will be one of the most important events in the history of the planet. An intelligence explosion has enormous potential benefits: a cure for all known diseases, an end to poverty, extraordinary scientific advances, and much more. It also has enormous potential dangers: an end to the human race, an arms race of warring machines, the power to destroy the planet...
The singularity is also a challenging scientific and philosophical topic. Under the spectre of intelligence explosion, long-standing philosophical puzzles about values, other minds, and personal identity become, as Chalmers puts it, "life-or-death questions that may confront us in coming decades or centuries." In science, the development of AI will require progress in several of mankind's grandest scientific projects, including reverse-engineering the brain (Schierwagen 2011) and developing artificial minds (Nilsson 2010), while the development of AI safety mechanisms may require progress on the confinement problem in computer science (Lampson 1973; Yampolskiy 2011) and the cognitive science of human values (Muehlhauser and Helm, this volume). The creation of AI would also revolutionize scientific method, as most science would be done by intelligent machines (Sparkes et al. 2010).
Such questions are complicated, the future is uncertain, and our chapter is brief. Our aim, then, is not to provide detailed arguments but only to sketch the issues involved, pointing the reader to authors who have analyzed each component in more detail. We believe these matters are important, and our discussion of them must be permitted to begin at a low level because there is no other place to lay the first stones.
What we will (not) argue
"Technological singularity" has come to mean many things (Sandberg, this volume), including: accelerating technological change (Kurzweil 2005), a limit in our ability to predict the future (Vinge 1993), and the topic we will discuss: an intelligence explosion leading to the creation of machine superintelligence (Yudkowsky 1996). Because the singularity is associated with such a variety of views and arguments, we must clarify what this chapter will and will not argue.
First, we will not tell detailed stories about the future. In doing so, we would likely commit the “if and then” fallacy, by which an improbable conditional becomes a supposed actual (Nordmann 2007). For example, we will not assume the continuation of Moore’s law, nor that hardware trajectories determine software progress, nor that technological trends will be exponential rather than logistic (see Modis, this volume), nor indeed that progress will accelerate rather than decelerate (see Plebe and Perconti, this volume). Instead, we will examine convergent outcomes that — like the evolution of eyes or the emergence of markets — can come about through many different paths and can gather momentum once they begin. Humans tend to underestimate the likelihood of such convergent outcomes (Tversky and Kahneman 1974), and we believe intelligence explosion is one of them.
Second, we will not assume that human intelligence is realized by a classical computation system, nor that intelligent machines will have internal mental properties like consciousness or "understanding." Such factors are mostly irrelevant to the occurrence of a singularity, so objections to these claims (Lucas 1961; Dreyfus 1972; Searle 1980; Block 1981; Penrose 1994; Van Gelder and Port 1995) are not objections to the singularity.
What, then, will we argue? First, we suggest there is a significant probability we will create human-level AI (hereafter, "AI") within a century. Second, we suggest that AI is likely to lead rather quickly to machine superintelligence. Finally, we discuss the possible consequences of machine superintelligence and consider which actions we can take now to shape our future.
From here to AI
Our first step is to survey the evidence concerning whether we should expect the creation of AI within a century.
By "AI," we refer to "systems which match or exceed the cognitive performance of humans in virtually all domains of interest" (Shulman & Bostrom 2011). On this definition, IBM's Jeopardy!-playing Watson computer is not an AI but merely a "narrow AI" because it can only solve a narrow set of problems. Drop Watson in a pond or ask it to do original science, and it is helpless. Imagine instead a machine that can invent new technologies, manipulate humans with acquired social skills, and otherwise learn to navigate new environments as needed.
There are many types of AI. To name just three:
Whole brain emulation uses the human software for intelligence already invented by evolution, while other forms of AI ("de novo AI") require inventing intelligence anew, to varying degrees.
When should we expect AI? Unfortunately, expert elicitation methods have not proven useful for long-term forecasting,3 and prediction markets have not yet been tested much for technological forecasting (Williams 2011), so our analysis must allow for a wide range of outcomes. We will first consider how difficult the problem seems to be, and then which inputs toward solving the problem — and which "speed bumps" — we can expect in the next century.
How hard is whole brain emulation?
Because whole brain emulation will rely mostly on scaling up existing technologies like microscopy and large-scale cortical simulation, WBE may be largely an "engineering" problem, and thus more predictable than other kinds of AI.
Several authors have discussed the difficulty of WBE in detail (Sandberg and Bostrom 2008; de Garis et al. 2010; Modha et al. 2011; Cattell & Parker 2011). In short: The difficulty of WBE depends on many factors, and in particular on the resolution of emulation required for successful WBE. For example, proteome-resolution emulation will require more resources and technological development than emulation at the resolution of the brain's neural network. In perhaps the most likely scenario,
WBE on the neuronal/synaptic level requires relatively modest increases in microscopy resolution, a less trivial development of automation for scanning and image processing, a research push at the problem of inferring functional properties of neurons and synapses, and relatively business‐as‐usual development of computational neuroscience models and computer hardware.4
How hard is de novo AI?
There is a vast space of possible mind designs for de novo AI; talking about "non-human intelligence" is like talking about "non-platypus animals" (Dennett 1997; Pennachin and Goertzel 2007; Yudkowsky 2008).
We do not know what it takes to build de novo AI. Because of this, we do not know what groundwork will be needed to understand general intelligence, nor how long it may take to get there.
Worse, it’s easy to think we do know. Studies show that except for weather forecasters (Murphy and Winkler 1984), nearly all of us give inaccurate probability estimates when we try, and in particular we are overconfident of our predictions (Lichtenstein, Fischoff, and Phillips 1982; Griffin and Tversky 1992; Yates et al. 2002). Experts, too, often do little better than chance (Tetlock 2005), and are outperformed by crude computer algorithms (Grove and Meehl 1996; Grove et al. 2000; Tetlock 2005). So if you have a gut feeling about when digital intelligence will arrive, it is probably wrong.
But uncertainty is not a “get out of prediction free” card. You either will or will not save for retirement, encourage WBE development, or support AI risk reduction. The outcomes of these choices will depend, among other things, on whether AI is created in the near future. Should you plan as though there are 50/50 odds of achieving AI in the next 30 years? Are you 99% confident we won't create AI in the next 30 years? Or is your estimate somewhere in between?
If we can't use our intuitions for prediction or defer to experts how might one estimate the time until AI? We consider several strategies below.
[end of snippet]
Notes
1 Bainbridge 2006; Baum, Goertzel, and Goertzel 2011; Bostrom 2003; Legg 2008; Sandberg and Bostrom 2011.
3 Armstrong 1985; Woudenberg 1991; Rowe and Wright 2001. But see Anderson and Anderson-Parente (2011).
4 Sandberg and Bostrom (2008), p. 83.
References
Modha et al. 2011. Cognitive computing. Communications of the ACM.
Cattell and Parker 2011. Challenges for brain emulation.
Schierwagen 2011. Reverse engineering for biologically-inspired cognitive architectures: a critical analysis.
Floreano and Mattiussi 2008. Bio-inspired artificial intelligence.
de Garis et al. 2010. A world survey of artificial brain projects, part 1.
Nilsson 2010. The quest for artificial intelligence.
Sparkes et al. 2010. Towards robot scientists for autonomous scientific discovery.
Turing 1950. Computing machinery and intelligence.
Good 1965. Speculations concerning the first ultraintelligent machine.
Von Neumann 1966. Theory of self-reproducing automata.
Solomonoff 1985. The time scale of artificial intelligence.
Vinge 1993. The coming technological singularity.
Bostrom 2003. Ethical issues in advanced artificial intelligence.
Yampolskiy 2011. Leakproofing the singularity.
Lampson 1973. A note on the confinement problem.
Yudkowsky 2008. Artificial intelligence as a positive and negative factor in global risk.
Chalmers 2010. The singularity: a philosophical analysis.
Kurzweil 2005. The singularity is near.
Yudkowsky 1996. Staring into the singularity.
Nordmann 2007. If and then: a critique of speculative nanoethics.
Tversky and Kahneman 1974. Judgment under uncertainty: heuristics and biases.
Lucas 1961. Minds, machines and Gödel.
Dreyfus 1972. What computers can't do.
Searle 1980. Minds, brains, and programs.
Block 1981. Psychologism and behaviorism.
Penrose 1994. Shadows of the mind.
Van Gelder and Port 1995. It's about time: an overview of the dynamical approach to cognition.
Shulman and Bostrom 2011. How hard is artificial intelligence?
Sandberg and Bostrom 2008. Whole brain emulation: a roadmap.
Williams 2011. Prediction markets: theory and applications.
Dennett 1997. Kinds of minds.
Pennachin and Goertzel 2007. An overview of contemporary approaches to AGI.
Murphy and Winkler 1984. Probability forecasting in meteorology.
Lichtenstein, Fischoff, and Phillips 1982. Calibration of probabilities: the state of the art to 1980.
Griffin and Tversky 1992. The weighing of evidence and the determinants of confidence.
Grove and Meehl 1996. Comparative efficiency of informal...
Grove et al. 2000. Clinical versus mechanical prediction: a meta-analysis.
Yates, Lee, Sieck, Choi, and Price 2002. Probability judgment across cultures.
Tetlock 2005. Expert political judgment.
Bainbridge 2006. Managing nano-bio-info-cogno innovations: converging technologies...
Baum, Goertzel, and Goertzel 2011. How long until human-level AI?
Legg 2008. Machine superintelligence.
Sandberg and Bostrom 2011. Machine intelligence survey.
Sutton and Barto 1998. Reinforcement learning: an introduction.
Hutter 2004. Universal artificial intelligence.
Schmidhuber 2007. Gödel machines.
Dewey 2011. Learning what to value.
Armstrong 1985. Long-range forecasting: from crystal ball to computer, 2nd edition.
Woudenberg 1991. An evaluation of Delphi.
Rowe and Wright 2001. Expert opinions in forecasting.
Anderson and Anderson-Parente 2011. A case study of long-term Delphi accuracy.
Muehlhauser and Helm, this volume: The singularity and machine ethics.
Sandberg, this volume: Models of technological singularity.
Modis, this volume: There will be no singularity.
Plebe and Perconti, this volume: The slowdown hypothesis.