I invite your feedback on this snippet from the forthcoming Friendly AI FAQ. This one is an answer to the question "What is the Singularity?"
_____
There are many types of mathematical and physical singularities, but in this FAQ we use the term 'Singularity' to refer to the technological singularity.
There are also many things someone might have in mind when they refer to a 'technological Singularity' (Sandberg 2010). Below, we’ll explain just three of them (Yudkowsky 2007):
- Intelligence explosion
- Event horizon
- Accelerating change
Intelligence explosion
Every year, computers surpass human abilities in new ways. A program written in 1956 was able to prove mathematical theorems, and found a more elegant proof for one of them than Russell and Whitehead had given in Principia Mathematica (MacKenzie 1995). By the late 1990s, 'expert systems' had surpassed human skill for a wide range of tasks (Nilsson 2009). In 1997, IBM's Deep Blue computer beat the world chess champion (Campbell et al. 2002), and in 2011 IBM's Watson computer beat the best human players at a much more complicated game: Jeopardy! (Markoff 2011). Recently, a robot named Adam was programmed with our scientific knowledge about yeast, then posed its own hypotheses, tested them, and assessed the results (King et al. 2009; King 2011).
Computers remain far short of human intelligence, but the resources that aid AI design are accumulating (including hardware, large datasets, neuroscience knowledge, and AI theory). We may one day design a machine that surpasses human skill at designing artificial intelligences. After that, this machine could improve its own intelligence faster and better than humans can, which would make it even more skilled at improving its own intelligence. This could continue in a positive feedback loop such that the machine quickly becomes vastly more intelligent than the smartest human being on Earth: an 'intelligence explosion' resulting in a machine superintelligence (Good 1965).
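The feedback loop described above can be illustrated with a toy numerical sketch. This is purely illustrative and not from the FAQ: the improvement rule and the 10% rate are arbitrary assumptions chosen to show the qualitative shape of Good's argument, not a claim about real AI systems.

```python
def intelligence_explosion(initial=1.0, improvement_rate=0.1, steps=20):
    """Toy model of recursive self-improvement.

    Each round, the machine improves its own design in proportion to its
    current intelligence: i_{n+1} = i_n * (1 + improvement_rate * i_n).
    Because the per-step gain (improvement_rate * i_n**2) itself grows as
    i_n grows, the trajectory is faster than exponential.
    """
    i = initial
    trajectory = [i]
    for _ in range(steps):
        i = i * (1 + improvement_rate * i)
        trajectory.append(i)
    return trajectory

trajectory = intelligence_explosion()
```

Under these (arbitrary) assumptions the model crawls along near its starting level for the first dozen steps, then blows up: each round's gain feeds the next round's capability, which is the "positive feedback loop" in Good's scenario.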
Event horizon
Vernor Vinge (1993) wrote that the arrival of machine superintelligence represents an 'event horizon' beyond which humans cannot model the future, because events after the Singularity will be stranger than science fiction: too weird for human minds to predict. All social and technological progress so far has resulted from human brains, but humans cannot predict what radically different and more powerful future intelligences will create. Vinge drew an analogy to the event horizon of a black hole, the boundary beyond which physics loses its predictive power as it approaches the gravitational singularity.
Accelerating change
A third concept of technological singularity refers to accelerating change in technological development.
Ray Kurzweil (2005) has done the most to promote this idea. He suggests that while we intuitively expect technological change to be linear, progress in information technology is in fact exponential, and so the future will be more different than most of us expect. Technological progress enables even faster technological progress. Kurzweil suggests that progress may eventually become so fast that humans cannot keep up unless they amplify their own intelligence by integrating themselves with machines.
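The gap between linear intuition and exponential reality can be made concrete with a small calculation. The two-year doubling time below is an illustrative assumption (roughly Moore's-law-like), not Kurzweil's exact figure, and the units are arbitrary "capability" multiples of today's level.

```python
def linear_projection(years, annual_gain=1.0):
    """Capability if technology improved by a fixed amount each year."""
    return 1.0 + annual_gain * years

def exponential_projection(years, doubling_time=2.0):
    """Capability if technology doubled every `doubling_time` years."""
    return 2.0 ** (years / doubling_time)

for years in (10, 20, 30):
    print(years, linear_projection(years), exponential_projection(years))
```

Over 30 years the linear projection reaches 31x today's level, while the exponential one reaches 2^15 = 32,768x. Someone extrapolating linearly from recent experience would underestimate the exponential outcome by three orders of magnitude, which is the core of Kurzweil's "future will be more different than we expect" point.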
Sources will be provided in the final document. Current list of sources for all sections of the FAI FAQ is below:
Allen, Varner, & Zinser (2000). Prolegomena to any future artificial moral agent. Journal of Experimental & Theoretical Artificial Intelligence, 12: 251-261.
Allen (2002). Calculated morality: Ethical computing in the limit. In I. Smit & G. Lasker (eds.), Cognitive, emotive and ethical aspects of decision making and human action, vol I. Baden/IIAS.
Allhoff, Lin, & Moore (2010). What is nanotechnology and why does it matter? Wiley-Blackwell.
Anderson & Anderson, eds. (2006). IEEE Intelligent Systems, 21(4).
Anderson & Anderson, eds. (2011). Machine Ethics. Cambridge University Press.
Azevedo, Carvalho, Grinberg, et al. (2009). Equal numbers of neuronal and nonneuronal cells make the human brain an isometrically scaled-up primate brain. The Journal of Comparative Neurology, 513: 532-41.
Bainbridge (2005). Survey of NBIC Applications. In Bainbridge & Roco (eds.), Managing nano-bio-info-cogno innovations: Converging technologies in society. Springer.
Barch & Carter (2005). Amphetamine improves cognitive function in medicated individuals with schizophrenia and in healthy volunteers. Schizophrenia Research, 77: 43–58.
Baum, Goertzel, & Goertzel (2011). How Long Until Human-Level AI? Results from an Expert Assessment. Technological Forecasting & Social Change, 78: 185-195.
Block (1981). Psychologism and behaviorism. Philosophical Review, 90: 5–43.
Bostrom (1998). How long before superintelligence? International Journal of Future Studies, 2.
Bostrom (2003). Ethical issues in advanced artificial intelligence. In Smit, Lasker & Wallach (eds.), Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence, vol II. IIAS, Windsor.
Bostrom & Ćirković (2008). Global Catastrophic Risks. Oxford University Press.
Bostrom & Sandberg (2009). Cognitive enhancement: Methods, ethics, regulatory challenges. Science and Engineering Ethics, 15: 311–334.
Bostrom & Yudkowsky (2011). The ethics of artificial intelligence. In Ramsey & Frankish (eds.), The Cambridge Handbook of Artificial Intelligence.
Bruner, Shapiro, & Tagiuri (1958). The Meaning of Traits in Isolation and in Combination. In Tagiuri & Petrullo (eds.), Person Perception and Interpersonal Behavior. Stanford University Press.
Butler (1863). Darwin among the machines. The Press (Christchurch, New Zealand), June 13.
Caldwell, Caldwell, et al. (2000). A double-blind, placebo-controlled investigation of the efficacy of modafinil for sustaining the alertness and performance of aviators: A helicopter simulator study. Psychopharmacology (Berlin), 150: 272–282.
Campbell (1932). The Last Evolution. Amazing Stories.
Campbell, Hoane, & Hsu (2002). Deep Blue. Artificial Intelligence, 134: 57-83.
Capurro, Hausmanninger, Weber, Weil, Cerqui, Weber, & Weber (2006). International Review of Information Ethics, Vol. 6: Ethics in Robots.
Chico, Benedict, Louie, & Cohen (1996). Quantum conductance of carbon nanotubes with defects. Physical Review B, 54: 2600-2606.
Clarke (1968). The mind of the machine. Playboy, December 1968.
Daley & Onwuegbuzie (2011). Race and intelligence. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 293-307). Cambridge University Press.
Danielson (1992). Artificial morality: Virtuous robots for virtual games. Routledge.
Davidson & Kemp (2011). Contemporary models of intelligence. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 58-83). Cambridge University Press.
Drexler (1987). Engines of Creation. Anchor.
Dreyfus (1972). What Computers Can’t Do. Harper & Row.
Eden, Soraker, Moor, & Steinhart, eds. (2012). The Singularity Hypothesis: A Scientific and Philosophical Assessment. Springer.
Finke, Dodds, et al. (2010). Effects of modafinil and methylphenidate on visual attention capacity: a TVA-based study. Psychopharmacology, 210: 317-329.
Floridi & Sanders (2004). On the morality of artificial agents. Minds and Machines, 14: 349-379.
Gibbs & D’Esposito (2005). Individual capacity differences predict working memory performance and prefrontal activity following dopamine receptor stimulation. Cognitive & Affective Behavioral Neuroscience, 5: 212–221.
Gill, Haerich, et al. (2006). Cognitive performance following modafinil versus placebo in sleep-deprived emergency physicians: A double-blind randomized crossover study. Academic Emergency Medicine, 13: 158–165.
Gilovich, Griffith, & Kahneman, eds. (2002). Heuristics and Biases: The Psychology of Human Judgment. Cambridge University Press.
Glimcher (2010). Foundations of Neuroeconomic Analysis. Oxford University Press.
Gödel (1931). Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I. Monatshefte für Mathematik und Physik, 38: 173–198.
Goertzel & Pennachin (2007). Artificial General Intelligence. Springer.
Good (1965). Speculations concerning the first ultraintelligent machine. Advances in Computers, 6: 31-88.
Graimann, Allison, & Pfurtscheller (2011). Brain-Computer Interfaces: Revolutionizing Human-Computer Interaction. Springer.
Gron, Kirstein, et al. (2005). Cholinergic enhancement of episodic memory in healthy young adults. Psychopharmacology (Berlin), 182: 170–179.
Hall (2000). Ethics for machines.
Halpern, Beninger, & Straight (2011). Sex differences in intelligence. In Sternberg & Kaufman (eds.), The Cambridge Handbook of Intelligence (pp. 253-272). Cambridge University Press.
Hanson (1994). If uploads come first: The crack of a future dawn. Extropy, 6:2.
Hanson (2008). Economics of the singularity. IEEE Spectrum, June: 37‐42.
Hochberg, Serruya, Friehs, Mukand, Saleh, Caplan, Branner, Chen, Penn, & Donoghue (2006). Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature, 442: 164-171.
Johnston (2004). Healthy, wealthy and wise? A review of the wider benefits of education. Report 04/
Kandel, Schwartz, and Jessell (2000). Principles of neural science, 4th edition. McGraw-Hill.
Kimberg, D’Esposito, & Farah (1997). Effects of bromocriptine on human subjects depend on working memory capacity. Neuroreport, 8: 3581–3585.
Kimberg, Aguirre, et al. (2001). Cortical effects of bromocriptine, a D-2 dopamine receptor agonist, in human subjects, revealed by fMRI. Human Brain Mapping, 12: 246–257.
Kimberg & D’Espo...