Kaj Sotala, Seán ÓhÉigeartaigh and I recently submitted a paper entitled "The errors, insights and lessons of famous AI predictions and what they mean for the future" to the conference proceedings of the AGI12/AGI Impacts Winter Intelligence conference. Sharp deadlines prevented us from following the ideal procedure of first presenting it here and getting feedback; instead, we'll present it here after the fact.

The prediction classification schemas can be found in the first case study.

Note that this is very similar to this post, and is mainly reposted for completeness.

How well have the 'Spiritual Machines' aged?

  • Classification: timelines and scenarios, using expert judgement, causal models, non-causal models and (indirect) philosophical arguments.

Ray Kurzweil is a prominent and often-quoted AI predictor. One of his most important books was the 1999 "The Age of Spiritual Machines" (Kur99), which presented his futurist ideas in more detail and made several predictions for the years 2009, 2019, 2029 and 2099. That book will be the focus of this case study, ignoring his more recent work (a correct prediction made in 1999 for 2009 is much more impressive than a correct 2008 reinterpretation or clarification of that prediction). There are five main points relevant to judging "The Age of Spiritual Machines": Kurzweil's expertise, his 'Law of Accelerating Returns', his extension of Moore's law, his predictive track record, and his use of fictional imagery to argue philosophical points.

Kurzweil has had a lot of experience in the modern computer industry. He's an inventor, computer engineer, and entrepreneur, and as such can claim insider experience in the development of new computer technology. He has been directly involved in narrow AI projects covering voice recognition, text recognition and electronic trading. His fame and prominence are further indications of the allure (though not necessarily the accuracy) of his ideas. In total, Kurzweil can be regarded as an AI expert.

Kurzweil is not, however, a cosmologist or an evolutionary biologist. In his book, he proposed a 'Law of Accelerating Returns', which he claimed explains many disparate phenomena, such as the speed and trends of the evolution of life forms, the evolution of technology, the creation of computers, and Moore's law in computing. His slightly more general 'Law of Time and Chaos' extends this model to the history of the universe and the development of organisms. It is a causal model, as it aims to explain these phenomena rather than simply note the trends. Hence it is a timeline prediction, based on a causal model that uses the outside view to group these disparate phenomena together, and backed by non-expert opinion.

A literature search failed to find any evolutionary biologist or cosmologist stating their agreement with these laws. Indeed, there has been little academic work on them at all, and what work there is tends to be critical.

The laws are, however, ideal candidates for counterfactual resiliency checks. It is not hard to create counterfactuals that shift the timelines underlying the laws (see this for a more detailed version of the counterfactual resiliency check). Many standard phenomena could have delayed the evolution of life on Earth by millions or billions of years (meteor impacts, solar energy fluctuations or nearby gamma-ray bursts). The evolution of technology could similarly have been accelerated or slowed down by changes in human society and in the availability of raw materials: it is perfectly conceivable that, for instance, the ancient Greeks could have started a small industrial revolution, or that the European nations could have collapsed before the Renaissance due to a second and more virulent Black Death (or even a slightly different political structure in Italy). Population fragmentation and decline can lead to technology loss (such as the 'Tasmanian technology trap' (Riv12)). Hence accepting that a Law of Accelerating Returns determines the pace of technological and evolutionary change means rejecting many generally accepted theories of planetary dynamics, evolution and societal development. Since Kurzweil is the non-expert here, his law is almost certainly in error, and is best seen as a literary device rather than a valid scientific theory.

If the Law is restricted to being a non-causal model of current computational development, the picture is very different. Firstly, because this is much closer to Kurzweil's domain of expertise. Secondly, because the restricted law is much more resilient to counterfactuals. Just as in the analysis of Moore's law, there are few plausible counterfactuals in which humanity had continued as a technological civilization for the last fifty years but computing hadn't followed various exponential curves. Moore's law has been maintained across transitions to new and different substrates, from transistors to GPUs, so knocking away any given technology or idea seems unlikely to derail it. There is no consensus as to why Moore's law actually works, which is another reason it is so hard to break, even counterfactually.

Moore's law and its analogues (Moo65, Wal05) are non-causal models, backed up strongly by the data and resilient to reasonable counterfactuals. Kurzweil's predictions are mainly based on grouping these laws together (the outside view) and projecting them forwards into the future. This is combined with Kurzweil's claim that he can estimate how those continuing technological innovations will become integrated into society. These timeline predictions are thus based strongly on Kurzweil's expert judgement. But better than subjective impressions of expertise is Kurzweil's track record: his predictions for 2009. These give empirical evidence of his predictive quality.
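To make the projection step concrete, here is a minimal sketch (not taken from the paper) of how a non-causal exponential model can be fitted and extrapolated; all data points, years and growth figures below are illustrative stand-ins rather than real measurements.

```python
# Minimal sketch (assumed, not the paper's method): fit an exponential trend
# to a Moore's-law-style series and project it forward.
# All data points below are illustrative placeholders, not real measurements.
import numpy as np

years = np.array([1971, 1975, 1980, 1985, 1990, 1995, 2000])
counts = np.array([2.3e3, 6.5e3, 5.5e4, 2.8e5, 1.2e6, 5.5e6, 4.2e7])

# A log-linear least-squares fit: a straight line in log(count) vs. year
# corresponds to exponential growth with a fixed doubling time.
slope, intercept = np.polyfit(years, np.log(counts), 1)
doubling_time = np.log(2) / slope

def project(year: int) -> float:
    """Extrapolate the fitted exponential trend to a given year."""
    return float(np.exp(intercept + slope * year))

print(f"Implied doubling time: {doubling_time:.1f} years")
print(f"Projected value for 2010: {project(2010):.2e}")
```

The point is only that such a fit describes the trend without explaining it; whether the extrapolation continues to hold is exactly what the track-record analysis below examines.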

Initial assessments suggested that Kurzweil had a success rate of around 50%. A panel of nine volunteers was recruited to give independent assessments of Kurzweil's performance. Kurzweil's predictions were broken into 172 individual statements, and the volunteers were each given a randomised list of the numbers from 1 to 172, with instructions to work their way down the list in that order, assessing each prediction as best they could. Since 2009 was obviously a 'ten years from 1999' gimmick, there was some flexibility on the date: a prediction was judged true if it was true by 2011. Emphasis was placed on the fact that the predictions had to be useful to a person in 1999 planning their future, not simply impressive to a person in 2009 looking back at the past.

In total, 531 assessments were made, an average of exactly 59 per volunteer. Each volunteer assessed at least 10 predictions, while one volunteer assessed all 172. Of the assessments, 146 (27%) were judged true, 82 (15%) weakly true, 73 (14%) weakly false, 172 (32%) false, and 58 (11%) could not be classified (see Figure). The results change little (≈±1%) if they are computed for each volunteer separately and then averaged. Simultaneously, a separate assessment was made using volunteers on the site Youtopia. These volunteers found a much higher failure rate (41% false, 16% weakly false), but since that experiment was neither blinded nor randomised, it is less rigorous.

[Figure: distribution of the 531 assessments across the five categories.]
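As a rough illustration of the aggregation arithmetic (assumed, not the paper's actual code, and with invented placeholder verdicts rather than the real 531 assessments), the difference between pooling all assessments and averaging per-volunteer rates can be computed like this:

```python
# Minimal sketch (assumed): pooled assessment rates vs. per-volunteer averages.
# The verdict lists are invented placeholders, not the real assessments.
from collections import Counter

CATEGORIES = ["true", "weakly true", "weakly false", "false", "unclassifiable"]

# One list of verdicts per volunteer (placeholder data).
volunteer_verdicts = [
    ["true", "false", "weakly true", "false", "true"],
    ["false", "false", "true", "weakly false", "unclassifiable"],
    ["true", "weakly true", "false", "true", "false", "weakly false"],
]

# Pooled rates: every individual assessment counts equally.
pooled = Counter(v for verdicts in volunteer_verdicts for v in verdicts)
total = sum(pooled.values())
pooled_rates = {c: pooled[c] / total for c in CATEGORIES}

# Per-volunteer rates, then averaged: every volunteer counts equally.
per_volunteer = [
    {c: Counter(verdicts)[c] / len(verdicts) for c in CATEGORIES}
    for verdicts in volunteer_verdicts
]
averaged_rates = {
    c: sum(rates[c] for rates in per_volunteer) / len(per_volunteer)
    for c in CATEGORIES
}

print("pooled:  ", {c: round(r, 2) for c, r in pooled_rates.items()})
print("averaged:", {c: round(r, 2) for c, r in averaged_rates.items()})
```

When the two figures nearly coincide, as reported above, no single volunteer's idiosyncrasies are driving the headline percentages.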

These nine volunteers thus found a correct prediction rate of 42%. How impressive this result is depends on how specific and unobvious Kurzweil's predictions were. This is very difficult to assess, especially in hindsight (Fis75). Nevertheless, a subjective overview suggests that the predictions were often quite specific (e.g. "Unused computes on the Internet are being harvested, creating virtual parallel supercomputers with human brain hardware capacity"), and sometimes failed because of this. In view of this, a correctness rating of 42% is impressive, and goes some way towards demonstrating Kurzweil's predictive abilities.

When it comes to self-assessment, however, Kurzweil is much less impressive (commissioned assessments must also be treated as self-assessments, unless there are strong reasons to suppose the assessor was independent). He commissioned investigations into his own performance, which gave him scores of 102 out of 108 or 127 out of 147, with the caveat that "even the predictions that were considered wrong [...] were not all wrong". This is dramatically different from this paper's assessments.

What can be deduced from this tension between good performance and poor self-assessment? The performance is a validation of Kurzweil's main model (continuing exponential trends in computer technology), and a confirmation that Kurzweil has some impressive ability to project how these trends will impact the world. However, it does not vindicate Kurzweil as a predictor per se: his self-assessment implies that he does not make good use of feedback. Thus one should probably pay more attention to Kurzweil's model than to his subjective judgement. This is a common finding in expert tasks: experts are often better at constructing predictive models than at making predictions themselves (Kah11).

"The Age of Spiritual Machines" is not simply a dry tome listing predictions and arguments. It is also, to a large extent, a story, including a conversation with a hypothetical future human called Molly, who discusses her experiences through the coming century and its changes. Can one extract verifiable predictions from this aspect of the book?

A story is neither a prediction nor evidence for some particular future. But the reactions of the characters in the story can be construed as a scenario prediction: they imply that real humans, placed in those hypothetical situations, would react in the way described. Kurzweil's story ultimately ends with humans merging with machines, with the barrier between human intelligence and artificial intelligence being erased. Along the way, he describes the interactions between humans and machines, imagining machines that are quite different from humans but are still perceived as having human feelings.

One can extract two falsifiable predictions from this: first, that humans will perceive feelings in AIs, even if the AIs are not human-like; second, that humans and AIs will be able to relate to each other socially over the long term, despite being quite different, and that this social interaction will form the main glue keeping the mixed society together. The first prediction seems quite solid: humans have anthropomorphised trees, clouds, rock formations and storms, and have become convinced that chatterbots were sentient (eli66). The second prediction is more controversial: it has been argued that an AI would be such an alien mind that social pressures and structures designed for humans will be completely unsuited to controlling it (Bos13, Arm, ASB12). Determining whether social structures can control dangerous AI behaviour, as they control dangerous human behaviour, is a very important factor in deciding whether AIs will ultimately be safe or dangerous. Hence analysing this story-based prediction is an important area of future research.

References:

  • [Arm] Stuart Armstrong. General purpose intelligence: arguing the orthogonality thesis. In preparation.
  • [ASB12] Stuart Armstrong, Anders Sandberg, and Nick Bostrom. Thinking inside the box: Controlling and using an oracle AI. Minds and Machines, 22:299-324, 2012.
  • [BBJ+03] S. Bleich, B. Bandelow, K. Javaheripour, A. Muller, D. Degner, J. Wilhelm, U. Havemann-Reinecke, W. Sperling, E. Ruther, and J. Kornhuber. Hyperhomocysteinemia as a new risk factor for brain shrinkage in patients with alcoholism. Neuroscience Letters, 335:179-182, 2003.
  • [Bos13] Nick Bostrom. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. forthcoming in Minds and Machines, 2013.
  • [Cre93] Daniel Crevier. AI: The Tumultuous Search for Artificial Intelligence. NY: BasicBooks, New York, 1993.
  • [Den91] Daniel Dennett. Consciousness Explained. Little, Brown and Co., 1991.
  • [Deu12] D. Deutsch. The very laws of physics imply that artificial intelligence must be possible. What's holding us up? Aeon, 2012.
  • [Dre65] Hubert Dreyfus. Alchemy and AI. RAND Corporation, 1965.
  • [eli66] Joseph Weizenbaum. ELIZA: a computer program for the study of natural language communication between man and machine. Communications of the ACM, 9:36-45, 1966.
  • [Fis75] Baruch Fischhoff. Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance, 1:288-299, 1975.
  • [Gui11] Erico Guizzo. IBM's Watson Jeopardy computer shuts down humans in final game. IEEE Spectrum, 17, 2011.
  • [Hal11] J. Hall. Further reflections on the timescale of AI. In Solomonoff 85th Memorial Conference, 2011.
  • [Han94] R. Hanson. What if uploads come first: The crack of a future dawn. Extropy, 6(2), 1994.
  • [Har01] S. Harnad. What's wrong and right about Searle's Chinese room argument? In M. Bishop and J. Preston, editors, Essays on Searle's Chinese Room Argument. Oxford University Press, 2001.
  • [Hau85] John Haugeland. Artificial Intelligence: The Very Idea. MIT Press, Cambridge, Mass., 1985.
  • [Hof62] Richard Hofstadter. Anti-intellectualism in American Life. 1962.
  • [Kah11] D. Kahneman. Thinking, Fast and Slow. Farrar, Straus and Giroux, 2011.
  • [KL93] Daniel Kahneman and Dan Lovallo. Timid choices and bold forecasts: A cognitive perspective on risk taking. Management science, 39:17-31, 1993.
  • [Kur99] R. Kurzweil. The Age of Spiritual Machines: When Computers Exceed Human Intelligence. Viking Adult, 1999.
  • [McC79] J. McCarthy. Ascribing mental qualities to machines. In M. Ringle, editor, Philosophical Perspectives in Artificial Intelligence. Harvester Press, 1979.
  • [McC04] Pamela McCorduck. Machines Who Think. A. K. Peters, Ltd., Natick, MA, 2004.
  • [Min84] Marvin Minsky. Afterword to Vernor Vinge's novel "True Names". Unpublished manuscript, 1984.
  • [Moo65] G. Moore. Cramming more components onto integrated circuits. Electronics, 38(8), 1965.
  • [Omo08] Stephen M. Omohundro. The basic AI drives. Frontiers in Artificial Intelligence and Applications, 171:483-492, 2008.
  • [Pop] Karl Popper. The Logic of Scientific Discovery. Mohr Siebeck.
  • [Rey86] G. Rey. What's really going on in Searle's Chinese room. Philosophical Studies, 50:169-185, 1986.
  • [Riv12] William Halse Rivers. The disappearance of useful arts. Helsingfors, 1912.
  • [San08] A. Sandberg. Whole brain emulations: a roadmap. Future of Humanity Institute Technical Report, 2008-3, 2008.
  • [Sea80] J. Searle. Minds, brains and programs. Behavioral and Brain Sciences, 3(3):417-457, 1980.
  • [Sea90] John Searle. Is the brain's mind a computer program? Scientific American, 262:26-31, 1990.
  • [Sim55] H.A. Simon. A behavioral model of rational choice. The quarterly journal of economics, 69:99-118, 1955.
  • [Tur50] A. Turing. Computing machinery and intelligence. Mind, 59:433-460, 1950.
  • [vNM44] John von Neumann and Oskar Morgenstern. Theory of Games and Economic Behavior. Princeton, NJ, Princeton University Press, 1944.
  • [Wal05] Chip Walter. Kryder's law. Scientific American, 293:32-33, 2005.
  • [Win71] Terry Winograd. Procedures as a representation for data in a computer program for understanding natural language. MIT AI Technical Report, 235, 1971.
  • [Yam12] Roman V. Yampolskiy. Leakproofing the singularity: artificial intelligence confinement problem. Journal of Consciousness Studies, 19:194-214, 2012.
  • [Yud08] Eliezer Yudkowsky. Artificial intelligence as a positive and negative factor in global risk. In Nick Bostrom and Milan M. Ćirković, editors, Global catastrophic risks, pages 308-345, New York, 2008. Oxford University Press.
9 comments

Riv12 does not contain the phrase 'Tasmanian technology trap', so the quotation marks are a bit misleading. (I'm not sure how it describes a 'trap' at all.)

That's Anders's term, and the reference will go to his paper once he's finished it.

Can you provide a brief definition for it? I couldn't find any by googling.

Tasmania fell into a situation where technologies were lost: people no longer knew technologies their ancestors had possessed (such as boat building), even though these skills would have remained useful. The reason seemed to be that the low population didn't allow for enough specialists.

Minor typo - "shemas" should probably be schemas. The prior versions of this apparently have the same issue.

I assume he is talking about much slower, lower level AIs than Lesswrongian Super AIs for which Friendliness is needed?


If I recall correctly, Kurzweil seems to think AIs will be uplifted humans and uploads. I'm basing this on his vaguely stated "humans will merge with machines," and the fact that he gives an approximate date for the Singularity (2045).

No, he predicts that Turing-Test passing AI will be around by 2029, well before he predicts brain emulation, and argues at length that AI will come to grasp the algorithms embodied in the architecture of the brain and use those principles in improving AI software, rather than emulation coming first.


Yeah, I haven't actually read anything by Kurzweil yet. Of course, uplifted humans, uploads, and human-level AIs built for human socialization...