This article summarizes the history of AI risk thought, based on Luke Muehlhauser's AI Risk & Opportunity: A Strategic Analysis. It covers the development and evolution of ideas about AI risk from the early Industrial Revolution to the present day, and the people involved.
The first recorded thoughts on the possibility of artificial, machine intelligence becoming a risk to humanity date from the late Industrial Revolution. In his 1863 essay Darwin Among the Machines, Samuel Butler suggested that machines could eventually replace humans as the dominant agents on Earth. The idea was first taken up by science fiction writers, most notably Karel Čapek in "R.U.R." (1921) and John W. Campbell in "The Last Evolution" (1932) and "The Machine" (1935). Soon after came Isaac Asimov's famous "Runaround" (1942), which stated the Three Laws of Robotics and with them the first concerns about AI safety and the creation of rules for dealing with AI agents.
Deeply involved in the first steps of AI development, Alan Turing argued in 1950 that we should expect machines to become able to hold conversations indistinguishable from those of humans. His colleague I.J. Good coined the term "intelligence explosion" in 1965, describing the moment when a machine could begin creating other, better machines. Good's work, however, can be said to build on von Neumann's earlier speculations about complexity (1948, 1949). Curiously, despite so many authors embracing the idea of a sudden explosion in AI development and recognizing the risks it might bring, it was only in 1970 that Good made an explicit statement of the danger. He hoped that within a decade the matter would be thoroughly discussed. As we now know, it was not, and Good himself returned to the subject in 1982 to express his views on the possible design of a machine ethics framework.
From the 1980s, concern with AI safety increased, even among critics of the field. Jack Schwartz, for instance, speculated that the arrival of machine intelligence would usher in a new era of overwhelming economic, sociological, and historical significance for humanity, a view much in line with Solomonoff's thoughts. Moravec, on the other hand, maintained that although AI could represent an existential risk, it was probably one society should face in order to solve other threats. This early era also saw the emergence of Marvin Minsky's worry (1984) that we might find it hard to make AI do what we want, owing to our difficulty in expressing our true desires. His ideas resonate closely with the later problems of value extrapolation and coherent extrapolated volition (CEV).
The modern era of thought on AI risk was brought on mainly by Vernor Vinge's popularization of the "intelligence explosion" concept in his 1993 essay on the technological singularity. He also wrote the first novel centered on the existential risks posed by self-improving AI, "A Fire Upon the Deep" (1992), and contributed to the growing discussions on the extropian mailing list.
These early ideas gave rise to today's most prominent writers on AI risk, including Eliezer Yudkowsky and Ben Goertzel, as well as to central concepts such as Friendly AI, Oracle AI, and Nanny AI. At the same time, AI research turned to philosophy, and the problem of implementing moral values in artificial agents was brought into the discussion. This led to the development of fields such as artificial morality and computational ethics, clearly described in Wallach & Allen's "Moral Machines: Teaching Robots Right from Wrong" (2009). The Singularity Institute, the natural result of Yudkowsky's line of research into AI risk and opportunity, was formed in July 2000 and has since become the largest organization devoted to the study of such problems, with a growing role and scope up to the present day.
All these factors have helped AI risk become increasingly mainstream, reaching a wider audience of both scientists - including peer-reviewed journals with special issues on the theme - and the general public. Work on the topic has expanded in both quantity and quality over the last two decades, though many unanswered questions and serious difficulties remain to be tackled.