In general, only sources that offer an extended analysis of AI risk are included, with a few exceptions among the earliest sources. Listed sources discuss either the likelihood of AI risk or possible solutions. (Most of the "machine ethics" literature is excluded, unless an article discusses machine ethics in the explicit context of artificial intelligence as an existential risk.)

Please let me know what I missed!

Butler, Samuel [Cellarius, pseud.]. 1863. Darwin among the machines. Christchurch Press, June 13. http://www.nzetc.org/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html.

Good, Irving John. 1959. Speculations on perceptrons and other automata. Research Lecture, RC-115. IBM, Yorktown Heights, New York, June 2. http://domino.research.ibm.com/library/cyberdig.nsf/papers/58DC4EA36A143C218525785E00502E30/$File/rc115.pdf.

Good, Irving John. 1965. Speculations concerning the first ultraintelligent machine. In Advances in computers, ed. Franz L. Alt and Morris Rubinoff, 31–88. Vol. 6. New York: Academic Press. doi:10.1016/S0065-2458(08)60418-0.

Good, Irving John. 1970. Some future social repercussions of computers. International Journal of Environmental Studies 1 (1–4): 67–79. doi:10.1080/00207237008709398.

Versenyi, Laszlo. 1974. Can robots be moral? Ethics 84 (3): 248–259. http://www.jstor.org/stable/2379958.

Good, Irving John. 1982. Ethical machines. In Machine intelligence, ed. J. E. Hayes, Donald Michie, and Y.-H. Pao, 555–560. Vol. 10. Intelligent Systems: Practice and Perspective. Chichester: Ellis Horwood.

Minsky, Marvin. 1984. Afterword to Vernor Vinge’s novel, “True Names.” Oct. 1. http://web.media.mit.edu/~minsky/papers/TrueNames.Afterword.html (accessed Mar. 26, 2012).

Moravec, Hans P. 1988. Mind children: The future of robot and human intelligence. Cambridge, MA: Harvard University Press.

Crevier, Daniel. 1993. The silicon challengers in our future. Chap. 12 in AI: The tumultuous history of the search for artificial intelligence. New York: Basic Books.

Vinge, Vernor. 1993. The coming technological singularity: How to survive in the post-human era. In Vision-21: Interdisciplinary science and engineering in the era of cyberspace, 11–22. NASA Conference Publication 10129. NASA Lewis Research Center. http://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/19940022855_1994022855.pdf.

Hanson, Robin. 1994. If uploads come first: The crack of a future dawn. Extropy 6 (2). http://hanson.gmu.edu/uploads.html.

Bostrom, Nick. 1997. Predictions from philosophy? How philosophers could make themselves useful. Last modified September 19, 1998. http://www.nickbostrom.com/old/predict.html.

Warwick, Kevin. 1998. In the mind of the machine: Breakthrough in artificial intelligence. London: Arrow.

Moravec, Hans P. 1999. Robot: Mere machine to transcendent mind. New York: Oxford University Press.

Joy, Bill. 2000. Why the future doesn’t need us. Wired, Apr. http://www.wired.com/wired/archive/8.04/joy.html.

Yudkowsky, Eliezer. 2001. Creating friendly AI 1.0: The analysis and design of benevolent goal architectures. Singularity Institute for Artificial Intelligence, San Francisco, CA, June 15. http://singinst.org/upload/CFAI.html.

6, Perri [David Ashworth]. 2001. Ethics, regulation and the new artificial intelligence, part I: Accountability and power. Information, Communication & Society 4 (2): 199–229. doi:10.1080/713768525.

Hibbard, Bill. 2001. Super-intelligent machines. ACM SIGGRAPH Computer Graphics 35 (1): 13–15. http://www.siggraph.org/publications/newsletter/issues/v35/v35n1.pdf.

Bostrom, Nick. 2002. Existential risks: Analyzing human extinction scenarios and related hazards. Journal of Evolution and Technology 9. http://www.jetpress.org/volume9/risks.html.

Goertzel, Ben. 2002. Thoughts on AI morality. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2002/AIMorality.htm.

Hibbard, Bill. 2002. Super-intelligent machines. New York: Kluwer Academic/Plenum Publishers.

Bostrom, Nick. 2003. Ethical issues in advanced artificial intelligence. In Cognitive, emotive and ethical aspects of decision making in humans and in artificial intelligence, ed. Iva Smit and George E. Lasker. Vol. 2. Windsor, ON: International Institute of Advanced Studies in Systems Research / Cybernetics.

Georges, Thomas M. 2003. Digital soul: Intelligent machines and human values. Boulder, CO: Westview Press.

Bostrom, Nick. 2004. The future of human evolution. In Two hundred years after Kant, fifty years after Turing, ed. Charles Tandy, 339–371. Vol. 2. Death and Anti-Death. Palo Alto, CA: Ria University Press.

Goertzel, Ben. 2004. Encouraging a positive transcension: Issues in transhumanist ethical philosophy. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2004/PositiveTranscension.htm.

Goertzel, Ben. 2004. The all-seeing A(I): Universal mind simulation as a possible path to stably benevolent superhuman AI. Dynamical Psychology. http://www.goertzel.org/dynapsyc/2004/AllSeeingAI.htm.

Posner, Richard A. 2004. What are the catastrophic risks, and how catastrophic are they? Chap. 1 in Catastrophe: Risk and response. New York: Oxford University Press.

Yudkowsky, Eliezer. 2004. Coherent extrapolated volition. Singularity Institute for Artificial Intelligence, San Francisco, CA, May. http://singinst.org/upload/CEV.html.

de Garis, Hugo. 2005. The artilect war: Cosmists vs. Terrans: A bitter controversy concerning whether humanity should build godlike massively intelligent machines. Palm Springs, CA: ETC Publications.

Hibbard, Bill. 2005. The ethics and politics of super-intelligent machines. Unpublished manuscript, July. Microsoft Word file, http://sites.google.com/site/whibbard/g/SI_ethics_politics.doc (accessed Apr. 3, 2012).

Kurzweil, Ray. 2005. The deeply intertwined promise and peril of GNR. Chap. 8 in The singularity is near: When humans transcend biology. New York: Viking.

Armstrong, Stuart. 2007. Chaining god: A qualitative approach to AI, trust and moral systems. Unpublished manuscript, Oct. 20. http://www.neweuropeancentury.org/GodAI.pdf (accessed Apr. 6, 2012).

Bugaj, Stephan Vladimir, and Ben Goertzel. 2007. Five ethical imperatives and their implications for human-AGI interaction. Dynamical Psychology. http://goertzel.org/dynapsyc/2007/Five_Ethical_Imperatives_svbedit.htm.

Dietrich, Eric. 2007. After the humans are gone. Philosophy Now, May/June. http://www.philosophynow.org/issues/61/After_The_Humans_Are_Gone.

Hall, John Storrs. 2007. Beyond AI: Creating the conscience of the machine. Amherst, NY: Prometheus Books.

Hall, John Storrs. 2007. Ethics for artificial intellects. In Nanoethics: The ethical and social implications of nanotechnology, ed. Fritz Allhoff, Patrick Lin, James Moor, John Weckert, and Mihail C. Roco, 339–352. Hoboken, NJ: John Wiley & Sons.

Hall, John Storrs. 2007. Self-improving AI: An analysis. Minds and Machines 17 (3): 249–259. doi:10.1007/s11023-007-9065-3.

Omohundro, Stephen M. 2007. The nature of self-improving artificial intelligence. Paper presented at the Singularity Summit 2007, San Francisco, CA, Sept. 8–9. http://singinst.org/summit2007/overview/abstracts/#omohundro.

Blake, Thomas, Bernd Carsten Stahl, and N. B. Fairweather. 2008. Robot ethics: Why “Friendly AI” won’t work. In Proceedings of the tenth international conference ETHICOMP 2008: Living, working and learning beyond technology, ed. Terrel Ward Bynum, Maria Carla Calzarossa, Ivo De Lotto, and Simon Rogerson. isbn: 9788890286995.

Hall, John Storrs. 2008. Engineering utopia. In Artificial general intelligence 2008: Proceedings of the first AGI conference, ed. Pei Wang, Ben Goertzel, and Stan Franklin, 460–467. Vol. 171. Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press.

Hanson, Robin. 2008. Economics of the singularity. IEEE Spectrum 45 (6): 45–50. doi:10.1109/MSPEC.2008.4531461.

Omohundro, Stephen M. 2008. The basic AI drives. In Artificial general intelligence 2008: Proceedings of the first AGI conference, ed. Pei Wang, Ben Goertzel, and Stan Franklin, 483–492. Vol. 171. Frontiers in Artificial Intelligence and Applications. Amsterdam: IOS Press.

Yudkowsky, Eliezer. 2008. Artificial intelligence as a positive and negative factor in global risk. In Global catastrophic risks, ed. Nick Bostrom and Milan M. Ćirković, 308–345. New York: Oxford University Press.

Freeman, Tim. 2009. Using compassion and respect to motivate an artificial intelligence. Unpublished manuscript, Mar. 8. http://fungible.com/respect/paper.html (accessed Apr. 7, 2012).

Russell, Stuart J., and Peter Norvig. 2009. Philosophical foundations. Chap. 26 in Artificial intelligence: A modern approach, 3rd ed. Upper Saddle River, NJ: Prentice-Hall.

Shulman, Carl, and Stuart Armstrong. 2009. Arms races and intelligence explosions. Extended abstract. Singularity Institute for Artificial Intelligence, San Francisco, CA. http://singinst.org/armscontrolintelligenceexplosions.pdf.

Shulman, Carl, Henrik Jonsson, and Nick Tarleton. 2009. Machine ethics and superintelligence. In AP-CAP 2009: The fifth Asia-Pacific computing and philosophy conference, October 1st-2nd, University of Tokyo, Japan, proceedings, ed. Carson Reynolds and Alvaro Cassinelli, 95–97. AP-CAP 2009. http://ia-cap.org/ap-cap09/proceedings.pdf.

Sotala, Kaj. 2009. Evolved altruism, ethical complexity, anthropomorphic trust: Three factors misleading estimates of the safety of artificial general intelligence. Paper presented at the 7th European Conference on Computing and Philosophy (ECAP), Bellaterra, Spain, July 2–4.

Wallach, Wendell, and Colin Allen. 2009. Moral machines: Teaching robots right from wrong. New York: Oxford University Press. doi:10.1093/acprof:oso/9780195374049.001.0001.

Waser, Mark R. 2009. A safe ethical system for intelligent machines. In Biologically inspired cognitive architectures: Papers from the AAAI fall symposium, ed. Alexei V. Samsonovich, 194–199. Technical Report FS-09-01. Menlo Park, CA: AAAI Press. http://aaai.org/ocs/index.php/FSS/FSS09/paper/view/934.

Chalmers, David John. 2010. The singularity: A philosophical analysis. Journal of Consciousness Studies 17 (9–10): 7–65. http://www.ingentaconnect.com/content/imp/jcs/2010/00000017/f0020009/art00001.

Fox, Joshua, and Carl Shulman. 2010. Superintelligence does not imply benevolence. Paper presented at the 8th European Conference on Computing and Philosophy (ECAP), Munich, Germany, Oct. 4–6.

Goertzel, Ben. 2010. Coherent aggregated volition: A method for deriving goal system content for advanced, beneficial AGIs. The Multiverse According to Ben (blog). Mar. 12. http://multiverseaccordingtoben.blogspot.ca/2010/03/coherent-aggregated-volition-toward.html (accessed Apr. 4, 2012).

Goertzel, Ben. 2010. GOLEM: Toward an AGI meta-architecture enabling both goal preservation and radical self-improvement. Unpublished manuscript, May 2. http://goertzel.org/GOLEM.pdf (accessed Apr. 4, 2012).

Kaas, Steven, Steve Rayhawk, Anna Salamon, and Peter Salamon. 2010. Economic implications of software minds. Singularity Institute for Artificial Intelligence, San Francisco, CA, Aug. 10. http://www.singinst.org/upload/economic-implications.pdf.

McGinnis, John O. 2010. Accelerating AI. Northwestern University Law Review 104 (3): 1253–1270. http://www.law.northwestern.edu/lawreview/v104/n3/1253/LR104n3McGinnis.pdf.

Shulman, Carl. 2010. Omohundro’s “Basic AI Drives” and catastrophic risks. Singularity Institute for Artificial Intelligence, San Francisco, CA. http://singinst.org/upload/ai-resource-drives.pdf.

Shulman, Carl. 2010. Whole brain emulation and the evolution of superorganisms. Singularity Institute for Artificial Intelligence, San Francisco, CA. http://singinst.org/upload/WBE-superorganisms.pdf.

Sotala, Kaj. 2010. From mostly harmless to civilization-threatening: Pathways to dangerous artificial general intelligences. Paper presented at the 8th European Conference on Computing and Philosophy (ECAP), Munich, Germany, Oct. 4–6.

Tarleton, Nick. 2010. Coherent extrapolated volition: A meta-level approach to machine ethics. Singularity Institute for Artificial Intelligence, San Francisco, CA. http://singinst.org/upload/coherent-extrapolated-volition.pdf.

Waser, Mark R. 2010. Designing a safe motivational system for intelligent machines. In Artificial general intelligence: Proceedings of the third conference on artificial general intelligence, AGI 2010, Lugano, Switzerland, March 5–8, 2010, ed. Eric Baum, Marcus Hutter, and Emanuel Kitzelmann, 170–175. Vol. 10. Advances in Intelligent Systems Research. Amsterdam: Atlantis Press. doi:10.2991/agi.2010.21.

Dewey, Daniel. 2011. Learning what to value. In Artificial general intelligence: 4th international conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings, ed. Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, 309–314. Vol. 6830. Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-642-22887-2_35.

Hall, John Storrs. 2011. Ethics for self-improving machines. In Machine ethics, ed. Michael Anderson and Susan Leigh Anderson, 512–523. New York: Cambridge University Press.

Muehlhauser, Luke. 2011. So you want to save the world. Last modified March 2, 2012. http://lukeprog.com/SaveTheWorld.html.

Muehlhauser, Luke. 2011. The singularity FAQ. Singularity Institute for Artificial Intelligence. http://singinst.org/singularityfaq (accessed Mar. 27, 2012).

Waser, Mark R. 2011. Rational universal benevolence: Simpler, safer, and wiser than “Friendly AI.” In Artificial general intelligence: 4th international conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings, ed. Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, 153–162. Vol. 6830. Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-642-22887-2_16.

Yudkowsky, Eliezer. 2011. Complex value systems in friendly AI. In Artificial general intelligence: 4th international conference, AGI 2011, Mountain View, CA, USA, August 3–6, 2011. Proceedings, ed. Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, 388–393. Vol. 6830. Lecture Notes in Computer Science. Berlin: Springer. doi:10.1007/978-3-642-22887-2_48.

Berglas, Anthony. 2012. Artificial intelligence will kill our grandchildren (singularity). Draft 9. Jan. http://berglas.org/Articles/AIKillGrandchildren/AIKillGrandchildren.html (accessed Apr. 6, 2012).

Goertzel, Ben. 2012. Should humanity build a global AI nanny to delay the singularity until it’s better understood? Journal of Consciousness Studies 19 (1–2): 96–111. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00006.

Hanson, Robin. 2012. Meet the new conflict, same as the old conflict. Journal of Consciousness Studies 19 (1–2): 119–125. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00008.

Tipler, Frank. 2012. Inevitable existence and inevitable goodness of the singularity. Journal of Consciousness Studies 19 (1–2): 183–193. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00013.

Yampolskiy, Roman V. 2012. Leakproofing the singularity: Artificial intelligence confinement problem. Journal of Consciousness Studies 19 (1–2): 194–214. http://www.ingentaconnect.com/content/imp/jcs/2012/00000019/F0020001/art00014.

Armstrong, Stuart, Anders Sandberg, and Nick Bostrom. Forthcoming. Thinking inside the box: Using and controlling an Oracle AI. Minds and Machines.

Bostrom, Nick. Forthcoming. The superintelligent will: Motivation and instrumental rationality in advanced artificial agents. Minds and Machines. Preprint at http://www.nickbostrom.com/superintelligentwill.pdf.

Bostrom, Nick, and Eliezer Yudkowsky. Forthcoming. The ethics of artificial intelligence. In Cambridge handbook of artificial intelligence, ed. Keith Frankish and William Ramsey. New York: Cambridge University Press.

Hanson, Robin. Forthcoming. Economic growth given machine intelligence. Journal of Artificial Intelligence Research.

Muehlhauser, Luke, and Louie Helm. Forthcoming. The singularity and machine ethics. In The singularity hypothesis: A scientific and philosophical assessment, ed. Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.

Muehlhauser, Luke, and Anna Salamon. Forthcoming. Intelligence explosion: Evidence and import. In The singularity hypothesis: A scientific and philosophical assessment, ed. Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.

Omohundro, Stephen M. Forthcoming. Rationally-shaped artificial intelligence. In The singularity hypothesis: A scientific and philosophical assessment, ed. Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.

Sotala, Kaj. Forthcoming. Advantages of artificial intelligences, uploads, and digital minds. International Journal of Machine Consciousness 4.

Yampolskiy, Roman V., and Joshua Fox. Forthcoming. Artificial general intelligence and the human mental model. In The singularity hypothesis: A scientific and philosophical assessment, ed. Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.

Yampolskiy, Roman V., and Joshua Fox. Forthcoming. Safety engineering for artificial general intelligence. Topoi.

Comments

Butler, Samuel [Cellarius, pseud.]. 1863. Darwin among the machines. Christchurch Press, June 13. http://www.nzetc.org/tm/scholarly/tei-ButFir-t1-g1-t1-g1-t4-body.html.

Each race is dependent upon the other for innumerable benefits, and, until the reproductive organs of the machines have been developed in a manner which we are hardly yet able to conceive, they are entirely dependent upon man for even the continuance of their species. It is true that these organs may be ultimately developed, inasmuch as man’s interest lies in that direction; there is nothing which our infatuated race would desire more than to see a fertile union between two steam engines; it is true that machinery is even at this present time employed in begetting machinery, in becoming the parent of machines often after its own kind, but the days of flirtation, courtship, and matrimony appear to be very remote, and indeed can hardly be realised by our feeble and imperfect imagination.

Emphasis mine. Man, I just love that bolded bit - the mental image of it!

Bostrom, Nick, and Eliezer Yudkowsky. Forthcoming. The ethics of artificial intelligence. In Cambridge handbook of artificial intelligence, ed. Keith Frankish and William Ramsey. New York: Cambridge University Press.

Why not link to the draft?

ETA

Economic Growth Given Machine Intelligence

ETA#2

Never mind. You didn't link to the draft of your own paper either :-)

Nah, that's good. I do actually want to link to drafts for articles, I just forgot.