
jaibot comments on Open thread for December 24-31, 2013 - Less Wrong Discussion

4 Post author: NancyLebovitz 24 December 2013 08:58AM


Comment author: jaibot 24 December 2013 02:04:52PM 7 points
Comment author: ygert 24 December 2013 02:21:49PM 8 points

And it even gives a mostly accurate description of the relevant risk factors!

These researchers are not exactly thinking about a Battlestar Galactica-type situation in which robots resent their enslavement by humans and rise up to destroy their masters out of vengeance—a fear known as the “Frankenstein complex,” which would happen only if we programmed robots to be able to resent such enslavement. That would be, suffice it to say, quite unintelligent of us. Rather, the modern version of concern about long-term risks from AI, summarized in a bit more detail in this TED talk, is that an advanced AI would follow its programming so exactly and with such capability that, because of the fuzzy nature of human values, unanticipated and potentially catastrophic consequences would result unless we planned ahead carefully about how to maintain control of that system and what exactly we wanted it to do.

Comment author: IlyaShpitser 27 December 2013 02:38:19AM 2 points

More people are starting to think about these issues now:

http://www.nature.com/srep/2013/130911/srep02627/full/srep02627.html