Normal_Anomaly comments on Q&A with experts on risks from AI #3 - Less Wrong Discussion

13 points · Post author: XiXiDu · 12 January 2012 10:45AM

Comment author: Normal_Anomaly 15 January 2012 06:45:31PM 5 points

"As such the problems we're likely to have with AI are less 'Terminator' and more 'Sorcerer's apprentice'"

This is true and important, and a lot of the other experts don't get it. Unfortunately, Uther seems to think that SIAI/LW/Xixidu doesn't get it either, and

"These types of problems are less worrying as, in general, the AI isn't trying to actively hurt humans."

shows that he hasn't thought about all the ways that "Sorcerer's Apprentice" AIs could go horribly wrong.

Comment author: Emile 16 January 2012 09:39:30AM 7 points

Yeah, I agree that Xixidu's mails could make it clearer that he's aware (or that LessWrong is aware) that "Sorcerer's Apprentice" is a better analogy than "Terminator", so that he gets fewer responses along the lines of "Terminator is fiction, silly!"