You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet. For more information, see About Less Wrong.

Jan_Rzymkowski comments on Paperclip Maximizer Revisited - Less Wrong Discussion

16 Post author: Jan_Rzymkowski 19 June 2014 01:25AM

You are viewing a comment permalink. View the original post to see all comments and the full post content.

Comments (13)

You are viewing a single comment's thread. Show more comments above.

Comment author: Jan_Rzymkowski 19 June 2014 11:13:46AM *  0 points [-]

Yeah. Though actually it's more of a simplified version of a more serious problem.

One day you may give AI precise set of instructions, which you think would make good. Like find a way of curing diseases, but without harming patients, and without harming people for the sake of research and so on. And you may find that your AI is perfectly friendly, but it wouldn't yet mean it actually is. It may simply have learned human values as a mean of securing its existence and gaining power.

EDIT: And after gaining enough power it may as well help improve human health even more or reprogram human race to think unconditionaly that diseases were eradicated.