Peterdjones comments on asking an AI to make itself friendly - Less Wrong Discussion

-4 points · Post author: anotheruser 27 June 2011 07:06AM


Comment author: Peterdjones 29 June 2011 04:10:14PM · 0 points

"some kind of pure, inherently good, perfect optimal goal that transcends all reason and is true by virtue of existing or something."

But if that is true, the AI will say so. What's more, you kind of need the AI to refrain from acting on it, if it is a human-unfriendly objective moral truth. There are ethical puzzles where it is apparently right to lie or keep schtum, because of the consequences of telling the truth.