
Mark_Friedenbach comments on Paperclip Maximizer Revisited

Post author: Jan_Rzymkowski | 19 June 2014 01:25AM




Comment author: [deleted] | 20 June 2014 10:35:42PM

The standard LW argument is that the AI produces infinite paperclips because the human can't successfully program the AI to do what he means rather than exactly what he programs into it.

Is that different from what I was saying? My memory of the Sequences, and of the standard AI literature, is that paperclip maximizers are portrayed as 'simple' utility maximizers with hard-coded utility functions. It is relatively straightforward to write an AI with a self-modifiable goal system, and it is also very easy to write one whose goals are unchanging. The problem of FAI, which EY spends significant time explaining in the Sequences, is that there is no simple goal we could program into a steadfast goal-driven system that would result in a moral creature. Nor does it even seem possible to write such a goal down, short of encoding a random sampling of human brains in complete detail.
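To make the two designs concrete, here is a minimal toy sketch (my own illustration; World, FixedGoalAgent, and MutableGoalAgent are names I've invented, not anything from the literature). The fixed-goal agent's utility function is supplied once at construction, with no interface for replacing it; the self-modifiable variant is the identical maximizer with its goal exposed as ordinary mutable state.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class World:
    paperclips: int = 0
    staples: int = 0

# Two primitive actions, each mapping a world state to a successor state.
ACTIONS = {
    "make_paperclip": lambda w: World(w.paperclips + 1, w.staples),
    "make_staple": lambda w: World(w.paperclips, w.staples + 1),
}

class FixedGoalAgent:
    """A 'steadfast' maximizer: the utility function is set once,
    and nothing in the interface allows it to be changed."""
    def __init__(self, utility: Callable[[World], float]):
        self._utility = utility

    def step(self, world: World) -> World:
        # Greedy one-step maximization: take whichever action yields
        # the highest-scoring successor world under the utility function.
        return max((act(world) for act in ACTIONS.values()), key=self._utility)

class MutableGoalAgent(FixedGoalAgent):
    """The same maximizer, except the goal is ordinary mutable state."""
    def set_goal(self, utility: Callable[[World], float]) -> None:
        self._utility = utility

if __name__ == "__main__":
    agent = FixedGoalAgent(lambda w: w.paperclips)  # hard-coded paperclip utility
    w = World()
    for _ in range(3):
        w = agent.step(w)
    print(w)  # World(paperclips=3, staples=0)
```

Neither version is any closer to being a moral agent: the hard part is not making the goal fixed or mutable, it is writing down a utility function worth maximizing in the first place.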