Eliezer_Yudkowsky comments on Is a paperclipper better than nothing? - Less Wrong Discussion

6 Post author: DataPacRat 24 May 2013 07:34PM



Comment author: Eliezer_Yudkowsky 25 May 2013 02:43:56AM 5 points [-]

Desires and preferences about paperclips can be satisfied. They can sense, learn, grow, reproduce, etc.

Do you personally take that seriously, or is it something someone else believes? Human experience with desire satisfaction and "learning" and "growth" isn't going to transfer over to how it is for paperclip maximizers, and a generalization that this is still something that matters to us is unlikely to succeed. I predict an absence of any there there.

Comment author: CarlShulman 25 May 2013 05:13:14PM 4 points [-]

Yes, I believe that the existence of the thing itself, setting aside impacts on other life that it creates or interferes with, is better than nothing, although far short of the best thing that could be done with comparable resources.

Comment author: MugaSofer 28 May 2013 03:39:28PM -1 points [-]

Human experience with desire satisfaction and "learning" and "growth" isn't going to transfer over to how it is for paperclip maximizers

This is far from obvious. There are certainly people who claim that "morality" consists of satisfying the preferences of as many agents as you can.

If morality evolved for game-theoretic reasons, there might even be something to this, although I personally think it's too neat to endorse.