
TheOtherDave comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong Discussion

33 Post author: lukeprog 29 January 2011 07:58PM



Comment author: TheOtherDave 31 January 2011 07:21:16PM 2 points

Agreed that a paperclip maximizer can "discover what is moral," in the sense that you're using it here. (Although there's no reason to expect any particular PM to do so, no matter how intelligent it is.)

Can you clarify why this sort of discovery is in any way interesting, useful, or worth talking about?

Comment author: Matt_Simpson 31 January 2011 07:28:23PM 0 points

It drives home the point that morality is an objective feature of the universe that doesn't depend on the agent asking "what should I do?"

Comment author: TheOtherDave 31 January 2011 07:37:57PM 2 points

Huh. I don't see how it drives home that point at all. But OK, at least I know what your intention is... thank you for clarifying that.

Comment author: XiXiDu 01 February 2011 10:57:44AM 0 points

...morality is an objective feature of the universe...

Fascinating. I still don't understand in what sense this could be true, except perhaps in the way I tried to interpret EY here and here. But those comments were simply downvoted without any explanation or attempt to correct me, so I can't draw any particular conclusion from the downvotes.

You could argue that morality (what is right?) is human, and that other species will agree that, from a human perspective, what is moral is right and what is right is moral. Although I would agree, I don't understand how such a confusing use of terms is helpful.

Comment author: Matt_Simpson 01 February 2011 09:51:53PM *  1 point

Morality is just a specific set of terminal values. It's an objective feature of the universe because... humans have those terminal values. You can look inside the heads of humans and discover them. "Should," "right," and "moral," in EY's terms, are just being used as rigid designators to refer to those specific values.

I'm not sure I understand the distinction between "right" and "moral" in your comment.