
syllogism comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong Discussion

33 Post author: lukeprog 29 January 2011 07:58PM



Comment author: TheOtherDave 30 January 2011 09:58:24PM 3 points

It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn't therefore become right.

So it seems clear that, at least under some circumstances, "wrong" and "wrong_human" don't mean the same thing for EY, and that at least sometimes EY would say that the answer to "is X right or wrong?" doesn't depend on what humans happen to want that day.

Now, if by "wrong_human" you mean not what humans would consider wrong on the day you evaluate it, but rather what humans today consider wrong, then all of that is irrelevant to your claim.

In that case, yes, maybe you're right that what you mean by "wrong_human" is also what EY means by "wrong." But I still wouldn't expect him to endorse the idea that what's wrong or right depends in any way on what agents happen to prefer.

Comment author: Matt_Simpson 30 January 2011 10:55:54PM 2 points

It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human

No one can change right_human; it's a specific utility function. You can change the utility function that humans implement, but you can't change right_human. That would be like changing e^x, or the number 2, into something else. In other words, you're right about what the metaethics posts say, and that's what I'm saying too.

edit: or what jimrandomh said (I didn't see his comment before I posted mine)
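The distinction above can be put in code. This is a toy sketch with hypothetical names (right_human, paperclip_utility, Agent are illustrative inventions, not anything from the original posts): the fixed function is like e^x, while what the aliens change is only which function the agent implements.

```python
# Toy sketch of the fixed-function point. right_human is a specific,
# unchangeable function; aliens can only change which function an
# agent *implements*, not the function itself.

def right_human(world):
    """A fixed stand-in utility function: values happiness."""
    return world.get("happiness", 0)

def paperclip_utility(world):
    """A different fixed function: values only paperclips."""
    return world.get("paperclips", 0)

class Agent:
    def __init__(self, utility):
        self.utility = utility  # the function this agent currently implements

human = Agent(right_human)
human.utility = paperclip_utility  # the aliens rewire the agent...

world = {"happiness": 10, "paperclips": 1000}
assert right_human(world) == 10        # ...but right_human is unchanged
assert human.utility(world) == 1000    # only the agent's motivations moved
```

The modified agent now scores worlds by paperclips, yet evaluating right_human on the same world gives the same answer it always did, which is the sense in which "no one can change right_human."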

Comment author: Lightwave 01 February 2011 10:11:03AM 1 point

What if we use 'human' as a rigid designator for unmodified-human? Then if aliens convert people into paperclip-maximizers, they're no longer human; hence human_right no longer applies to them, but itself remains unchanged.

Comment author: Matt_Simpson 01 February 2011 09:48:53PM 0 points

human_right still applies to them in the sense that they still should do what's human_right; that's the definition of should. (Remember, should refers to a specific set of terminal values, the ones humans happen to have, called human_right.) However, these modified humans, much like Clippy, don't care about human_right and so won't be motivated to act on it (except insofar as doing so helps make paperclips).

I'm not necessarily disagreeing with you, because it's a little ambiguous how you used the word "applies." If you mean that the modified humans don't care about human_right anymore, I agree. If you mean that the modified humans shouldn't care about human_right, then I disagree.

Comment author: Lightwave 01 February 2011 10:16:50PM 0 points

I'm not sure why it's necessary to use 'should' to mean morally_should; it could just be used to mean decision-theoretic_should. E.g., if you're asked what a chess-playing program should do to win a particular game, you could give a list of moves it should make. And when a human asks what they should do about a moral question, you can first use the human_right function to determine the desired state of the world they want to achieve, and then ask what you should do (as in decision-theoretic_should, i.e. what moves/steps you need to execute, in analogy to the chess program) to create that state. Thus morality is contained within the human_right function, and there's no confusion over the meaning of 'should'.
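The chess analogy can be sketched as plain means-end search. This is a toy illustration with hypothetical names (decision_theoretic_should, goal_value, advance, stall are inventions for this sketch): the value function plays the role of human_right and defines the desired state of the world, while 'should' is just the search for moves that bring that state about.

```python
# Toy sketch: 'decision-theoretic should' as means-end search over moves,
# with the goal supplied separately by a value function (the human_right role).

def decision_theoretic_should(state, goal_value, moves):
    """Pick the available move whose resulting state the goal values most."""
    return max(moves, key=lambda move: goal_value(move(state)))

def goal_value(state):
    """Stand-in for human_right: scores world-states, says nothing about how to act."""
    return state["progress"]

def advance(state):
    return {"progress": state["progress"] + 1}

def stall(state):
    return {"progress": state["progress"]}

best = decision_theoretic_should({"progress": 0}, goal_value, [advance, stall])
assert best is advance
```

Swapping in a different value function changes which moves come out of the search, but the meaning of 'should' (the search itself) stays the same, which is the point of the comment above.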

Comment author: Matt_Simpson 01 February 2011 10:43:07PM 0 points

As long as you can keep the terms straight, sure. EY's argument was that using "should" in that sense makes it easier to make mistakes related to relativism.

Comment author: TheOtherDave 30 January 2011 11:36:13PM 0 points

OK. At this point I must admit I've lost track of why these various suggestively named utility functions are of any genuine interest, so I should probably leave it there. Thanks for clarifying.

Comment author: jimrandomh 30 January 2011 10:54:55PM 2 points

It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn't therefore become right.

In that case, we would draw a distinction between right_unmodified-human and right_modified-human, and "right" would refer to the former.