
Matt_Simpson comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong Discussion

33 Post author: lukeprog 29 January 2011 07:58PM



Comment author: Matt_Simpson 29 January 2011 11:56:24PM 3 points [-]

but what do you mean when you say that I should_MattSimpson maximize your preferences?

I mean that according to my preferences, you, me, and everyone else should maximize them. If you ask what should_MattSimpson be done, the short answer is: maximize my preferences. Similarly, if you ask what should_lukeprog be done, the short answer is: maximize your preferences. It doesn't matter who does the asking. If you ask what should_agent be done, the answer is to maximize agent's preferences. There is no "should", only a should_agent for each agent. (Note: Eliezer calls should_human "should." I think that's an error of terminology, personally; it obscures his position somewhat.)

We already have a term that matches Eliezer's use of "ought" and "should" quite nicely: it's called the "prudential ought." The term "moral ought" is usually applied to a different location in concept space, whether or not it successfully refers.

Then Eliezer's position is that all normativity is prudential normativity, but without the pop-culture connotations that usually come with that position. In other words, it doesn't mean you can "do whatever you want." You probably do, in fact, value other people; you're a human, after all. So murdering them is not okay, even if you know you could get away with it. (Note that this last conclusion might be salvageable even if there is no should_human.)

As for why Eliezer (and others here) think there is a should_human (or that human values are similar enough to talk about such a thing), the essence of the argument rests on ev-psych, but I don't know the details beyond "ev-psych suggests that our minds would be very similar."

Comment author: lukeprog 30 January 2011 12:02:15AM *  2 points [-]

Okay, that makes sense.

Does Eliezer claim that murder is wrong for every agent? I find it highly likely that in certain cases, an agent's murder of some person will best satisfy that agent's preferences.

Comment author: Matt_Simpson 30 January 2011 09:02:03PM 2 points [-]

Murder is certainly not wrong_x for every agent x - we can think of an agent with a preference for people being murdered, even itself. However, it is almost always wrong_MattSimpson and (hopefully!) almost always wrong_lukeprog. So it depends on which question you are asking. If you're asking "is murder wrong_human for every agent?" Eliezer would say yes. If you're asking "is murder wrong_x for every agent x?" Eliezer would say no.

(I realize it was clear to both of us which of the two you were asking, but I wanted to make everything explicit for the benefit of confused readers.)

Comment author: TheOtherDave 30 January 2011 09:06:22PM *  3 points [-]

I would be very surprised if EY gave those answers to those questions.

It seems pretty fundamental to his view of morality that asking about "wrong_human" and "wrong_x" is an important mis-step.

Maybe murder isn't always wrong, but it certainly doesn't depend (on EY's view, as I understand it) on the existence of an agent with a preference for people being murdered (or the absence of such an agent).

Comment author: Matt_Simpson 30 January 2011 09:20:12PM *  2 points [-]

Maybe murder isn't always wrong, but it certainly doesn't depend (on EY's view, as I understand it) on the existence of an agent with a preference for people being murdered (or the absence of such an agent).

That's because for EY, "wrong" and "wrong_human" mean the same thing; it's a matter of semantics. When you ask "is X right or wrong?" in the everyday sense of the term, you are actually asking "is X right_human or wrong_human?" But if murder is wrong_human, that doesn't mean it's wrong_clippy, for example. In both cases you are just checking a utility function, but different utility functions give different answers.
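The "checking a utility function" picture can be sketched in a few lines of code. This is a toy illustration only: the functions right_human and right_clippy, and the outcomes they evaluate, are made-up stand-ins, not anything specified in the thread.

```python
# Hypothetical sketch: "right_X" is just agent X's utility function
# evaluated on an outcome. Both functions below are toy assumptions.

def right_human(outcome):
    # Toy stand-in for the (fixed) human utility function.
    return outcome != "murder"

def right_clippy(outcome):
    # Clippy's utility function only cares about paperclips.
    return outcome == "make paperclips"

# The same act, checked against different utility functions, gets
# different verdicts: murder is wrong_human, while making paperclips
# is right_clippy but morally neutral-to-negative by right_human.
verdict_human = right_human("murder")            # False
verdict_clippy = right_clippy("make paperclips") # True
```

The point the sketch makes is purely structural: "is X wrong_A?" and "is X wrong_B?" are lookups into different functions, so they can disagree without contradiction.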

Comment author: TheOtherDave 30 January 2011 09:58:24PM *  3 points [-]

It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn't therefore become right.

So it seems clear that at least under some circumstances, "wrong" and "wrong_human" don't mean the same thing for EY, and that at least sometimes EY would say that "is X right or wrong?" doesn't depend on what humans happen to want that day.

Now, if by "wrong_human" you don't mean what humans would consider wrong the day you evaluate it, but rather what is considered wrong by humans today, then all of that is irrelevant to your claim.

In that case, yes, maybe you're right that what you mean by "wrong_human" is also what EY means by "wrong." But I still wouldn't expect him to endorse the idea that what's wrong or right depends in any way on what agents happen to prefer.

Comment author: Matt_Simpson 30 January 2011 10:55:54PM *  2 points [-]

It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human

No one can change right_human; it's a specific utility function. You can change the utility function that humans implement, but you can't change right_human. That would be like changing e^x or the number 2 to something else. In other words, you're right about what the metaethics posts say, and that's what I'm saying too.

edit: or what jimrandomh said (I didn't see his comment before I posted mine)

Comment author: Lightwave 01 February 2011 10:11:03AM *  1 point [-]

What if we use 'human' as a rigid designator for unmodified-human? Then if aliens convert people into paperclip-maximizers, they're no longer human, hence human_right no longer applies to them, but human_right itself remains unchanged.

Comment author: Matt_Simpson 01 February 2011 09:48:53PM 0 points [-]

human_right still applies to them in the sense that they still should do what's human_right - that's the definition of should. (Remember, should refers to a specific set of terminal values, those that humans happen to have, called human_right.) However, these modified humans, much like clippy, don't care about human_right and so won't be motivated to act on it (except insofar as doing so helps make paperclips).

I'm not necessarily disagreeing with you, because it's a little ambiguous how you used the word "applies." If you mean that the modified humans don't care about human_right anymore, I agree. If you mean that the modified humans shouldn't care about human_right, then I disagree.

Comment author: Lightwave 01 February 2011 10:16:50PM *  0 points [-]

I'm not sure why it's necessary to use 'should' to mean morally_should; it could just be used to mean decision-theoretic_should. E.g., if you ask what a chess-playing program should do to win a particular game, you could give a list of moves it should make. Likewise, when a human asks what they should do about a moral question, you can first use the human_right function to determine the desired state of the world they want to achieve, and then ask what you should do (as in decision-theoretic_should - the moves/steps you need to execute, in analogy with the chess program) to bring about that state. Thus morality is contained within the human_right function, and there's no confusion over the meaning of 'should'.
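The two-step picture above can be sketched as code. Everything here is an illustrative assumption: the function names, the world states, and the toy plan table are invented for the sketch, not drawn from the thread.

```python
# Hypothetical sketch: morality lives entirely in a goal-selection
# function (human_right), while "should" is purely decision-theoretic,
# like a chess program's move list toward a chosen goal.

def human_right(world_state):
    # Toy stand-in: returns the desired state of the world.
    return "everyone fed"

def decision_theoretic_should(current_state, goal):
    # Returns the steps that reach the goal; no moral content here,
    # just means-end reasoning over a toy plan table.
    plans = {
        ("hungry world", "everyone fed"): ["grow food", "distribute food"],
    }
    return plans.get((current_state, goal), [])

# Step 1: morality picks the goal; step 2: plain planning picks the moves.
goal = human_right("hungry world")
steps = decision_theoretic_should("hungry world", goal)
# steps == ["grow food", "distribute food"]
```

The design point is that decision_theoretic_should is goal-agnostic: hand it clippy's goal instead and it would plan for paperclips just as readily, which is exactly why the moral content has to live in human_right.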

Comment author: TheOtherDave 30 January 2011 11:36:13PM 0 points [-]

OK. At this point I must admit I've lost track of why these various suggestively named utility functions are of any genuine interest, so I should probably leave it there. Thanks for clarifying.

Comment author: jimrandomh 30 January 2011 10:54:55PM 2 points [-]

It seems clear from the metaethics posts that if a powerful alien race comes along and converts humanity into paperclip-maximizers, such that making many paperclips comes to be right_human, EY would say that making many paperclips doesn't therefore become right.

In that case, we would draw a distinction between right_unmodified-human and right_modified-human, and "right" would refer to the former.

Comment author: hairyfigment 30 January 2011 03:17:33AM *  0 points [-]

Murder as I define it seems universally wrong_victim, but I doubt you could literally replace "victim" with any agent's name.

Comment author: torekp 01 February 2011 01:14:23AM 0 points [-]

If you ask what should_MattSimpson be done, the short answer is maximize my preferences.

I find the talk of "should_MattSimpson" very unpersuasive given the availability of alternative phrasings such as "approved_MattSimpson" or "valued_MattSimpson". I have read below that EY discourages such talk, but it seems that's for different reasons than mine. Could someone please point me to at least one post in the sequence which (almost/kinda/sorta) motivates such phrasings?

Comment author: Matt_Simpson 01 February 2011 09:43:58PM 0 points [-]

Alternate phrasings such as the ones you listed would probably be less confusing - i.e., replacing "should" in "should_X" with "valued", and reserving "should" for "valued_human".