
Matt_Simpson comments on What is Eliezer Yudkowsky's meta-ethical theory? - Less Wrong Discussion

33 Post author: lukeprog 29 January 2011 07:58PM


Comments (368)


Comment author: Matt_Simpson 29 January 2011 10:52:43PM * 2 points

No, this is called preference utilitarianism.

Usually utilitarianism means maximize the utility of all people/agents/beings of moral worth (average or sum depending on the flavor of utilitarianism). Eliezer's metaethics says only maximize your own utility. There is a clear distinction.

Edit: but you are correct about considering preferences the foundation of ethics. I should have been clearer.

Comment author: Jayson_Virissimo 30 January 2011 06:37:51AM * 2 points

Eliezer's metaethics says only maximize your own utility.

Isn't that bog-standard ethical egoism? If that is the case, then I really misunderstood the sequences.

Comment author: Matt_Simpson 30 January 2011 08:51:52PM * 0 points

Maybe. Sometimes ethical egoism sounds like it says that you should be selfish. If that's the case, then no, they are not the same. But sometimes it just sounds like it says you should do whatever you want to do, even if that includes helping others. If that's the case, they sound the same to me.

edit: Actually, that's not quite right. On the second version, egoism gives the same answer as EY's metaethics for all agents who have "what is right" as their terminal values, but NOT for any other agent. Egoism in this sense defines "should" as "should_X" where X is the agent asking what should be done. For EY, "should" is always "should_human" no matter who is asking the question.
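The indexing difference can be made concrete with a toy sketch. (The value functions and numbers below are hypothetical illustrations of the "should_X" idea, not anything from the sequences.)

```python
# Hypothetical per-agent value functions over outcomes.
values = {
    "alice": lambda outcome: outcome["art"],       # Alice terminally values art
    "bob":   lambda outcome: outcome["science"],   # Bob terminally values science
    "human": lambda outcome: outcome["art"] + outcome["science"],  # stand-in for should_human
}

def should_egoist(asker, outcome):
    # Egoist reading: "should" is indexed to whoever is asking.
    return values[asker](outcome)

def should_ey(asker, outcome):
    # EY's usage (on this reading): "should" rigidly refers to should_human,
    # no matter who asks.
    return values["human"](outcome)

world = {"art": 3, "science": 5}
print(should_egoist("alice", world))  # 3 - Alice's answer
print(should_egoist("bob", world))    # 5 - Bob gets a different answer
print(should_ey("alice", world))      # 8 - same answer for every asker
print(should_ey("bob", world))        # 8
```

The point of the sketch: `should_egoist` returns different verdicts for different askers, while `should_ey` is constant across askers.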

Comment author: jimrandomh 29 January 2011 11:54:51PM * 0 points

Usually utilitarianism means maximize the utility of all people/agents/beings of moral worth (average or sum depending on the flavor of utilitarianism). Eliezer's metaethics says only maximize your own utility. There is a clear distinction.

Indeed, but I'd like to point out that this is not an answer about what to do or what's good and bad, merely the rejection of a commonly claimed (but incorrect) statement about what structure such an answer should have.

Comment author: Matt_Simpson 30 January 2011 12:00:16AM 0 points

I think I disagree, but I'm not sure I understand. Care to explain further?

Comment author: jimrandomh 30 January 2011 12:33:27AM 0 points

(Note: This comment contains positions which came from my mind without an origin tag attached. I don't remember reading anything by Eliezer which directly disagrees with this, but I don't represent this as anyone's position but my own.)

"Standard" utilitarianism works by defining separate per-agent utility functions to represent each person's preferences, then averaging (or summing) them to produce a composite utility function which every utilitarian is supposed to optimize. The exact details of what the per-agent utility functions look like, and how you combine them, differ from flavor to flavor. However, this structure - splitting the utility function into per-agent utility functions plus an aggregation rule - is wrong. I don't know what a utility function that fully captured human values would look like, but I do know that it can't be split and composed this way.

It breaks down most obviously when you start varying the number of agents; in the variant where you sum up utilities, an outcome where many people live lives just barely worth living seems better than an outcome where fewer people live amazingly good lives (but we actually prefer the latter); in the variant where you average utilities, an outcome where only one person exists but he lives an extra-awesome life is better than an outcome where many people lead merely-awesome lives.
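Both failure modes fall straight out of the arithmetic. A toy numerical sketch (the utility numbers and population sizes are mine, chosen purely for illustration):

```python
def total_u(lives):
    # Sum-utilitarian aggregation over a list of per-person utilities.
    return sum(lives)

def average_u(lives):
    # Average-utilitarian aggregation.
    return sum(lives) / len(lives)

# Sum variant: a million lives just barely above zero outscore
# a hundred amazingly good lives.
barely_worth_living = [0.01] * 1_000_000   # total = 10,000
amazingly_good      = [80.0] * 100         # total = 8,000
assert total_u(barely_worth_living) > total_u(amazingly_good)

# Average variant: one extra-awesome life outscores a thousand
# merely-awesome lives.
one_extra_awesome   = [100.0]              # average = 100
many_merely_awesome = [90.0] * 1_000       # average = 90
assert average_u(one_extra_awesome) > average_u(many_merely_awesome)
```

Neither ranking matches the stated intuition, which is the sense in which the per-agent-plus-aggregation structure breaks down under population changes.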

Split-agent utility functions are also poorly equipped to deal with the problem of weighing agents against each other. If there's a scenario where one person's utility function diverges to infinity, then both sum- and average-utility aggregation claim that it's worth sacrificing everyone else to make sure that happens (the "utility monster" problem).
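The utility-monster case can be shown with the same toy aggregation rules (again, illustrative numbers of my own choosing, with a large finite utility standing in for the divergent one):

```python
def total_u(lives):
    return sum(lives)

def average_u(lives):
    return sum(lives) / len(lives)

# 1000-person world: status quo vs. sacrificing everyone to one "monster"
# whose utility grows without bound (1e6 here stands in for "diverging").
status_quo    = [50.0] * 1000
monster_world = [0.0] * 999 + [1e6]   # everyone else reduced to zero

# Both aggregation rules endorse the sacrifice once the monster's
# utility is large enough.
assert total_u(monster_world) > total_u(status_quo)      # 1,000,000 > 50,000
assert average_u(monster_world) > average_u(status_quo)  # ~1000 > 50
```

As the monster's utility grows, no finite welfare for the other 999 people can ever outweigh it under either rule.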

And the thing is, writing a utility function that captures human values is a hard and unsolved problem, and splitting it up by agent doesn't actually bring us any closer; defining the single-agent function is just as hard as defining the whole thing.

Comment author: Matt_Simpson 30 January 2011 09:08:05PM 3 points

I was about to cite the same sorts of things to explain why they DO disagree about what is good and bad. In other words, I agree with you that utilitarianism is wrong about the structure of ethics in precisely the way you described, but I think that also entails utilitarianism coming to different concrete ethical conclusions. If a murderer really likes murdering - it's truly a terminal value - the utilitarian HAS to take that into account. On Eliezer's theory, this need not be so. So you can construct a hypothetical where the utilitarian has to allow someone to be murdered simply to satisfy one murderer's (or many murderers') preferences, whereas on Eliezer's theory nothing of this nature has to be done.

Comment author: jimrandomh 30 January 2011 10:27:02PM 1 point

That is a problem for average-over-agents utilitarianism, but not a fatal one; the per-agent utility function you use need not reflect all of that agent's preferences, it can reflect something narrower like "that agent's preferences excluding preferences that refer to other agents and which those agents would choose to veto". (Of course, that's a terrible hack, which must be added to the hacks to deal with varying population sizes, divergence, and so on, and the resulting theory ends up being extremely inelegant.)

Comment author: Matt_Simpson 30 January 2011 10:59:46PM 1 point

True enough: there are always more hacks a utilitarian can tack onto their theory to avoid issues like this.

Comment author: endoself 31 January 2011 09:46:01AM 1 point

in the variant where you sum up utilities, an outcome where many people live lives just barely worth living seems better than an outcome where fewer people live amazingly good lives (but we actually prefer the latter);

Are you sure of this? It sounds a lot like scope insensitivity. Remember, lives barely worth living are still worth living.

if there's a scenario where one person's utility function diverges to infinity, then both sum- and average-utility aggregation claim that it's worth sacrificing everyone else to make sure that happens (the "utility monster" problem).

Again, this seems like scope insensitivity.

Comment deleted 29 January 2011 11:05:08PM
Comment author: Matt_Simpson 29 January 2011 11:11:58PM 3 points

Yeah, that's probably right. But notice that even in that case, unlike the utilitarian, there are no thorny issues about how to deal with non-human agents. If we run into an alien that has a serious preference for raping humans, the utilitarian only has ad-hoc ways of deciding whether or not the alien's preference counts. Eliezer's metaethics handles it elegantly: check your utility function. Of course, that's easier said than done in the real world, but it does solve many philosophical problems associated with utilitarianism.