Dagon comments on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does? - Less Wrong

7 Post author: Princess_Stargirl 09 December 2014 08:32AM




Comment author: Dagon 09 December 2014 07:52:46PM 4 points [-]

Huh? So your view of a moral theory is that it ranks your options, but there's no implication that a moral agent should pick the best known option?

What purpose does such a theory serve? Why would you classify it as a "moral theory" rather than "an interesting numeric exercise"?

Comment author: SilentCal 09 December 2014 09:03:46PM 5 points [-]

There's a sort of Tortoise-and-Achilles type problem in interpreting the word 'should', where you have to somehow get from "I should do X" to doing X; that is, in converting the outputs of the moral theory into actions (or influence on actions). We're used to doing this with boolean-valued morality like deontology, so the problem doesn't feel problematic there.

Asking utilitarianism to answer "Should I do X?" is an attempt to reuse our accustomed solution to the above problem. The trouble is that by doing so you're lossily turning utilitarianism's outputs into booleans, and every attempt to do this runs into problems (usually demandingness). The real answer is to solve the analogous problem with numbers instead of booleans, to somehow convert "Utility of X is 100; Utility of Y is 80; Utility of Z is -9999" into being influenced towards X rather than Y and definitely not doing Z.
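One illustrative way to sketch "being influenced towards X rather than Y, and definitely not doing Z" without collapsing the ranking into a boolean is a softmax over the utilities. The softmax rule and the temperature parameter here are my assumptions for the sketch; the comment doesn't prescribe any particular conversion:

```python
import math

def choice_weights(utilities, temperature=10.0):
    """Turn a dict of option -> utility into graded action weights.

    A softmax is just one way to be 'influenced towards' higher-utility
    options without a hard should/shouldn't cutoff; the function choice
    and temperature are illustrative assumptions, not anything the
    original comment specifies.
    """
    m = max(utilities.values())  # shift by the max for numerical stability
    exps = {k: math.exp((v - m) / temperature) for k, v in utilities.items()}
    total = sum(exps.values())
    return {k: e / total for k, e in exps.items()}

weights = choice_weights({"X": 100, "Y": 80, "Z": -9999})
# X gets most of the weight, Y some, and Z essentially none
```

With the comment's numbers, the weights reproduce the intended behavior: X is strongly favored over Y, and Z's weight underflows to effectively zero.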

The purpose of the theory is that it ranks your options, and you're more likely to do higher-ranked options than you otherwise would be. It's classified as a moral theory because it causes you to help others and promote the overall good more than self-interest would otherwise lead you to. It just doesn't do so in a way that's easily explained in the wrong language.

Comment author: peterward 11 December 2014 03:41:23AM -1 points [-]

Isn't a "boolean" right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely, doesn't it promise to select for us the right choice among a collection of alternatives? If the best outcomes can be ranked--by global goodness, or whatever standard--then logically there is a winner or set of winners which one may, without guilt, indifferently choose from.

Comment author: SilentCal 12 December 2014 06:41:43PM 2 points [-]

From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you're going to be. A utility function answers the first part. If you're a committed maximizer, you have your answer to the second part. Most of us aren't, so we have a tough decision there that the utility function doesn't answer.

Comment author: TheOtherDave 09 December 2014 10:57:49PM 2 points [-]

Well, for one thing, if I'm unwilling to sign up for more than N personal inconvenience in exchange for improving the world, such a theory lets me take the set of interventions that cost me N or less inconvenience and rank them by how much they improve the world, and pick the best one. (Or, in practice, to approximate that as well as I can.) Without such a theory, I can't do that. That sure does sound like the sort of work I'd want a moral theory to do.
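The procedure described above is a straightforward constrained maximization: filter out interventions costing more than N, then pick the one that improves the world most. A minimal sketch, with hypothetical intervention names and entirely made-up cost/impact numbers:

```python
def best_within_budget(interventions, n):
    """Pick the highest-impact intervention whose personal cost is <= n.

    `interventions` maps name -> (personal_cost, world_improvement).
    Returns None if nothing fits within the inconvenience budget.
    """
    affordable = {k: v for k, v in interventions.items() if v[0] <= n}
    if not affordable:
        return None
    return max(affordable, key=lambda k: affordable[k][1])

# Hypothetical options and numbers, purely for illustration:
options = {
    "donate_10pct": (3, 50),
    "donate_all": (9, 90),
    "volunteer_weekends": (2, 20),
}
chosen = best_within_budget(options, n=5)  # "donate_10pct"
```

The point of the sketch is that the ranking does real work here even though the budget N itself comes from outside the theory.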

Comment author: Dagon 10 December 2014 08:09:02AM -1 points [-]

Okay, but it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs. What do you use to decide that world utility would not be improved by N+1 personal inconvenience, or to decide that you don't care about the world as much as yourself?

Comment author: TheOtherDave 10 December 2014 04:09:03PM 1 point [-]

I don't need a theory to decide I'm unwilling to sign up for more than N personal inconvenience; I can observe it as an experimental result.

it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs

Yes, both of those seem fairly likely.

It sounds like you're suggesting that only a complete moral theory serves any purpose, and that I am in reality internally consistent... have I understood you correctly? If so, can you say more about why you believe those things?

Comment author: jkaufman 09 December 2014 08:51:48PM *  1 point [-]

An agent should pick the best options they can get themselves to pick. In practice these will not be the ones that maximize utility as they understand it, but they will be ones with higher utility than if they just did whatever they felt like. And, more strongly, this gives higher utility than if they tried to do as many good things as possible without prioritizing the really important ones.

Comment author: ChaosMote 14 December 2014 03:52:01AM 0 points [-]

Such a moral theory can be used as one of the criteria in a multi-criteria decision system. This is useful because in general people prefer being more moral to being less moral, but not to the exclusion of everything else. For example, one might genuinely want to improve the world and yet be unwilling to make life-altering changes (like donating all but the bare minimum to charity) to further this goal.
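One simple instance of such a multi-criteria system is a weighted sum: the moral ranking supplies one score, self-interest another, and the weights decide how much each counts. The scores and weights below are hypothetical, chosen only to illustrate the idea:

```python
def overall_score(option_scores, weights):
    """Combine per-criterion scores (e.g. morality, self-interest) into
    one decision score via a weighted sum -- one simple way to use a
    moral ranking as a criterion without letting it override everything
    else. All numbers here are illustrative assumptions.
    """
    return sum(weights[c] * option_scores[c] for c in weights)

w = {"morality": 0.3, "self_interest": 0.7}
# Hypothetical options: a demanding-but-moral one vs. a comfortable one.
demanding = overall_score({"morality": 90, "self_interest": 10}, w)
comfortable = overall_score({"morality": 40, "self_interest": 60}, w)
# demanding ~ 34, comfortable ~ 54: with these weights, morality
# influences the decision without dominating it
```

With a higher morality weight the demanding option would win instead, which is exactly the "one criterion among several" behavior the comment describes.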