
SilentCal comments on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does?

Post author: Princess_Stargirl 09 December 2014 08:32AM 7 points


Comment author: SilentCal 09 December 2014 06:05:24PM 22 points

My view, and a lot of other people here seem to be getting at this too, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a result humans can realistically follow. Some philosophers have worked on 'satisficing consequentialism', a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.

There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
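
A minimal sketch of the two real-to-bool conversions named above (an editorial illustration, not part of the original comment; the actions, utilities, and threshold are all invented):

```python
# Two common ways to force a real-valued morality into a boolean one.
# Action names, utilities, and the threshold are made up for illustration.

utilities = {"donate_everything": 100, "donate_10_percent": 80, "kick_puppy": -9999}

def maximizing_permissible(action, utilities):
    """Maximizing: only the best option counts as 'permissible'."""
    return utilities[action] == max(utilities.values())

def satisficing_permissible(action, utilities, threshold=50):
    """Satisficing: anything 'good enough' counts as 'permissible'."""
    return utilities[action] >= threshold

print(maximizing_permissible("donate_10_percent", utilities))   # False: the demandingness problem
print(satisficing_permissible("donate_10_percent", utilities))  # True
```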

Comment author: Dagon 09 December 2014 07:52:46PM 4 points

Huh? So your view of a moral theory is that it ranks your options, but there's no implication that a moral agent should pick the best known option?

What purpose does such a theory serve? Why would you classify it as a "moral theory" rather than "an interesting numeric exercise"?

Comment author: SilentCal 09 December 2014 09:03:46PM 5 points

There's a sort of Tortoise-and-Achilles-type problem (after Lewis Carroll's 'What the Tortoise Said to Achilles') in interpreting the word 'should', where you have to somehow get from "I should do X" to actually doing X; that is, in converting the outputs of the moral theory into actions (or influence on actions). We're used to doing this with boolean-valued morality like deontology, so there the problem isn't intuitively problematic.

Asking utilitarianism to answer "Should I do X?" is an attempt to reuse our accustomed solution to the above problem. The trouble is that by doing so you're lossily turning utilitarianism's outputs into booleans, and every attempt to do this runs into problems (usually demandingness). The real answer is to solve the analogous problem with numbers instead of booleans, to somehow convert "Utility of X is 100; Utility of Y is 80; Utility of Z is -9999" into being influenced towards X rather than Y and definitely not doing Z.
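
One way to cash out "influence" rather than obligation is to act stochastically, with probabilities that rise with utility. A sketch using the comment's numbers (the softmax and its 'temperature' knob are one illustrative choice, not the only possible conversion):

```python
import math
import random

# The comment's numbers; 'temperature' (how strongly utility sways the
# agent) is an invented parameter for illustration.
utilities = {"X": 100, "Y": 80, "Z": -9999}

def influenced_choice(utilities, temperature=10.0):
    """Sample an action with probability increasing in its utility (a softmax)."""
    actions = list(utilities)
    weights = [math.exp(utilities[a] / temperature) for a in actions]
    return random.choices(actions, weights=weights)[0]

# With these numbers, X is chosen roughly 88% of the time, Y roughly 12%,
# and Z essentially never: influence toward X, with no flat 'forbidden' label.
print(influenced_choice(utilities))
```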

The purpose of the theory is that it ranks your options, and you're more likely to do higher-ranked options than you otherwise would be. It's classified as a moral theory because it causes you to help others and promote the overall good more than self-interest would otherwise lead you to. It just doesn't do so in a way that's easily explained in the wrong language.

Comment author: peterward 11 December 2014 03:41:23AM -1 points

Isn't a "boolean" right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely, doesn't it promise to select for us the right choice among a collection of alternatives? If the best outcomes can be ranked, by global goodness or whatever standard, then logically there is a winner or set of winners from which one may, without guilt, indifferently choose.

Comment author: SilentCal 12 December 2014 06:41:43PM 2 points

From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you're going to be. A utility function answers the first part. If you're a committed maximizer, you have your answer to the second part. Most of us aren't, so we have a tough decision there that the utility function doesn't answer.
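
A sketch of that two-part decomposition (editorial illustration with invented actions and numbers; `my_standard` stands in for the second decision, the one the utility function doesn't answer):

```python
# Part one: a utility function says how good each outcome is (invented numbers).
utilities = {"volunteer": 90, "donate": 70, "do_nothing": 10}

# Part two: 'how good you're going to be' is a separate input you supply.
def acceptable_actions(utilities, my_standard):
    return [a for a, u in utilities.items() if u >= my_standard]

print(acceptable_actions(utilities, max(utilities.values())))  # ['volunteer']: the committed maximizer
print(acceptable_actions(utilities, 60))                       # ['volunteer', 'donate']: a gentler standard
```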