RichardKennaway comments on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does?

Post author: Princess_Stargirl 09 December 2014 08:32AM (7 points)

Comment author: SilentCal 09 December 2014 06:05:24PM (22 points)

My view, and a lot of other people here seem to be getting at this too, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a realistically human-followable result. Some philosophers have worked on 'satisficing consequentialism', a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.

There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
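
To make the real-to-bool conversion concrete, here is a minimal Python sketch (the actions, utility numbers, and satisficing threshold below are all hypothetical, not anything from the comment): the maximizing rule labels only the single best action permissible, while a satisficing rule labels anything above a cutoff permissible.

```python
# Minimal sketch of two real-to-bool conversions; all figures are hypothetical.
actions = {
    "donate_everything": 100.0,   # maximal utility, extreme self-sacrifice
    "donate_10_percent": 90.0,
    "donate_nothing":    50.0,
}

def maximizing_permissible(action):
    """'True' only for the utility-maximizing action; everything else is 'false'."""
    return actions[action] == max(actions.values())

def satisficing_permissible(action, threshold=85.0):
    """'True' for any action whose utility clears a (hypothetical) threshold."""
    return actions[action] >= threshold

print(maximizing_permissible("donate_10_percent"))   # False: not the maximum
print(satisficing_permissible("donate_10_percent"))  # True: good enough
```

A fully real-valued morality, on the comment's proposal, would skip the conversion entirely and work with the scores themselves.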

Comment author: RichardKennaway 14 December 2014 04:57:22PM (2 points)

> The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a realistically human-followable result.

You have to get decisions out of the moral theory. A decision is a choice of a single thing to do out of all the possibilities for action. For any theory that rates possible actions by a real-valued measure, maximising that measure is the result the theory prescribes.
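
As a minimal sketch of that decision rule (the action names and ratings are hypothetical):

```python
# A real-valued theory rates every available action and prescribes
# the single action that maximises the measure (the argmax).
ratings = {"act_a": 0.2, "act_b": 0.9, "act_c": 0.5}  # hypothetical ratings
decision = max(ratings, key=ratings.get)
print(decision)  # "act_b"
```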

If that does not give a realistically human-followable result, then either you give up the idea of measuring decisions by utility, or you take account of people's limitations in defining the utility function. However, if you believe your utility function should be a collective measure of the well-being of all sentient individuals, of which there are at least 7 billion (that is, if you do not merely have a utility function, but are a utilitarian), then you would have to rate your personal quality of life vastly higher than anyone else's for it to make a dent in the rigours to which the theory calls you.
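
A rough illustration of that arithmetic (every number below is made up): under equal weights, a tiny per-person gain multiplied across billions swamps a large personal cost, so the self-weighting needed to overturn the prescription scales with the population.

```python
# A minimal sketch of the weighting arithmetic; all figures are hypothetical.
N = 7_000_000_000           # sentient individuals in the collective measure
my_cost = 10.0              # utility I lose by making some sacrifice
gain_per_person = 1e-8      # tiny utility each other individual gains

# Equal weights: the sacrifice is prescribed whenever the total gain
# to everyone else exceeds my personal cost.
total_gain = N * gain_per_person
print(total_gain > my_cost)     # True: 70.0 > 10.0, so the theory demands it

# To block that prescription, my own weight w would have to satisfy
# w * my_cost > total_gain.
w_needed = total_gain / my_cost
print(w_needed)                 # 7.0 here, and it grows with N and the gains
```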