TheOtherDave comments on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does?

Post author: Princess_Stargirl 09 December 2014 08:32AM 7 points


Comment author: SilentCal 09 December 2014 06:05:24PM 22 points

My view, and a lot of other people here seem to also be getting at this, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a realistically human-followable result. Some philosophers have worked on 'satisficing consequentialism', which is a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.

There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
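
A minimal sketch of the three readings described above, with invented option names and utility numbers (none of this is from the thread): the maximizing conversion labels only the best option 'permissible', the satisficing conversion labels anything above a chosen threshold 'permissible', and the real-valued reading never produces a label at all, only a ranking.

    # Hypothetical options and utilities, purely for illustration.
    utility = {
        "donate_everything": 100.0,   # maximal world-improvement, extreme self-sacrifice
        "donate_10_percent": 60.0,
        "do_nothing": 0.0,
    }

    def permissible_maximizing(option):
        """Boolean morality via maximization: only the best option is 'permissible'."""
        return utility[option] == max(utility.values())

    def permissible_satisficing(option, threshold=50.0):
        """Boolean morality via satisficing: anything clearing a threshold is 'permissible'."""
        return utility[option] >= threshold

    def rank_options():
        """Real-valued morality: no permissible/impermissible label, just a ranking."""
        return sorted(utility, key=utility.get, reverse=True)

    print(permissible_maximizing("donate_10_percent"))   # False: only the maximum counts
    print(permissible_satisficing("donate_10_percent"))  # True: it clears the threshold
    print(rank_options())  # ['donate_everything', 'donate_10_percent', 'do_nothing']

The threshold in the satisficing version is exactly the "better real-to-bool conversion" that the comment says satisficing consequentialism is trying to design.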

Comment author: Dagon 09 December 2014 07:52:46PM 4 points

Huh? So your view of a moral theory is that it ranks your options, but there's no implication that a moral agent should pick the best known option?

What purpose does such a theory serve? Why would you classify it as a "moral theory" rather than "an interesting numeric exercise"?

Comment author: TheOtherDave 09 December 2014 10:57:49PM 2 points

Well, for one thing, if I'm unwilling to sign up for more than N personal inconvenience in exchange for improving the world, such a theory lets me take the set of interventions that cost me N or less inconvenience and rank them by how much they improve the world, and pick the best one. (Or, in practice, to approximate that as well as I can.) Without such a theory, I can't do that. That sure does sound like the sort of work I'd want a moral theory to do.
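
A minimal sketch of that procedure, with invented interventions and made-up cost/benefit numbers: filter out anything costing more than N personal inconvenience, then pick whatever remains that improves the world most.

    # Hypothetical (name, world_improvement, personal_inconvenience) tuples.
    interventions = [
        ("give_a_kidney", 90.0, 80.0),
        ("donate_10_percent", 60.0, 20.0),
        ("volunteer_weekly", 40.0, 15.0),
    ]

    def best_within_budget(options, n):
        """Among options costing at most n inconvenience, pick the most world-improving."""
        affordable = [o for o in options if o[2] <= n]
        return max(affordable, key=lambda o: o[1], default=None)

    print(best_within_budget(interventions, n=25.0))  # ('donate_10_percent', 60.0, 20.0)

The real-valued ranking is doing the work here; no permissible/impermissible labels are needed.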

Comment author: Dagon 10 December 2014 08:09:02AM -1 points

Okay, but it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs. What do you use to decide that world utility would not be improved by N+1 personal inconvenience, or to decide that you don't care about the world as much as yourself?

Comment author: TheOtherDave 10 December 2014 04:09:03PM 1 point

I don't need a theory to decide I'm unwilling to sign up for more than N personal inconvenience; I can observe it as an experimental result.

"it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs"

Yes, both of those seem fairly likely.

It sounds like you're suggesting that only a complete moral theory serves any purpose, and that I am in reality internally consistent... have I understood you correctly? If so, can you say more about why you believe those things?