TheOtherDave comments on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does? - Less Wrong Discussion
My view, and a lot of other people here seem to be getting at this too, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a result actual humans can realistically follow. Some philosophers have worked on 'satisficing consequentialism', a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.
There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem. The sketch below contrasts the maximizing and satisficing conversions.
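To make the real-to-bool point concrete, here is a minimal sketch; the options and utility numbers are invented for illustration, not anything proposed in the thread. A maximizing rule marks only the top-scoring option as permissible, while a satisficing rule marks everything at or above a chosen threshold as permissible.

```python
# A minimal sketch of the "real-to-bool conversion" under discussion; the
# options and utility scores here are made up for illustration.

options = {
    "donate_everything": 100.0,
    "donate_10_percent": 60.0,
    "volunteer_weekly": 40.0,
    "do_nothing": 0.0,
}

def maximizing(utilities):
    """Maximizing rule: only the single best option counts as permissible."""
    best = max(utilities.values())
    return {option: score == best for option, score in utilities.items()}

def satisficing(utilities, threshold):
    """Satisficing rule: anything scoring at least `threshold` is permissible."""
    return {option: score >= threshold for option, score in utilities.items()}

print(maximizing(options))         # only 'donate_everything' maps to True
print(satisficing(options, 40.0))  # True for everything down to 'volunteer_weekly'
```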
Huh? So your view of a moral theory is that it ranks your options, but there's no implication that a moral agent should pick the best known option?
What purpose does such a theory serve? Why would you classify it as a "moral theory" rather than "an interesting numeric exercise"?
Well, for one thing, if I'm unwilling to sign up for more than N personal inconvenience in exchange for improving the world, such a theory lets me take the set of interventions that cost me N or less inconvenience, rank them by how much they improve the world, and pick the best one. (Or, in practice, approximate that as well as I can.) Without such a theory, I can't do that. That sure does sound like the sort of work I'd want a moral theory to do.
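As a rough sketch of that procedure (the interventions, costs, and improvement numbers below are made up for illustration): filter the options by a personal-inconvenience budget N, then take whichever remaining option improves the world the most.

```python
# A minimal sketch of ranking under a personal-cost budget N: keep only the
# interventions whose cost to me is at most N, then pick the one with the
# largest world improvement. All names and numbers are hypothetical.

interventions = [
    ("give_a_kidney",     {"cost_to_me": 9.0, "world_improvement": 8.0}),
    ("donate_10_percent", {"cost_to_me": 3.0, "world_improvement": 6.0}),
    ("donate_1_percent",  {"cost_to_me": 0.5, "world_improvement": 2.0}),
]

def best_within_budget(interventions, n):
    """Return the name of the highest-impact intervention costing at most n."""
    affordable = [(name, v) for name, v in interventions if v["cost_to_me"] <= n]
    if not affordable:
        return None
    return max(affordable, key=lambda item: item[1]["world_improvement"])[0]

print(best_within_budget(interventions, n=4.0))  # -> 'donate_10_percent'
```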
Okay, but it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs. What do you use to decide that world utility would not be improved by N+1 personal inconvenience, or to decide that you don't care about the world as much as yourself?
I don't need a theory to decide I'm unwilling to sign up for more than N personal inconvenience; I can observe it as an experimental result.
Yes, both of those seem fairly likely.
It sounds like you're suggesting that only a complete moral theory serves any purpose, and that I am in reality internally consistent (that is, that my stopping at N must reflect what I actually believe)... have I understood you correctly? If so, can you say more about why you believe those things?