ChaosMote comments on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does? - Less Wrong Discussion
My view, and a lot of other people here seem to be getting at this too, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a result a real human can actually follow. Some philosophers have worked on 'satisficing consequentialism', a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality. (I'll sketch both conversions below.)
There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
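To make the contrast concrete, here's a minimal sketch, with invented option names, utility numbers, and threshold, of the 'maximum is true' conversion (which is what generates the demandingness objection) next to a satisficing conversion that permits anything above a cutoff:

```python
# A minimal sketch of the two real-to-bool conversions described above.
# The options, utilities, and threshold are invented for illustration.

def maximizing_permissible(options, utility):
    """The 'label the maximum true' rule: only the best option(s) pass."""
    best = max(utility(o) for o in options)
    return {o: utility(o) == best for o in options}

def satisficing_permissible(options, utility, threshold):
    """A satisficing rule: anything scoring at least `threshold` passes."""
    return {o: utility(o) >= threshold for o in options}

utilities = {"donate everything": 100, "donate 10%": 60, "donate nothing": 0}
options = list(utilities)

print(maximizing_permissible(options, utilities.get))
# {'donate everything': True, 'donate 10%': False, 'donate nothing': False}
print(satisficing_permissible(options, utilities.get, threshold=50))
# {'donate everything': True, 'donate 10%': True, 'donate nothing': False}
```

Under the maximizing rule, everything short of total self-sacrifice is impermissible; the satisficing rule relaxes that, but choosing the cutoff is exactly the hard part the satisficing-consequentialism literature wrestles with.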
Huh? So your view of a moral theory is that it ranks your options, but there's no implication that a moral agent should pick the best known option?
What purpose does such a theory serve? Why would you classify it as a "moral theory" rather than "an interesting numeric exercise"?
Such a moral theory can be used as one criterion in a multi-criterion decision system. This is useful because, in general, people prefer being more moral to being less moral, but not to the exclusion of everything else. For example, one might genuinely want to improve the world and yet be unwilling to make life-altering changes (like donating all but the bare minimum to charity) to further that goal.
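As a toy illustration of that use, here's a sketch assuming a simple weighted-sum aggregation; the weights and scores are invented, and nothing here is a standard formalization. The moral score is just one term, so the chosen option comes out fairly moral without being the moral maximum:

```python
# A sketch of a multi-criterion decision system in which a real-valued
# moral score is one criterion among others. All numbers are made up.

def decide(options, criteria):
    """Pick the option with the highest weighted sum over all criteria."""
    def total(option):
        return sum(weight * score(option) for weight, score in criteria)
    return max(options, key=total)

# Hypothetical 0-1 scores for three levels of charitable giving.
moral_value = {"donate everything": 1.0, "donate 10%": 0.6, "donate nothing": 0.0}
personal_wellbeing = {"donate everything": 0.1, "donate 10%": 0.8, "donate nothing": 0.9}

criteria = [
    (1.0, moral_value.get),         # morality matters...
    (2.0, personal_wellbeing.get),  # ...but not to the exclusion of everything else
]

print(decide(moral_value, criteria))  # -> 'donate 10%'
```

The real-valued moral theory still does work here: it's what makes 'donate 10%' beat 'donate nothing'. It just isn't asked to be the whole decision procedure.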