TheAncientGeek comments on Does utilitarianism "require" extreme self sacrifice? If not why do people commonly say it does? - Less Wrong Discussion

7 Post author: Princess_Stargirl 09 December 2014 08:32AM

Comment author: buybuydandavis 14 December 2014 08:04:25PM -1 points

So we could look at this as Moralos having a ranking plus an 'obligation rule'

There could be Moralos like that, but if we're talking about the Anglo-Saxon tradition, the obligation ranking is different from the overall personal preference ranking. What you owe is different from what I would prefer.

The thought that disturbs me is that the Moralos really only have one ranking: what they prefer. This is what I find so totalitarian about Utilitarianism.

Justifying an obligation rule seems philosophically tough...

Step back from the magic words. We have preferences. We take action based on those preferences. We reward/punish/coerce people based on whether they act in accord with those preferences, or act to ideologically support them; we also reward/punish/coerce based on how they reward/punish/coerce on the first two, and so on through higher and higher orders of evaluation.

So what is obligation? I think it's what we call our willingness to coerce/punish, up through the higher orders of evaluation, and that's similarly the core of what makes something a moral preference.

If you're not going to punish/coerce, and only reward, then that preference looks more like the preference for beautiful people.

Is this truly the "Utilitarianism" proposed here? Just rewarding, and not punishing or coercing?

I'd feel less creeped out by Utilitarianism if that were so.

Comment author: SilentCal 15 December 2014 06:23:31PM 1 point

Let me zoom out a bit to explain where I'm coming from.

I'm not fully satisfied with any metaethics, and I feel like I'm making a not-so-well-justified leap of faith to believe in any morality. Given that that's the case, I'd like to at least minimize the leap of faith. I'd rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation.

So my vision of the utilitarian project is essentially reductionist: to take the preference ranking as the only magical component*, and build the rest using that plus ordinary is-facts. So if we define 'obligations' as 'things we're willing to coerce you to do', we can decide whether X is an obligation by asking "Do we prefer a society that coerces X, or one that doesn't?"

*Or maybe even start with selfish preferences and then apply a contractarian argument to get the impartial utility function, or something.

Comment author: TheAncientGeek 25 April 2015 10:49:51AM 0 points

That's almost rule consequentialism.