Chris Hallquist wrote the following in an article (if you know the article, please, please don't bring it up; I don't want to discuss the article in general):


"For example, utilitarianism apparently endorses killing a single innocent person and harvesting their organs if it will save five other people. It also appears to imply that donating all your money to charity beyond what you need to survive isn’t just admirable but morally obligatory. "


The non-bold part is not what is confusing me. But where does the "obligatory" part come in? I don't really see how it's obvious what, if any, ethical obligations utilitarianism implies. Given a set of basic assumptions, utilitarianism lets you argue whether one action is more moral than another. But I don't see how it's obvious which, if any, moral benchmarks utilitarianism sets for "obligatory." I can see how certain frameworks on top of utilitarianism imply certain moral requirements. But I do not see how the bolded quote is a criticism of the basic theory of utilitarianism.


However, this criticism comes up all the time. Honestly, the best explanation I could come up with was that people were being unfair to utilitarianism and not thinking through their statements. But the above quote is by HallQ, who is intelligent and thoughtful. So now I am genuinely very curious.


Do you think utilitarianism really requires such extreme self-sacrifice, and if so, why? And if it does not require this, why do so many people say it does? I am very confused and would appreciate help working this out.


edit:


I am having trouble asking this question clearly, since utilitarianism is probably best thought of as a cluster of beliefs, so it's not clear what asking "does utilitarianism imply X" actually means. Still, I made this post because I am confused. Many thoughtful people identify as utilitarian (for example, Ozy and theunitofcaring) yet do not think people have extreme obligations. However, I can think of examples where people do not seem to understand the implications of their ethical frameworks. For example, many Jewish people endorse the message of the following story:



Rabbi Hillel was asked to explain the Torah while standing on one foot and responded, "What is hateful to you, do not do to your neighbor. That is the whole Torah; the rest is the explanation of this--go and study it!"


The story is presumably apocryphal, but it is repeated all the time by Jewish people. However, it's hard to see how the story makes even a semblance of sense. The Torah includes huge amounts of material that violates the "Golden Rule" very badly. So people who think this story gives even a moderately accurate picture of the Torah's message are mistaken, imo.


My view, and a lot of other people here seem to also be getting at this, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a realistically human-followable result. Some philosophers have worked on 'satisficing consequentialism', which is a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.

There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
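For concreteness, here is a toy sketch of the two real-to-bool conversions mentioned above: labelling only the maximum as permissible, versus a satisficing threshold. The actions, numbers, and threshold are all made up for illustration; nothing in the comment commits to them.

```python
def maximizing_permissible(utilities):
    # The conversion that makes utilitarianism maximally demanding:
    # only the best-scoring action counts as permissible.
    best = max(utilities.values())
    return {action: score == best for action, score in utilities.items()}

def satisficing_permissible(utilities, threshold):
    # A 'satisficing' conversion: anything at or above an (admittedly
    # arbitrary) threshold counts as permissible.
    return {action: score >= threshold for action, score in utilities.items()}

acts = {"donate everything": 100, "donate 10%": 60, "donate nothing": 20, "steal": -50}
print(maximizing_permissible(acts))       # only "donate everything" is True
print(satisficing_permissible(acts, 20))  # everything except "steal" is True
```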

7[anonymous]9y
A useful word here is "supererogation", but this still implies that there's a baseline level of duty, which itself implies that it's possible even in principle to calculate a baseline level of duty. There may be cultural reasons for the absence of the concept: some Catholics have said that Protestantism did away with supererogation entirely. My impression is that that's a one-line summary of something much more complex (though possibly with potential toward the realization of the one-line summary), but I don't know much about it.
7JenniferRM9y
Supererogation was part of the moral framework that justified indulgences. The idea was that the saints and the church did lots of stuff that was above and beyond the necessary amounts of good (and God presumably has infinitely deep pockets if you're allowed to tap Him for extra), and so they had "credit left over" that could be exchanged for money from rich sinners. The Protestants generally seem to have considered indulgences to be part of a repugnant market, and in some cases made explicit that the related concept of supererogation itself was a problem. In Mary at the Foot of the Cross 8: Coredemption as Key to a Correct Understanding of Redemption, on page 389, there is a quick summary of a Lutheran position, for example:

The setting of the "zero point" might in some sense be arbitrary... a matter of mere framing. You could frame it as people already all being great, but with the option to be better. You could frame it as having some natural zero around the point of not actively hurting people, with any minor charity counting as a bonus. In theory you could frame it as everyone being terrible monsters with a minor ability to make up a tiny part of their inevitable moral debt. If it is really "just framing" then presumably we could fall back to sociological/psychological empiricism, and see which framing leads to the best outcomes for individuals and society.

On the other hand, the location of the zero level can be absolutely critical if we're trying to integrate over a function from now to infinity and maximize the area under the curve. SisterY's essay on suicide and "truncated utility functions" relies on "being dead" having precisely zero value for an individual, and some ways of being alive having a negative value... in these cases the model suggests that suicide and/or risk taking can make a weird kind of selfish sense.

If you loop back around to the indulgence angle, one reading might be that if someone sins then they are no longer perfectly right with their
3Sarunas9y
Please, do tell, that sounds very interesting.

It seems to me that systems that put the "zero point" very high rely a lot on something like extrinsic motivation, whereas systems that put the "zero point" very low rely mostly on intrinsic motivation. In addition to that, if you have 1000 euros and you desperately need to have 2000, and you play a game where you have to bet on the result of a coin toss, then you maximize your probability of ever reaching that sum by going all in. Whereas if you have 1000 and need to stay above 500, then you place your bets as conservatively as possible. Perhaps putting zero very high encourages "all in" moral gambles, encouraging unusual acts that might have high variance of moral value (if they succeed in achieving high moral value, they are called heroic acts)? Perhaps putting zero very low encourages playing conservatively, doing a lot of small acts instead of one big heroic act.
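A rough Monte Carlo sketch of that betting intuition, under two assumptions not in the comment: a fair coin (win probability 0.5) and a fixed horizon of 200 bets (with unlimited time and a fair coin, the strategies would not differ on the first goal). The exact numbers are only illustrative.

```python
import random

def simulate(bet_size, start=1000, target=2000, floor=500,
             rounds=200, p_win=0.5, trials=20000):
    """Returns (P(ever reach target), P(never drop below floor))
    over `trials` runs of at most `rounds` coin-toss bets."""
    reached, stayed_above = 0, 0
    for _ in range(trials):
        bankroll, hit, dipped = start, False, False
        for _ in range(rounds):
            if bankroll <= 0:
                break
            bet = min(bet_size(bankroll), bankroll)
            bankroll += bet if random.random() < p_win else -bet
            hit = hit or bankroll >= target
            dipped = dipped or bankroll < floor
        reached += hit
        stayed_above += not dipped
    return reached / trials, stayed_above / trials

bold = simulate(lambda bankroll: bankroll)  # go all in every time
timid = simulate(lambda bankroll: 10)       # small fixed bets
print("bold :", bold)   # roughly (0.50, 0.00): good at reaching 2000, bad at staying above 500
print("timid:", timid)  # roughly (0.00, 1.00): the reverse
```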
0SilentCal9y
The word may have fallen out of favor, but I think the concept of "good, but not required" is alive and well in almost all folk morality. It's troublesome for (non-divine-command) philosophical approaches because you have to justify the line between 'obligation' and 'supererogation' somehow. I suspect the concept might sort of map onto a contractarian approach by defining 'obligatory' as 'society should sanction you for not doing it' and 'supererogatory' as 'good but not obligatory', though that raises as many questions as it answers.
4Dagon9y
Huh? So your view of a moral theory is that it ranks your options, but there's no implication that a moral agent should pick the best known option? What purpose does such a theory serve? Why would you classify it as a "moral theory" rather than "an interesting numeric exercise"?
7SilentCal9y
There's a sort of Tortoise-Achilles type problem in interpreting the word 'should', where you have to somehow get from "I should do X" to doing X; that is, in converting the outputs of the moral theory into actions (or influence on actions). We're used to doing this with boolean-valued morality like deontology, so the problem isn't intuitively problematic.

Asking utilitarianism to answer "Should I do X?" is an attempt to reuse our accustomed solution to the above problem. The trouble is that by doing so you're lossily turning utilitarianism's outputs into booleans, and every attempt to do this runs into problems (usually demandingness). The real answer is to solve the analogous problem with numbers instead of booleans, to somehow convert "Utility of X is 100; Utility of Y is 80; Utility of Z is -9999" into being influenced towards X rather than Y and definitely not doing Z.

The purpose of the theory is that it ranks your options, and you're more likely to do higher-ranked options than you otherwise would be. It's classified as a moral theory because it causes you to help others and promote the overall good more than self-interest would otherwise lead you to. It just doesn't do so in a way that's easily explained in the wrong language.
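One made-up way to cash out "being influenced towards X" without a boolean is to choose probabilistically, with probability increasing in utility (a softmax rule). This is my illustration, not anything the comment specifies; the temperature parameter controls how strongly the numbers influence the choice.

```python
import math
import random

def softmax_choice(utilities, temperature=20.0):
    # Higher-utility options are chosen more often; very negative
    # options are effectively never chosen, but nothing is "obligatory".
    names = list(utilities)
    weights = [math.exp(utilities[n] / temperature) for n in names]
    r = random.uniform(0, sum(weights))
    cumulative = 0.0
    for name, w in zip(names, weights):
        cumulative += w
        if r <= cumulative:
            return name
    return names[-1]

options = {"X": 100, "Y": 80, "Z": -9999}
print(softmax_choice(options))  # usually "X", sometimes "Y", essentially never "Z"
```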
-1peterward9y
Isn't a "boolean" right/wrong answer exactly what utilitarianism promises in the marketing literature? Or, more precisely doesn't it promise to select for us the right choice among collection of alternatives? If the best outcomes can be ranked--by global goodness, or whatever standard--then logically there is a winner or set of winners which one may, without guilt, indifferently choose from.
3SilentCal9y
From a utilitarian perspective, you can break an ethical decision problem down into two parts: deciding which outcomes are how good, and deciding how good you're going to be. A utility function answers the first part. If you're a committed maximizer, you have your answer to the second part. Most of us aren't, so we have a tough decision there that the utility function doesn't answer.
3TheOtherDave9y
Well, for one thing, if I'm unwilling to sign up for more than N personal inconvenience in exchange for improving the world, such a theory lets me take the set of interventions that cost me N or less inconvenience and rank them by how much they improve the world, and pick the best one. (Or, in practice, to approximate that as well as I can.) Without such a theory, I can't do that. That sure does sound like the sort of work I'd want a moral theory to do.
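A minimal sketch of the procedure TheOtherDave describes: filter interventions by a personal-inconvenience cap N, then rank the survivors by how much they improve the world. The intervention names, inconvenience scores, and benefit numbers are all invented for illustration.

```python
interventions = [
    {"name": "donate 10% of income", "inconvenience": 4, "world_benefit": 50},
    {"name": "donate a kidney",      "inconvenience": 9, "world_benefit": 80},
    {"name": "volunteer weekly",     "inconvenience": 3, "world_benefit": 20},
    {"name": "skip one coffee",      "inconvenience": 1, "world_benefit": 2},
]

N = 5  # the most personal inconvenience I'm willing to sign up for

affordable = [x for x in interventions if x["inconvenience"] <= N]
best = max(affordable, key=lambda x: x["world_benefit"])
print(best["name"])  # the biggest improvement available within my limit
```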
-1Dagon9y
Okay, but it sounds like either the theory is quite incomplete, or your limit of N is counter to your moral beliefs. What do you use to decide that world utility would not be improved by N+1 personal inconvenience, or to decide that you don't care about the world as much as yourself?
2TheOtherDave9y
I don't need a theory to decide I'm unwilling to sign up for more than N personal inconvenience; I can observe it as an experimental result. Yes, both of those seem fairly likely. It sounds like you're suggesting that only a complete moral theory serves any purpose, and that I am in reality internally consistent... have I understood you correctly? If so, can you say more about why you believe those things?
2jefftk9y
An agent should pick the best options they can get themselves to pick. In practice these will not be the ones that maximize utility as they understand it, but they will be ones with higher utility than if they just did whatever they felt like. And, more strongly, this gives higher utility than if they tried to do as many good things as possible without prioritizing the really important ones.
0ChaosMote9y
Such a moral theory can be used as one of the criteria in a multi-criterion decision system. This is useful because in general people prefer being more moral to being less moral, but not to the exclusion of everything else. For example, one might genuinely want to improve the world and yet be unwilling to make life-altering changes (like donating all but the bare minimum to charity) to further this goal.
3Richard_Kennaway9y
You have to get decisions out of the moral theory. A decision is a choice of a single thing to do out of all the possibilities for action. For any theory that rates possible actions by a real-valued measure, maximising that measure is the result the theory prescribes. If that does not give a realistically human-followable result, then either you give up the idea of measuring decisions by utility, or you take account of people's limitations in defining the utility function. However, if you believe your utility function should be a collective measure of the well-being of all sentient individuals (that is, if you do not merely have a utility function, but are a utilitarian), of which there are at least 7 billion, you would have to rate your personal quality of life vastly higher than anyone else's to make a dent in the rigours to which it calls you.
2Larks9y
I'm not sure you can really say it's a 'misuse' if it's how Bentham used it. He is essentially the founder of modern utilitarianism. If any use is a misuse, it is scalar utilitarianism. (I do not think that is a misuse either).
0SilentCal9y
Fair point... I think the way I see it is that Bentham discovered the core concept of utilitarianism and didn't build quite the right structure around it. My intention is to make ethical/metaethical claims, not historical/semantic ones... does that make sense? (It's true I haven't offered a detailed counterargument to anyone who actually supports the maximizing version; I'm assuming in this discussion that its demandingness disqualifies it)
0ChaosMote9y
It might be useful to distinguish between a "moral theory", which can be used to compare the morality of different actions, and a "moral standard", which is a boolean rule used to determine what is morally 'permissible' and what is morally 'impermissible'. I think part of the point your post makes is that people really want a moral standard, not a moral theory. I think that makes sense; with a moral standard, you have a course of action guaranteed to be "good", whereas a moral theory makes no such guarantee. Furthermore, I suspect that the commonly accepted societal standard is "you should be as moral as possible", which means that a moral theory is translated into a moral standard by treating the most moral option as "permissible" and everything else as "impermissible". This is exactly what occurs in the text quoted by the OP; it takes the utilitarian moral theory and projects it onto a standard according to which only the most moral option is permissible, making it obligatory.

It basically depends on whether you're a maximising utilitarian or a scalar utilitarian. The former says that you should do the best thing. The latter is less harsh in that it just says that better actions are better, without saying that you necessarily have to do the best one.

3Gondolinian9y
Thanks for the link. I like your terminology better than mine. :)

The main difference with a utility-function-based approach is that there is no concept of "sufficient effort". Every action gets an (expected) utility attached to it. Sending £10 to an efficient charity is X utilons above not doing so; but selling everything you own to donate to the charity is (normally) even higher.

So I think the criticism is accurate, in that humans almost never achieve perfection following utility; there's always room for more effort, and there's no distinction between actions that are "allowed" versus "req... (read more)

I thought about this question a while ago and have been meaning to write about it sometime. This is a good opportunity.

Terminology: Other commenters are pointing out that there are differing definitions of the word "utilitarianism". I think it is clear that the article in question is talking about utilitarianism as an ethical theory (or rather, a family of ethical theories). As such, utilitarianism is a form of consequentialism, the view that doing "the right thing" is what produces the best state of affairs. Utilitarianism is different... (read more)

I'm seeing fundamental disagreement on what "moral" means.

In the Anglo Saxon tradition, what is moral is what you should or ought to do, where should and ought both entail a debt one has the obligation to pay. Note that this doesn't make morality binary; actions are more or less moral depending on how much of the debt you're paying off. I wouldn't be surprised if this varied a lot by culture, and I invite people to detail the similarities and differences in other cultures they are familiar with.

What I hear from some people here is Utilitarian... (read more)

1SilentCal9y
This makes sense... and the idea of 'praiseworthy/benevolent' shows that Moralos do have the concept of a full ranking. So we could look at this as Moralos having a ranking plus an 'obligation rule' that tells you how good an outcome you're obligated to achieve in a given situation, while Moralps don't accept such a rule and instead just play it by ear. Justifying an obligation rule seems philosophically tough... unless you justify it as a heuristic, in which case you get to think like a Moralp and act like a Moralo, and abandon your heuristic if it seems like it's breaking down. Taking Giving What We Can's 10% pledge is a good example of adopting such a heuristic.
0lmm9y
Maybe, but it's a very common moral intuition, so anything that purports to be a theory of human morality ought to explain it, or at least explain why we would misperceive that the distinction between obligatory and praiseworthy-but-non-obligatory actions exists.
0SilentCal9y
Is heuristic value not a sufficient explanation of the intuition?
0lmm9y
I don't see the heuristic value. We don't perceive people as being binarily e.g. either attractive or unattractive, friendly or unfriendly, reliable or unreliable; even though we often had to make snap judgements about these attributes, on matters of life and death, we still perceive them as being on a sliding scale. Why would moral vs. immoral be different?
0SilentCal9y
It'd be fairer to compare to other properties of actions rather than properties of people; I think moral vs. immoral is also a sliding scale when applied to people. That said, we do seem more attached to the binary of moral vs. immoral actions than, say, wise vs. unwise. My first guess is that this stems from a desire to orchestrate social responses to immoral action. From this hypothesis I predict that binary views of moral/immoral will be correlated with coordinated social responses to same.
0lmm9y
Interesting; that may be a real difference in our intuitions. My sense is that unless I'm deliberately paying attention I tend to think of people quite binarily as either decent people or bad people.
2SilentCal9y
Significantly more than you think of them binarily regarding those other categories? Then it is a real difference. My view of people is that there are a few saints and a few cancers, and a big decent majority in between who sometimes fall short of obligations and sometimes exceed them depending on the situation. The 'saint' and 'cancer' categories are very small. What do your 'good' and 'bad' categories look like, and what are their relative sizes?
0lmm9y
I think of a large population of "decent", who generically never do anything outright bad (I realise this is probably inaccurate, I'm talking about intuitions). There's some variation within that category in terms of how much outright good they do, but that's a lot less important. And then a smaller but substantial chunk, say 10%, of "bad" people, people who do outright bad things on occasion (and some variation in how frequently they do them, but again that's much less important).
-1buybuydandavis9y
There could be Moralos like that, but if we're talking about the Anglo-Saxon tradition, the obligation ranking is different than the overall personal preference ranking. What you owe is different than what I would prefer. The thought that disturbs me is that the Moralps really only have one ranking, what they prefer. This is what I find so totalitarian about Utilitarianism.

Step back from the magic words. We have preferences. We take action based on those preferences. We reward/punish/coerce people based on them acting in accord with those preferences, or acting to ideologically support them, or reward/punish/coerce based on how they reward/punish/coerce on the first two, and up through higher and higher orders of evaluation.

So what is obligation? I think it's what we call our willingness to coerce/punish, up through the higher orders of evaluation, and that's similarly the core of what makes something a moral preference. If you're not going to punish/coerce, and only reward, that preference looks more like the preference for beautiful people.

Is this truly the "Utilitarianism" proposed here? Just rewarding, and not punishing or coercing? I'd feel less creeped out by Utilitarianism if that were so.
1SilentCal9y
Let me zoom out a bit to explain where I'm coming from. I'm not fully satisfied with any metaethics, and I feel like I'm making a not-so-well-justified leap of faith to believe in any morality. Given that that's the case, I'd like to at least minimize the leap of faith. I'd rather have just a mysterious concept of preference than a mysterious concept of preference and a mysterious concept of obligation. So my vision of the utilitarian project is essentially reductionist: to take the preference ranking as the only magical component*, and build the rest using that plus ordinary is-facts. So if we define 'obligations' as 'things we're willing to coerce you to do', we can decide whether X is an obligation by asking "Do we prefer a society that coerces X, or one that doesn't?" *Or maybe even start with selfish preferences and then apply a contractarian argument to get the impartial utility function, or something.
1buybuydandavis9y
I don't think my concept of obligation is mysterious: social animals evolved to have all sorts of social preferences, and the mechanisms for enforcing those preferences, such as impulses toward reward/coercion/punishment. Being conceptual animals, those mechanisms are open to some conceptual programming. Also, those mechanisms need not be weighted identically in all people, so that they exhibit different moral behavior and preferences, like Moralps and Moralos.

I think you're making a good start in any project by first taking a reductionist view. What are we really talking about, when we're talking about morality? I think you should do that first, even if your project is the highly conceptually derivative one of sanctioning state power.

My project, such as it was, was an egoist project. OK, I don't have to be a slave to moral mumbo jumbo. What now? What's going on with morality? What I and some other egoists concluded was that we had social preferences too. We reward/punish/coerce as well. But starting with a consciousness that my social preferences are to be expected in a social animal, and are mine, to do with what I will, and you have yours, that are unlikely to be identical, leads to different conclusions and behaviors than people who take their social feelings and impulses as universal commands from the universe.
0SilentCal9y
Interesting, our differences are deeper than I expected! Do you feel you have a good grip on my foundations, or is there something I should expand on? Let me check my understanding of your foundations: You make decisions to satisfy your own preferences. Some of these might be 'social preferences', which might include e.g. a preference for fewer malaria deaths in the developing world, which might lead you to want to donate some of your income to charity. You do not admit any sense in which it would be 'better' to donate more of your income than you want to, except perhaps by admitting meta-preferences like "I would prefer if I had a stronger preference for fewer malaria deaths". When you say someone is obligated to do X, you mean that you would prefer that they be coerced to do X. (I hesitate to summarize it this way, though, because it means that if you say they're obligated and I say they aren't, we haven't actually contradicted each other). Is the above a correct description of your approach?
0buybuydandavis9y
It's not just me. This is my model of human moral activity. We're social animals with some built in social preferences, along with other built in preferences.

I could come up with a zillion different "betters" where that was the case, but that doesn't mean that I find it better overall according to my values.

That's too strong for some cases, but it was my mistake for saying it so categorically in the first place. I can think of a lot of things I consider interpersonal obligations where I wouldn't want coercion/violence used against them in retaliation. I will just assign you a few asshole points, and adjust my behavior accordingly, possibly including imposing costs on you out of spite.

That's the thing. The reality of our preferences is that they weren't designed to fit into boxes. Preferences are rich in structure, and your attempt to simplify them to one preference ranking to rule them all just won't adequately model what humans are, no matter how intellectually appealing.

We have lots of preference modalities, which have similarities and differences with moral preferences. It tends to be a matter of emphasis and weighting. For example, a lot of our status or beauty preferences function in some way like our moral preferences. Low status entails greater likelihood of punishment, low status rubs off by your failure to disapprove of low status, and both of those occur at higher orders as well - such as if you don't disapprove of someone who doesn't disapprove of low status.

In what people call moral concerns, I observe that higher order punishing/rewarding is more pronounced than for other preferences, such as food tastes. If you prefer mint ice cream, it generally won't be held against you, and most people would consider it weird to do so. If you have some disapproved of moral view, it is held against you, whether you engage in the act or not, and it is expected that it will be held against you.
0TheAncientGeek9y
That's almost rule consequentialism.
1mwengler9y
What buybuy said. Plus... Moralps are possibly hypocritical, but it could be that they are just wrong, claiming one preference but acting as if they have another. If I claim that I would never prefer a child to die so that I can buy a new car, and I then buy a new car instead of sending my money to feed starving children in wherever, then I am effectively making incorrect statements about my preferences, OR I am using the word preferences in a way that renders it uninteresting. Preferences are worth talking about precisely to the extent that they describe what people will actually do.

I suspect in the case of starving children and cars, my ACTUAL preference is much more sentimental and much less universal. If I came home one day and lying on my lawn was a starving child, I would very likely feed that child even if this food came from a store I was keeping to trade for a new car. But if this child is around the corner and out of my sight, then it's Tesla S time!

So Moralps are possibly hypocritical, but certainly wrong at describing their own preferences, IF we insist that preferences are things that dictate our volition.
2Princess_Stargirl9y
Utilitarianism talks about which actions are more moral. It doesn't talk about which actions a person actually "prefers." I think it's more moral to donate 300 dollars to charity than to take myself and two friends out for a holiday dinner. Yet I have reservations for Dec 28th. The fact that I am actually spending the money on my friends and myself doesn't mean I think this is the most moral thing I could be doing. I have never claimed people are required to optimize their actions in the pursuit of improving the world. So why would it be hypocritical for me not to try to maximize world utility?
2mwengler9y
So you are saying: "the right thing to do is donate $300 to charity, but I don't see why I should do that just because I think it is the right thing to do." Well, once we start talking about the right thing to do without attaching any sense of obligation to doing that thing, I'd like to know what the point of talking about morality is at all. It seems it just becomes another way to say "yay donating $300!" and has no more meaning than that. Under what I thought were the accepted definitions of the words, saying the moral thing to do is to donate $300 was the same as saying I ought to donate $300. Under this definition, discussions of what was moral and what was not really carried more weight than just saying "yay donating $300!"
3Princess_Stargirl9y
I didn't say it was "the right thing" to do. I said it was more moral than what I am actually planning to do. You seem to just be assuming people are required to act in the way they find most moral. I don't think this is a reasonable thing to ask of people.

Utilitarian conclusions clearly contain more info than "yay X," since they typically allow one to compare different positive options as to which is more positive. In addition, in many contexts utilitarianism gives you a framework for debating what to do. Many people will agree the primary goal of laws in the USA should be to maximize utility for US citizens/residents as long as the law won't dramatically harm non-residents (some libertarians disagree, but I am just making a claim about what people think). Under these conditions, utilitarianism tells you what to do.

Utilitarianism does not tell you how to act in daily life, since it's unclear how much you should weigh the morality of an action against other concerns.
1lmm9y
A moral theory that doesn't tell you how to act in daily life seems incomplete, at least in comparison to e.g. deontological approaches. If one defines a moral framework as something that does tell you how to act in daily life, as I suspect many of the people you're thinking of do, then to the extent that utilitarianism is a moral framework, it requires extreme self-sacrifice (because the only, or at least most obvious, way to interpret utilitarianism as something that does tell you how to act in daily life is to interpret it as saying that you are required to act in the way that maximizes utility). So on some level it's just an argument about definitions, but there is a real point: either utilitarianism requires this extreme self-sacrifice, or it is something substantially less useful in daily life than deontology or virtue ethics.
0gjm9y
Preferences of this sort might be interesting not because they describe what their holders will do themselves, but because they describe what their holders will try to get other people to do. I might think that diverting funds from luxury purchases to starving Africans is always morally good but not care enough (or not have enough moral backbone, or whatever) to divert much of my own money that way -- but I might e.g. consistently vote for politicians who do, or choose friends who do, or argue for doing it, or something.
0mwengler9y
Your comment reads to me like a perfect description of hypocrisy. Am I missing something?
3gjm9y
Nope. Real human beings are hypocrites, to some extent, pretty much all the time. But holding a moral value and being hypocritical about it is different from not holding it at all, so I don't think it's correct to say that moral values held hypocritically are uninteresting or meaningless or anything like that.

"Utilitarianism" for many people includes a few beliefs that add up to this requirement.

  • 1) Utility of all humans is more-or-less equal in importance.
  • 2) It's morally required to make decisions that maximize total utility.
  • 3) There is declining marginal utility for resources.

Item 3 implies that movement of wealth from someone who has more to someone who has less increases total utility. #1 means that this includes your wealth. #2 means it's obligatory.
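A toy illustration of that argument, assuming log(wealth) as the utility function; log is just one common stand-in for declining marginal utility, and nothing in the comment commits to it (or to these numbers).

```python
import math

def utility(wealth):
    # Logarithmic utility: each extra unit of wealth adds less utility
    # the more wealth you already have (declining marginal utility).
    return math.log(wealth)

rich, poor, transfer = 100_000, 1_000, 900

before = utility(rich) + utility(poor)
after = utility(rich - transfer) + utility(poor + transfer)
print(f"total utility before transfer: {before:.3f}")  # ~18.42
print(f"total utility after transfer:  {after:.3f}")   # ~19.05, i.e. higher
```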

Note that I'm not a utilitarian, and I don't believe #1 or #2. Anyone who actually does believe these, please feel free to correct me or rephrase to be more accurate.

0Lukas_Gloor9y
This sounds like preference utilitarianism, the view that what matters for a person is the extent to which her utility function ("preferences") is fulfilled. In academic ethics outside of LessWrong, "utilitarianism" refers to a family of ethical views, of which the most commonly associated one is Bentham's "classical utilitarianism", where "utility" is very specifically defined as the happiness minus the suffering that a person experiences over time.
6jefftk9y
I'm not seeing where in Dagon's comment they indicate preference utilitarianism vs. (e.g.) hedonic?
1Lukas_Gloor9y
I see what you mean. Why I thought he meant preference: 1) talks about "utility of all humans", whereas a classical utilitarian would more likely have used something like "well-being". However, you can interpret it as a general placeholder for "whatever matters". 3) is also something usually mentioned in economics, associated with preference models. Here again, it is true that diminishing marginal utility also applies for classical utilitarianism.
0Princess_Stargirl9y
I know of many people who endorse claims 1 and 3. But I know of no one who claims to believe 2. Am I just misinformed about people's beliefs? LessWrong is well known for being connected to utilitarianism. Do any prominent LessWrongers explicitly endorse 2?

Edit: My point was that I know many people who endorse something like the view in this comment: 2') One decision is morally better than another if it yields greater expected total utility.
9buybuydandavis9y
Then you don't know any utilitarians. Without 2, you don't have a moral theory. La Wik:

I think someone is still a utilitarian if instead of 2 they believe something like

2') One decision is morally better than another if it yields greater expected total utility.

(In particular, I don't think it's necessary for a moral theory to be based on a notion of moral requirement as opposed to one of moral preference.)

2[anonymous]9y
Um, what's the difference?
6ZankerH9y
It's possible to believe some action is morally better than another without feeling it's required of you to do it.
3DaFranker9y
As ZankerH said, it leaves out the "required to make" part. Also, gjm's particular formulation of 2' makes a statement about comparisons between two given decisions, not a statement about the entire search space of possible decisions.
1gjm9y
Exactly what ZankerH and DaFranker said. You could augment a theory consisting of 1, 2', and 3 with further propositions like "It is morally obligatory to do the morally best thing you can on all occasions" or (after further work to define the quantities involved) less demanding ones like "It is morally obligatory to act so as not to decrease expected total utility" or "It is morally obligatory to act in a way that falls short of the maximum achievable total utility by no more than X". Or you could stick with 1,2',3 and worry about questions like "what shall I do?" and "is A morally better than B?" rather than "is it obligatory to do A?". After all, most of the things we do (even ones explicitly informed by moral considerations) aren't simply a matter of obeying moral obligations.
0Jiro9y
If you don't use the "required to make" part, then if you tell me "you should do ___ to maximize utility" I can reply "so what?" It can be indistinguishable, in terms of what actions it makes me take, from not being a utilitarian. Furthermore, while perhaps I am not obligated to maximize total utility all the time, it's less plausible that I'm not obligated to maximize it to some extent--for instance, to at least be better at utility than someone we all think is pretty terrible, such as a serial killer. And even that limited degree of obligation produces many of the same problems as being obligated all the time. For instance, we typically think a serial killer is pretty terrible even if he gives away 90% of his income to charity. Am I, then, obliged to be better than such a person? If 20% of his income saves as many lives as are hurt by his serial killing, and if we have similar incomes, that implies I must give away at least 70% of my income to be better than him.
0gjm9y
If I tell you "you are morally required to do X", you can still reply "so what?". One can reply "so what?" to anything, and the fact that a moral theory doesn't prevent that is no objection to it. (But, for clarity: what utilitarians say and others don't is less "if you want to maximize utility, do ___" than "you should do ___ because it maximizes utility". It's not obvious to me which of those you meant.)

A utilitarian might very well say that you are -- hence my remark that various other "it is morally obligatory to ..." statements could be part of a utilitarian theory. But what makes a theory utilitarian is not its choice of where to draw the line between obligatory and not-obligatory, but the fact that it makes moral judgements on the basis of an evaluation of overall utility.

I think it will become clear that this argument can't be right if you consider a variant in which the serial killer's income is much larger than yours: the conclusion would then be that nothing you can do can make you better than the serial killer. What's gone wrong here is that when you say "a serial killer is terrible, so I have to be better than he is" you're evaluating him on a basis that has little to do with net utility, whereas when you say "I must give away at least 70% of my income to be better than him" you're switching to net utility. It's not a big surprise if mixing incompatible moral systems gives counterintuitive results.

On a typical utilitarian theory:

* the wealthy serial killer is producing more net positive utility than you are
* he is producing a lot less net positive utility than he could by, e.g., not being a serial killer
* if you tried to imitate him you'd produce a lot less net positive utility than you currently do

and the latter two points are roughly what we mean by saying he's a very bad person and you should do better. But the metric by which he's very bad and you should do better is something like "net utility, relative to what you're in a position to prod
0Jiro9y
But for the kind of utilitarianism you're describing, if you tell me "you are morally required to do X", I can say "so what" and be correct by your moral theory's standards. I can't do that in response to anything.
0gjm9y
What do you mean by "correct"?
0Jiro9y
Your theory does not claim I ought to do something different.
0gjm9y
It does claim something else would be morally better. It doesn't claim that you are obliged to do it. Why use the word "ought" only for the second and not the first?
0Jiro9y
Because that is what most English-speaking human beings mean by "ought".
0gjm9y
It doesn't seem that way to me. It seems to me that "ought" covers a fairly broad range of levels of obligation, so to speak; in cases of outright obligation I would be more inclined to use "must" than "ought".
0Jiro9y
I don't think that saves it. In my scenario, the serial killer and I have similar incomes, but he kills people, and he also gives a lot of money to charity. I am in a position to produce what he produces.
0gjm9y
Which means that according to strict utilitarianism you would do better to be like him than to be as you are now. Better still, of course, to do the giving without the mass-murdering. But the counterintuitive thing here isn't the demandingness of utilitarianism, but the fact that (at least in implausible artificial cases) it can reckon a serial killer's way of life better than an ordinary person's. What generates the possibly-misplaced sense of obligation is thinking of the serial killer as unusually bad when deciding that you have to do better, and then as unusually good when deciding what it means to do better. If you're a utilitarian and your utility calculations say that the serial killer is doing an enormous amount of good with his donations, you shouldn't also be seeing him as someone you have to do more good than because he's so awful.
0Jiro9y
What generates the sense of obligation is that the serial killer is considered bad for reasons that have nothing to do with utility, including but not limited to the fact that he kills people directly (rather than using a computer, which contributes to global warming, which hurts people) and actively (he kills people rather than keeping money that would have saved their lives). The charity-giving serial killer makes it obvious that the utilitarian assumption that more utility is better than less utility just isn't true, for what actual human beings mean by good and bad.
1DanielFilan9y
I claim to believe 2! I think that we do have lots of moral obligations, and basically nobody is satisfying all of them. It probably isn't helpful to berate people for not meeting all of their moral obligations (since it's really, really hard to do so, and berating people isn't likely to help), and there is room to do better and worse even when we don't meet our moral obligations, but neither of these facts means that we don't have a moral obligation to maximise expected moral-utility.

If you want to completely optimize your life for creating more global utilons then, yes, utilitarianism requires extreme self-sacrifice. The time you spend playing that video game or hanging out with friends netted you utility/happiness, but you could have spent that time working and donating the money to an effective charity. That tasty cheese you ate probably made you quite happy, but it didn't maximize utility. Better switch to the bare minimum you need to work the highest-paying job you can manage and give all the money you don't strictly need to an ef... (read more)

5buybuydandavis9y
People generally don't manage that. People learn what they can and can't do in Ranger School.

This is another case where it just seems there are multiple species of Homo sapiens. Or maybe I'm just a Martian. When other people say "X is moral", they mean "I will say that 'X is moral', and will occasionally do X"?

I can almost make sense of it, if they're all just egoists, like me. My moral preferences are some of my many preferences. Sometimes I indulge my moral preferences, and sometimes my gustatory preferences. Moral is much like "yummy". Just because something is "yummy", it doesn't mean I plan on eating it all day, or that I plan to eat all day.

But that is simply not my experience of how the term "moral" is generally used. Moral seems to mean "that's my criterion for judging what I should and shouldn't do". That's how everyone talks, although never quite how everyone does. Has there been an egoist revolution, and I just never realized it?

I think people have expressed before being "The Occasional Utilitarian" (my term), devoting some time slices to a particular moral theory. And other times, not. "I'm a utilitarian, when the mood strikes me."

It reminds me of a talk I had with some gal years ago about her upcoming marriage. "Oh, we'll never get divorced, no way, no how, but if we do..." What's going through a person's head when they say things like that? It's just bizarre to me. Years later, I was on a date at a sex show and bumped into her. She was divorced.
2MathiasZaman9y
Knowing what is moral and acting on what is moral are two different things. Acting on what is moral is often hard, and people aren't known for their propensity to do hard things. The divide between "I know what is moral" and "I act on what I know to be moral" exists in most moral theories with the possible exception (as far as I know, which isn't all that far) of egoism.
1TheAncientGeek9y
Moral, or rather immoral, can also be used to mean "should be illegal". [*] Inasmuch as most people obey the law, there is quite a lot of morality going on. Your analysis basically states that there isn't much individual, supererogatory moral action going on. That's true. People aren't good at putting morality into practice, which is why morality needs to be buttressed by things like legal systems. But there is a lot of unflashy morality going on... trading fairly, refraining from violence, and so on. So the conclusion that people are rarely moral doesn't follow.

[*] This comment should not be taken to mean that, in the opinion of the present author, everything which is illegal in every and any society is ipso facto immoral.
4Lumifer9y
Can, but not necessarily should. Societies which move sufficiently far in that direction are called "totalitarian".
1TheAncientGeek9y
And there is another "too far" in the other direction, although no one wants to mention that.
1Lumifer9y
Why not? The dimension that we are talking about is the sync -- or the disconnect -- between morality and legality. If this disconnect is huge, the terms used would be "unjust" and "arbitrary". Historically, such things happened when a society was conquered by someone with a significantly different culture.
1TheAncientGeek9y
What I was talking about was the larger but less noticeable part of the iceberg of morality.
1Lumifer9y
If you, perhaps, could be more explicit..?
1TheAncientGeek9y
Moral, or rather immoral, can also be used to mean "should be illegal". [*] Inasmuch as most people obey the law, there is quite a lot of morality going on. Your analysis basically states that there isn't much individual, supererogatory moral action going on. That's true. People aren't good at putting morality into practice, which is why morality needs to be buttressed by things like legal systems. But there is a lot of unflashy morality going on... trading fairly, refraining from violence, and so on. So the conclusion that people are rarely moral doesn't follow.

[*] This comment should not be taken to mean that, in the opinion of the present author, everything which is illegal in every and any society is ipso facto immoral.
0Lumifer9y
Ah, I see.
4fubarobfusco9y
What does this mean if we taboo "illegal"? As far as I can tell, it means something like "If you do what you shouldn't do, someone should come around and do terrible things to you, against which you will have no recourse."
1TheAncientGeek9y
That's sort of true, but heavily spun. If you kill someone, what recourse do they have...except to live in a society that discourages murder by punishing murderers? Perhaps you were taking something like drug taking as a central example of "what you should not do".
1Lukas_Gloor9y
That's a great quote! Despite its brevity, it explains a big part of what I used hundreds of words to explain. :)

It's not just people in general who feel that way, but also some moral philosophers. Here are two related links about the demandingness objection to utilitarianism:

http://en.wikipedia.org/wiki/Demandingness_objection

http://blog.practicalethics.ox.ac.uk/2014/11/why-i-am-not-a-utilitarian/

The way I think of the complication is that these moral decisions are not about answering "what should I do?" but "what can I get myself to do?"

If someone on the street asks you "what is the right thing for me to do today?" you probably should not answer "donate all of your money to charity beyond what you need to survive." This advice will just get ignored. More conventional advice that is less likely to get ignored ultimately does more for the common good.

Moral decisions that you make for yourself are a lot like gi... (read more)

For me utilitarianism means maximizing a weighted sum of everyone's utility, but the weights don't have to be equal. If you give yourself a high enough weight, no extreme self-sacrifice is necessary. The reason to be a utilitarian is that if some outcome is not consistent with it, it should be possible to make some people better off without making anyone worse off.
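A small sketch of the weighted-sum idea in the comment above. The action names, utility numbers, and the 5x self-weight are all invented for illustration; as the reply below notes, standard usage of "utilitarianism" requires the weights to be agent-neutral.

```python
def weighted_total(action, weights):
    # action[i] is person i's utility change; weights[i] is how much
    # person i counts. Standard utilitarianism sets all weights equal.
    return sum(w * u for w, u in zip(weights, action))

# Utility changes for [me, stranger_1, stranger_2]:
actions = {
    "donate almost everything": [-50, 40, 40],
    "donate a modest amount":   [-2, 10, 10],
    "donate nothing":           [0, 0, 0],
}

for label, weights in [("equal weights", [1, 1, 1]), ("self weighted 5x", [5, 1, 1])]:
    best = max(actions, key=lambda a: weighted_total(actions[a], weights))
    print(f"{label}: best action is '{best}'")
# equal weights    -> donate almost everything
# self weighted 5x -> donate a modest amount
```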

5jefftk9y
This is not a standard usage of the term "utilitarianism". You can have a weighting, for example based on capacity for suffering, but you can't weight yourself more just because you're you and call it utilitarianism.
6James_Miller9y
But if you have to give yourself and your children the same weights as strangers, then almost no one is a utilitarian.
6DanielFilan9y
I think that there's a difference between "nobody fulfills their moral obligations according to utilitarianism, or even tries very hard to" and "nobody believes that utilitarianism is the correct moral theory". People are motivated by lots of things other than what they believe the correct moral theory is.

As far as I understand it, the text quoted here is implicitly relying on the social imperative "be as moral as possible". This is where the "obligatory" comes from. The problem here is that the imperative "be as moral as possible" gets increasingly difficult as more actions acquire moral weight. If one has internalized this imperative (which is realistic given the weight of societal pressure behind it), utilitarianism puts an unbearable moral weight on one's metaphorical shoulders.

Of course, in reality, utilitarianism imp... (read more)

0Jiro9y
That is prone to the charity-giving serial killer problem. If someone kills people, gives 90% to charity, and just 20% is enough to produce utility that makes up for his kills, then pretty much any such moral standard says that you must be better than him, yet he's producing a huge amount of utility and to be better than him from a utilitarian standpoint, you must give at least 70%. If you avoid utilitarianism you can describe being "better than" the serial killer in terms other than producing more utility; for instance, distinguishing between deaths resulting from action and from inaction.
1ChaosMote9y
Why does this need to be the case? I would posit that the only paradox here is that our intuitions find it hard to accept the idea of a serial killer being a good person, much less a better person than one need strive to be. This shouldn't be that surprising - really, it is just the claim that utilitarianism may not align well with our intuitions.

Now, you can totally make the argument that not aligning with our intuitions is a flaw of utilitarianism, and you would have a point. If your goal in a moral theory is a way of quantifying your intuitions about morality, then by all means use a different approach. On the other hand, if your goal is to reason about actions in terms of their cumulative impact on the world around you, then utilitarianism presents the best option, and you may just have to bite the bullet when it comes to your intuitions.
0[anonymous]9y
Apparently retracting doesn't work the way I thought. Oops.

What does it mean to talk about morality or human motivation using the terms of utilitarianism and consequentialism? It means restricting oneself to the vocabulary of those moral philosophies and to the rules used to derive new sentences from that vocabulary. Once you restrict your vocabulary and the rules used to form sentences with it, you usually restrict what conclusions you can derive using the terms in that vocabulary.

If you think in terms of consequen... (read more)

Utilitarianism doesn't have anywhere to place a non-arbitrary level of obligation except at zero and maximum effort. The zero is significant, because it means utilitarianism can't bootstrap obligation... I think that is the real problem, not demandingness.

As others have stated, obligation isn't really part of utilitarianism. However, if you really wanted to use that term, one possible way to incorporate it is to ask what the xth percentile of people would do in this situation (with people ranked in terms of the expected utility of their actions), given that everyone has the same information, and use that as the boundary for the label "obligation."

As an aside, there is a thought experiment called the "veil of ignorance." Although it is not, strictly speaking, called utilitarianism, you can view it that wa... (read more)

I think you have to look at utilitarianism in terms of the question, "What does the greatest good for the greatest number of people, both effectively and efficiently?" That means that sacrifice may be a means to an end in order to achieve that greatest good for the greatest number of people. The sacrifice is that actions that disproportionately disadvantage, objectify, or exploit people should not be taken. Those that benefit the greatest number should. Utilitarianism is all about the greatest good. I don't think moral decisions have much place anywhere outsi... (read more)

Utilitarianism is a normative ethical theory. Normative ethical theories tell you what to do (or, in the case of virtue ethics, tell you what kind of person to be). In the specific case of utilitarianism, it holds that the right thing to do (i.e. what you ought to do) is maximize world utility. In the current world, there are many people who could sacrifice a lot to generate even more world utility. Utilitarianism holds that they should do so, therefore it is demanding.

As I understand it, and in my just-made-up-now terminology, there are two different kinds of utilitarianism: Normative and Descriptive. In Normative, you try to figure out the best possible action and you must do that action. In Descriptive, you don't have to always do the best possible action if you don't want to, but you're still trying to make the most good out of what you're doing. For example, consider the following hypothetical actions:

  • Get a high-paying job and donate all of my earnings except the bare minimum necessary to survive to effective

... (read more)

The word "utilitarianism" technically means something like, "an algorithm for determining whether any given action should or should not be undertaken, given some predetermined utility function". However, when most people think of utilitarianism, they usually have a very specific utility function in mind. Taken together, the algorithm and the function do indeed imply certain "ethical obligations", which are somewhat tautologically defined as "doing whatever maximizes this utility function".

In general, the word "u... (read more)

an algorithm for determining whether any given action should or should not be undertaken, given some predetermined utility function

That's not how the term "utilitarianism" is used in philosophy. The utility function has to be agent-neutral. So a utility function where your welfare counts 10x as much as everyone else's wouldn't be utilitarian.