My view, and a lot of other people here seem to also be getting at this, is that the demandingness objection comes from a misuse of utilitarianism. People want their morality to label things 'permissible' and 'impermissible', and utilitarianism doesn't natively do that. That is, we want boolean-valued morality. The trouble is, Bentham went and gave us a real-valued one. The most common way to get a bool out of that is to label the maximum 'true' and everything else 'false', but that doesn't give a realistically human-followable result. Some philosophers have worked on 'satisficing consequentialism', which is a project to design a better real-to-bool conversion, but I think the correct answer is to learn to use real-valued morality.
There's some oversimplification above (I suspect people have always understood non-boolean morality in some cases), but I think it captures the essential problem.
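To make the 'real-to-bool conversion' point concrete, here is a toy sketch; the action names and scores below are invented for illustration, not taken from anyone's actual view:

```python
# Three ways to turn a real-valued moral score into guidance.
actions = {
    "donate everything you can spare": 100.0,
    "donate 10% of income":             60.0,
    "donate nothing":                    0.0,
}

def maximising_verdict(actions):
    """Boolean morality via 'only the maximum is permissible'."""
    best = max(actions.values())
    return {a: (score == best) for a, score in actions.items()}

def satisficing_verdict(actions, threshold):
    """Boolean morality via 'good enough is permissible'.
    The threshold is the extra parameter a satisficing view has to justify."""
    return {a: (score >= threshold) for a, score in actions.items()}

def scalar_verdict(actions):
    """Real-valued morality: no bool at all, just a better-than ordering."""
    return sorted(actions, key=actions.get, reverse=True)

print(maximising_verdict(actions))         # only the extreme option comes out 'permissible'
print(satisficing_verdict(actions, 50.0))  # moderate options pass too
print(scalar_verdict(actions))             # a ranking, with no cutoff anywhere
```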
It basically depends on whether you're a maximising utilitarian or a scalar utilitarian. The former says that you should do the best thing. The latter is less harsh in that it just says that better actions are better, without saying that you necessarily have to do the best one.
The main difference with a utility-function-based approach is that there is no concept of "sufficient effort". Every action gets an (expected) utility attached to it. Sending £10 to an efficient charity is X utilons better than not doing so; but selling everything you own to donate to the charity is (normally) even higher.
So I think the criticism is accurate, in that humans almost never achieve perfection following utility; there's always room for more effort, and there's no distinction between actions that are "allowed" versus "req...
I thought about this question a while ago and have been meaning to write about it sometime. This is a good opportunity.
Terminology: Other commenters are pointing out that there are differing definitions of the word "utilitarianism". I think it is clear that the article in question is talking about utilitarianism as an ethical theory (or rather, a family of ethical theories). As such, utilitarianism is a form of consequentialism, the view that the right thing to do is whatever produces the best state of affairs. Utilitarianism is different...
I'm seeing fundamental disagreement on what "moral" means.
In the Anglo-Saxon tradition, what is moral is what you should or ought to do, where should and ought both entail a debt one has the obligation to pay. Note that this doesn't make morality binary; actions are more or less moral depending on how much of the debt you're paying off. I wouldn't be surprised if this varied a lot by culture, and I invite people to detail the similarities and differences in other cultures they are familiar with.
What I hear from some people here is Utilitarian...
"Utilitarianism" for many people includes a few beliefs that add up to this requirement.
Item 3 implies that movement of wealth from someone who has more to someone who has less increases total utility. #1 means that this includes your wealth. #2 means it's obligatory.
Note that I'm not a utilitarian, and I don't believe #1 or #2. Anyone who actually does believe these, please feel free to correct me or rephrase to be more accurate.
I think someone is still a utilitarian if instead of 2 they believe something like
2') One decision is morally better than another if it yields greater expected total utility.
(In particular, I don't think it's necessary for a moral theory to be based on a notion of moral requirement as opposed to one of moral preference.)
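To spell 2') out in symbols (my notation, not the commenter's, with u_i as each person's utility):

```latex
% 2'): decision A is morally better than decision B iff it yields
% greater expected total utility, summing over everyone affected.
A \succ_{\text{moral}} B \iff \mathbb{E}\!\left[\sum_i u_i(A)\right] > \mathbb{E}\!\left[\sum_i u_i(B)\right]
```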
If you want to completely optimize your life for creating more global utilons then, yes, utilitarianism requires extreme self-sacrifice. The time you spend playing that video game or hanging out with friends netted you utility/happiness, but you could have spent that time working and donating the money to an effective charity. That tasty cheese you ate probably made you quite happy, but it didn't maximize utility. Better switch to the bare minimum you need to work the highest-paying job you can manage and give all the money you don't strictly need to an ef...
It's not just people in general that feel that way, but also some moral philosophers. Here are two related links about the demandingness objection to utilitarianism:
http://en.wikipedia.org/wiki/Demandingness_objection
http://blog.practicalethics.ox.ac.uk/2014/11/why-i-am-not-a-utilitarian/
The way I think of the complication is that these moral decisions are not about answering "what should I do?" but "what can I get myself to do?"
If someone on the street asks you "what is the right thing for me to do today?" you probably should not answer "donate all of your money to charity beyond what you need to survive." This advice will just get ignored. More conventional advice that is less likely to get ignored ultimately does more for the common good.
Moral decisions that you make for yourself are a lot like gi...
For me utilitarianism means maximizing a weighted sum of everyone's utility, but the weights don't have to be equal. If you give yourself a high enough weight, no extreme self-sacrifice is necessary. The reason to be a utilitarian is that if some outcome is not consistent with it, it should be possible to make some people better off without making anyone worse off.
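In symbols, this is something like the following (my notation; the 10x self-weight is just a hypothetical example):

```latex
% Weighted-sum view: maximize W, where the weights need not be equal.
W = \sum_i w_i \, u_i
% e.g. w_{\text{self}} = 10 and w_j = 1 for everyone else still gives a
% consequentialist ranking, but one that never demands extreme self-sacrifice.
```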
As far as I understand it, the text quoted here is implicitly relying on the social imperative "be as moral as possible". This is where the "obligatory" comes from. The problem here is that the imperative "be as moral as possible" gets increasingly more difficult as more actions acquire moral weight. If one has internalized this imperative (which is realistic given the weight of societal pressure behind it), utilitarianism puts an unbearable moral weight on one's metaphorical shoulders.
Of course, in reality, utilitarianism imp...
Utilitarianism doesn't have anywhere to place a non-arbitrary level of obligation except at zero and maximum effort. The zero is significant, because it means utilitarianism can't bootstrap obligation .... I think that is the real problem, not demandingness.
As others have stated, obligation isn't really part of utilitarianism. However, if you really wanted to use that term, one possible way to incorporate it is to ask what the xth percentile of people would do in this situation (ranking people by the expected utility of their actions), given that everyone has the same information, and use that as the boundary for the label "obligation."
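A rough sketch of how that percentile boundary could be computed (the implementation details and numbers are mine, not the commenter's):

```python
import numpy as np

def obligation_boundary(expected_utilities, x):
    """Return the expected-utility level at the x-th percentile of what people
    actually do; actions at or above it count as meeting the 'obligation'."""
    return np.percentile(expected_utilities, x)

# Invented example: expected utilities of what six people did in the same situation.
observed = [0.0, 1.0, 2.0, 5.0, 10.0, 50.0]
boundary = obligation_boundary(observed, 50)  # use the median as the cutoff, say
print(boundary)                               # -> 3.5
```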
As an aside, there is a thought experiment called the "veil of ignorance." Although it is not, strictly speaking, called utilitarianism, you can view it that wa...
I think you have to look at utilitarianism as asking, "What does the most good for the greatest number of people, both effectively and efficiently?" That means that sacrifice may be a means to an end in order to achieve that greatest good for the greatest number of people. The sacrifice is that actions that disproportionately disadvantage, objectify, or exploit people should not be taken; those that benefit the greatest number should. Utilitarianism is all about the greatest good. I don't think moral decisions have much place anywhere outsi...
Utilitarianism is a normative ethical theory. Normative ethical theories tell you what to do (or, in the case of virtue ethics, tell you what kind of person to be). In the specific case of utilitarianism, it holds that the right thing to do (i.e. what you ought to do) is maximize world utility. In the current world, there are many people who could sacrifice a lot to generate even more world utility. Utilitarianism holds that they should do so, therefore it is demanding.
As I understand it, and in my just-made-up-now terminology, there are two different kinds of utilitarianism: Normative and Descriptive. In Normative, you try to figure out the best possible action and you must do that action. In Descriptive, you don't have to always do the best possible action if you don't want to, but you're still trying to make the most good out of what you're doing. For example, consider the following hypothetical actions:
Get a high-paying job and donate all of my earnings except the bare minimum necessary to survive to effective
The word "utilitarianism" technically means something like, "an algorithm for determining whether any given action should or should not be undertaken, given some predetermined utility function". However, when most people think of utilitarianism, they usually have a very specific utility function in mind. Taken together, the algorithm and the function do indeed imply certain "ethical obligations", which are somewhat tautologically defined as "doing whatever maximizes this utility function".
In general, the word "u...
an algorithm for determining whether any given action should or should not be undertaken, given some predetermined utility function
That's not how the term "utilitarianism" is used in philosophy. The utility function has to be agent neutral. So a utility function where your welfare counts 10x as much as everyone else's wouldn't be utilitarian.
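One standard way to make "agent neutral" precise (my formalization, not the commenter's) is permutation invariance: the overall ranking cannot change when you swap who gets which welfare level.

```latex
% Agent neutrality as permutation invariance: the social ranking cannot
% privilege any particular person, so a 10x self-weight is ruled out.
W(u_1, \dots, u_n) = W(u_{\sigma(1)}, \dots, u_{\sigma(n)}) \quad \text{for every permutation } \sigma
```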
What does the whole concept of talking about morality or human motivation using the terms of utilitarianism and consequentialism mean? It means restricting oneself to using the terms and rules that are used to derive new sentences using those terms that are used in the moral philosophy of utilitarianism and consequentialism. Once you restrict your vocabulary and the rules that are used to form sentences using this vocabulary, you usually restrict what conclusions you can derive using the terms that are in this vocabulary.
If you think in terms of consequentialism, what operations can you do? You can assign utilities to different world states (depending on the flavour of consequentialism you are using, you might have further restrictions on how you can do this) and you can compare them. Or, in another version, you cannot assign the utilities directly, but you can impose a partial order, a binary relation on pairs of world states. That's all. If you add something else, then you are no longer talking using just consequentialist terms. For example, take the trolley problem. Given the way the dilemma is usually described, there are not a lot of sentences that you can derive using consequentialist terms. The whole framing of the problem gives you just two world states and asks you to assign utilities to them.
Now, you can use the terms of consequentialist moral philosophy to talk about all human motivation. If your preferences satisfy certain axioms, then the von Neumann–Morgenstern utility theorem allows that. Let's denote this way of thinking as (1).
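For reference, the theorem being invoked says, roughly: if preferences over lotteries are complete, transitive, continuous, and satisfy independence, then they can be represented by expected utility:

```latex
% Von Neumann–Morgenstern: under those axioms there exists a utility function u,
% unique up to positive affine transformation, such that for lotteries L and M
L \succeq M \iff \mathbb{E}_{L}[u] \ge \mathbb{E}_{M}[u]
```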
Or you can use the terms of consequentialist moral philosophy in a much more restricted domain. Most people usually use those terms only to talk about things they consider to be related to morality (how some problems come to be discussed in the terms of moral philosophy and considered moral problems while other problems don't is an interesting, but quite distinct, question). When they talk about all human motivation, they use terms that come from outside the consequentialist moral philosophy. Let's denote this way of thinking as (2).
Now, what do you use to describe all human motivation? Just the terms of consequentialist moral philosophy or other terms as well? Let's compare.
It also appears to imply that donating all your money to charity beyond what you need to survive isn’t just admirable but morally obligatory.
and
But where does the "obligatory" part come in? I don't really see how it's obvious what, if any, ethical obligations utilitarianism implies.
Now, I know very little about what kind of theory of morality and human motivation you or Chris Hallquist support. Therefore, my next paragraph is based on the impressions I got reading those two quotes.
I think that your confusion comes from the fact that you think that Chris Hallquist is using the terms of consequentialist moral philosophy in pretty much the same way you do. However it seems to me that Chris Hallquist is using them in (1) way (or close to that), whereas you are closer to the (2) way of thinking. And when you think about all human motivation, then you use various terms and concepts, some of which are not from the vocabulary of consequentialism.
The very fact that you can ask such a question ("But where does the 'obligatory' part come in? I don't really see how it's obvious what, if any, ethical obligations utilitarianism implies.") implies that you are using terms that come from outside of consequentialism, because remember: in consequentialism you can only assign utilities to the world states and compare them, that's all. The very fact that it makes sense to you that someone could compare the utilities of two world states, find that the utility of world_state_1 is greater than the utility of world_state_2, and yet disobey this comparison, means that when thinking about human motivation you are using (perhaps implicitly) concepts that come from somewhere else than consequentialism [1]. There is no way you can derive disobedience using the operations of consequentialism. Therefore, if you use the terms of consequentialism to describe all human motivation (the (1) way of thinking), it cannot not be obligatory. I think that Chris Hallquist is trying to implicitly convey this idea.

Using the (1) way of thinking (which I think Chris Hallquist is using), if your utility function assigns utilities to world states in such a way that the world states that are achievable only by donating a lot of money to charity (and not any other way) are preferable to other world states, then you are by definition motivated to donate as much money to charity as possible. Now, isn't that a bit tautological? If you use terms such as utility function to describe all human motivation, why are such encouragements to donate to charity even needed? Wouldn't you already be motivated to donate a lot of income to charity?

I think that what a hypothetical utilitarian person who says such things (a hypothetical person whose ideas about utilitarianism Chris Hallquist is channeling) would be trying to do is to modify your de facto utility function (if we are using this term to describe and model all human motivation, assuming that's possible) by appealing to what kind of de facto utility function you would like to have, or would like to imagine yourself having. I.e. what would you like to be motivated by? The said hypothetical utilitarian person would like your motivation to be such that it could be modeled by a utility function which assigns higher utilities to world states that (in this particular case) are achievable by donating a lot of money to charity.
[1] Of course, there is another possibility: that you talk about certain things using terms such as utility function, while all your motivations (including, obviously, subconscious ones) can be modeled by a utility function, but those two are different; therefore the impression of disobedience comes from the fact that the conclusions derived using the second utility function are different from the conclusions that you derive using the first one.
Many thoughtful people identify as utilitarian [...] yet do not think people have extreme obligations.
My impression is that most people who identify as utilitarians do not use terms of consequentialist moral philosophy to describe all their human motivation. They use them when they talk about problems and situations that are considered to be related to morality. For example, when they read about something and recognize it as a moral problem, they start using those terms. But their whole apparatus of human motivation (which may or may not be modeled as a utility function) is much larger than that, and their utilitarianism (i.e. their utility function as they are able to consciously think about it) doesn't cover all of it, because that would be too difficult. The most you can say is that they think about various situations and what they should do if they found themselves in them (e.g. what if you find yourself in a trolley dilemma, and others), precompute and cache the answers, and (if their memory, courage and willpower don't fail them) perform those actions when those situations arise.
Chris Hallquist wrote the following in an article (if you know the article please, please don't bring it up, I don't want to discuss the article in general):
"For example, utilitarianism apparently endorses killing a single innocent person and harvesting their organs if it will save five other people. It also appears to imply that donating all your money to charity beyond what you need to survive isn’t just admirable but morally obligatory. "
The non-bold part is not what is confusing me. But where does the "obligatory" part come in? I don't really see how it's obvious what, if any, ethical obligations utilitarianism implies. Given a set of basic assumptions, utilitarianism lets you argue whether one action is more moral than another. But I don't see how it's obvious which, if any, moral benchmarks utilitarianism sets for "obligatory." I can see how certain frameworks on top of utilitarianism imply certain moral requirements. But I do not see how the bolded quote is a criticism of the basic theory of utilitarianism.
However, this criticism comes up all the time. Honestly, the best explanation I could come up with was that people were being unfair to utilitarianism and not thinking through their statements. But the above quote is by HallQ, who is intelligent and thoughtful. So now I am genuinely very curious.
Do you think utilitarianism really requires such extreme self-sacrifice, and if so, why? And if it does not require this, why do so many people say it does? I am very confused and would appreciate help working this out.
edit:
I am having trouble asking this question clearly, since utilitarianism is probably best thought of as a cluster of beliefs, so it's not clear what asking "does utilitarianism imply X" actually means. Still, I made this post since I am confused. Many thoughtful people identify as utilitarian (for example Ozy and theunitofcaring) yet do not think people have extreme obligations. However, I can think of examples where people do not seem to understand the implications of their ethical frameworks. For example, many Jewish people endorse the message of the following story:
Rabbi Hillel was asked to explain the Torah while standing on one foot and responded "What is hateful to you, do not do to your neighbor. That is the whole Torah; the rest is the explanation of this--go and study it!"
The story is presumably apocryphal, but it is repeated all the time by Jewish people. However, it's hard to see how the story makes even a semblance of sense. The Torah includes huge amounts of material that violates the "Golden Rule" very badly. So people who think this story gives even a moderately accurate picture of the Torah's message are mistaken, imo.