Utilitarianism seems to hold that the greatest good for the most people generally comes down to their feelings.  A person who is happy and confident is in a desirable state; a person in pain and misery is in an undesirable one.

But what about taking selfish actions that hurt another person's feelings?  If I'm in a relationship and breaking up with her would hurt her feelings, does that mean I have a moral obligation to stay with her?  If I have an employee who is well-meaning but isn't working out, am I morally allowed to fire him?  Or what about at a club?  A guy is talking to a woman, and she's ready to go home with him.  I could socially tool him and take her home myself, but doing so would cause him more unhappiness than I would have suffered by leaving them alone.

In a nutshell, does utilitarianism state that I am morally obliged to curb my selfish desires so that other people can be happy?

In the above examples, there may well be more net harm than gain in staying in an unpleasant relationship or in keeping a problematic employee. It's pretty case-by-case in nature, and you're not required to ignore your own feelings entirely. But where the sums do come out against you, yes, utilitarianism would say you'd be "wrong" for indulging yourself at the expense of others.

According to utilitarianism as "the greatest good for the greatest number," absolutely, you should take actions that maximize the well-being of all.

The terminology is mildly confusing here because we often use "utilitarianism" to refer to any sort of consistent consequentialism, and consequentialism can be selfish. But classic utilitarianism is very much altruistic.

Altruistic in the sense that others count equally. Your happiness, etc., still counts.

It counts as much as that of any of the 7 billion other people in your calculations: one part in 7 billion, which is not very much.

Each of those 7 billion is likewise weighted at one part in 7 billion; regardless of how small that is compared to the sum of all of them, each individual's weight is equal.

Each of those 7 billion is likewise weighted at one part in 7 billion

Didn't I say that?

regardless of how small that is compared to the sum of all of them

That's quite an important point to gloss over. You're "allowed" to cater to yourself 1 part in 7 billion.

How often are you in situations where you have two choices: option A gives you 1000000 units of happiness, and option B gives everyone on this planet 1 unit of happiness?

According to utilitarianism, I guess it would be better to choose B, but I think that in real life 99.9% of people never face choice B, because for most of us, our actions simply don't influence the whole planet. Perhaps if Bill Gates decided to make Microsoft Windows open-source software, that would be an example of an action likely to benefit billions of people at his own expense. Assuming that this choice would cost him only, say, $1,000,000, I guess it would be the ethical thing to do.

In real life, most people on this planet will be in exactly the same situation regardless of what I do.
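
For concreteness, here is a minimal sketch of the A-versus-B arithmetic under simple total utilitarianism. The 7 billion figure and the happiness "units" are the ones used in this thread; treating utility as straightforwardly summable is the simplifying assumption.

```python
# Minimal sketch of the A-vs-B comparison under simple total utilitarianism.
# Treating happiness units as directly summable is an assumption.

population = 7_000_000_000

option_a = 1_000_000           # 1,000,000 units of happiness, all to you
option_b = 1 * population      # 1 unit of happiness to everyone on the planet

print("Option A total:", option_a)   # 1,000,000
print("Option B total:", option_b)   # 7,000,000,000
print("Sum-utilitarian choice:", "B" if option_b > option_a else "A")
```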

How often are you in situations where you have two choices: option A gives you 1000000 units of happiness, and option B gives everyone on this planet 1 unit of happiness?

Probably rarely. But you and I are in a position where someone, somewhere, would get more units of happiness from our efforts than we would. You could be working night and day, without letup, to feed the hungry, to vaccinate the unprotected, to shelter the exposed, and so on.

That is your lot in life, under the Utilitarian God. That is the debt you owe. And if you don't pay it, day after day after day without let, without any hope of relief, you're evil in his eyes, as you are in your own, as long as you choose to worship him and are consistent in your mind about the debt of servitude you owe.

Trying to maximize ethical behavior under utilitarianism would probably mean earning as much money as you can and giving almost all of it (as much as you can while remaining able to do your job) to the most efficient charity. You can spend money on yourself only as much as is necessary to keep the process running; the same goes for your free time.

You are correct about the de facto servitude. No excuse for luxuries while someone else is suffering. (Except if you could show that enjoying the luxury increases your productivity enough to balance the spending.)

How often are you in situations where you have two choices: option A gives you 1000000 units of happiness, and option B gives everyone on this planet 1 unit of happiness?

You refrain from buying something, keeping its market price ever-so-slightly lower than if you bought it, and allowing a bunch of people on the margin to afford it who otherwise couldn't?

(I am not an economist, so there might be something wrong with this.)


I think that's a fallacy; humans aren't good at adding up large numbers of small utilities. But by your logic, e.g., the "salami-slice fraud" (stealing 0.1 cent from everyone on the planet) would be ethical: it increases your own happiness, and has no effect on anyone else.

If it really had absolutely no effect, then I guess our moral duty would be to steal that money and give it to efficient charity.

Just because humans are not good at observing something, that does not mean it doesn't exist. Sure, in real life, the effects of losing 0.1 cent are invisible to humans, and probably the observation itself would be more costly than the 0.1 cent. But do it repeatedly, and the effects start being observable. Also, there is a problem of transaction costs.
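
To make the aggregation point concrete, here is a small sketch of the salami-slice sums. The 0.1 cent figure and the planet-wide population are from the comments above; the dollar framing is just for illustration.

```python
# The per-person loss is imperceptible, but it does not vanish when summed.

population = 7_000_000_000
per_person_loss = 0.001        # 0.1 cent, expressed in dollars

thief_gain = per_person_loss * population   # $7,000,000 concentrated on one person
total_loss = per_person_loss * population   # also $7,000,000, spread very thinly

print(f"Gain to the thief:      ${thief_gain:,.0f}")
print(f"Summed loss to victims: ${total_loss:,.0f}")
# The money transfer nets to zero; whether total utility rises or falls then
# hinges on transaction costs and marginal utilities, which is why the reply
# above says the effects exist even though no individual victim notices them.
```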

Rational utilitarianism means maximizing your own expected utility. (Technically from the gene's perspective; so caring for your children is selfish.) Social contracts (voting, laws against killing, etc) are just the game theoretical result of everyone acting selfishly.

It's about selfishness not altruism.

Yes. You are morally obliged to weigh your desires against the anticipated effects of your actions. However, this isn't usually (for me) a matter of favoring other people over myself; it's more a matter of favoring long-term me over short-term me. I want to live in a better world, and that means taking actions that improve it.

Breaking up with someone gently and firmly now is far better than breaking up horribly after years of misery. For both parties, and for both of your future partners. Easy win.

Likewise terminating an employee who's not working out (when that's not likely to change) - it's a long-term win for you, their coworkers (even if they don't know it), and sometimes for the former employee.

Each case is settled by what society seems to have collectively concluded:

  • Breaking up hurts, but being "strung along" is worse. I myself have advised coming clean due to the ever-more extreme pain resulting from the inevitable breakup. In the end, your ex-partner will be better off when they get over you.
  • Management is an unsolved problem in society, but the co-workers of that employee will almost certainly conclude that he did not deserve to be fired. Managers themselves will often agree with this assessment, but somehow reach the conclusion that it needed to happen regardless. Perhaps his happiness is outweighed by the benefit spread across the company's income and the remaining employees? Management often does not make its reasoning entirely clear.
  • Stealing someone else's partner is considered to be a very bad practice. It is also existentially risky.

Breaking up hurts, but being "strung along" is worse. I myself have advised coming clean due to the ever-more extreme pain resulting from the inevitable breakup. In the end, your ex-partner will be better off when they get over you.

Is this obviously true? Note the underlying assumption that you would break up eventually. Suppose you're dating someone, and the relationship gives you 1 hedon and them 3 hedons. You discover you have another option that gives you 2 hedons, but their next best opportunity gives them 1 hedon. (Suppose, to make things easy, those two other options are involved in a relationship themselves, which gives each of them 1 hedon.)

If you just sum hedons, you should stay with your current partner. But that doesn't maximize your hedons, and the locally optimal move is to break up and date the other option.
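
As a toy version of that sum: the 1, 2, and 3 hedon figures are the ones given above, while holding the other couple at 1 hedon each in both cases is my own simplifying assumption, since the comment leaves their post-breakup values unspecified.

```python
# Toy hedon arithmetic for the scenario above.

stay  = {"you": 1, "partner": 3, "other_a": 1, "other_b": 1}   # current pairings
leave = {"you": 2, "ex": 1,      "other_a": 1, "other_b": 1}   # after re-pairing

print("Total hedons if you stay: ", sum(stay.values()))    # 6
print("Total hedons if you leave:", sum(leave.values()))   # 5
print("Your own hedons: stay =", stay["you"], " leave =", leave["you"])
# Summing everyone's hedons favors staying; maximizing your own favors leaving,
# which is exactly the tension being pointed at.
```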

Another consideration: suppose you're dating someone with suicidal tendencies, and you know that a major immediate cause of suicide attempts is the end of a relationship, but you were unaware of their suicidal tendencies when you started dating. You're pretty sure that you will be less happy in this relationship than other options you have, but think that they will pose a serious risk to themselves if you break up with them. To what degree can they manufacture an obligation for you to provide emotional support to them?

Management is an unsolved problem in society, but the co-workers of that employee will almost certainly conclude that he did not deserve to be fired.

Both of these strike me as contentious. Few people are satisfied when their coworkers are incompetent or slack off, and we know quite a bit about effective management.

Perhaps his happiness is outweighed by the benefit spread across the company's income and the remaining employees? Management often does not make its reasoning entirely clear.

Certainly there must be cases where the option that maximizes profit does not maximize happiness, unless happiness is defined as profit.

Stealing someone else's partner is considered to be a very bad practice. It is also existentially risky.

But this ignores actually doing the math! Suppose it is known that she would prefer abcd_z's company to the other fellow's, and abcd_z would prefer her company to no one's, and the other fellow would prefer her company to no one's, but his preference is smaller than theirs. The rule that "stealing other people's partners is bad" puts precedent above the greatest good.* The claim that it's existentially risky is one that doesn't require utilitarianism; a selfish person is more concerned about those sorts of incentives than a utilitarian.

*"But wait!" you cry. "There are second order effects!" Yes, but it's not at all obvious that they point in the direction of not competing for attention. Consider a club where this injunction is taken seriously, and applied very early- basically, a woman is obligated to go home with either the first man she shows positive attention or with no one. Then this dramatically raises the bar for flirting, since she needs to be fairly confident the guy she's interacting with is her best option, but she needs to make that decision at first sight! In a situation where it's alright to compete at all stages of the process, then moving forward with one person has less opportunity cost, leading to more efficient pairings and means of discovering them.

But this ignores actually doing the math! Suppose it is known that she would prefer abcd_z's company to the other fellow's, and abcd_z would prefer her company to no one's, and the other fellow would prefer her company to no one's, but his preference is smaller than theirs. The rule that "stealing other people's partners is bad" puts precedent above the greatest good.* The claim that it's existentially risky is one that doesn't require utilitarianism; a selfish person is more concerned about those sorts of incentives than a utilitarian.

I am of the opinion that utilitarianism is wrong wrong wrong, but treating it as a moral decision procedure is even more wrong. If you're going to be a utilitarian, be a utilitarian at the meta level: think about what moral decision procedure will lead you (given your cognitive and other limitations) to maximize utility in the long run. I think there are many good reasons to believe that doing the math at every decision point will not be the optimal procedure in this sense. Of course, it would be if you were a fully informed, perfectly rational superbeing with infinite willpower and effectively infinite processing speed, but alas, even I cannot yet claim that status.

Given this unfortunate state of affairs, I suspect it is actually a better idea for most utilitarians to commit themselves to a policy like "Don't steal someone else's partner" rather than attempt to do the math every time they are faced with the decision. Of course, there may still be times when it's just blindingly obvious that the math is in favor of stealing, in which case screw the policy.

Given this unfortunate state of affairs, I suspect it is actually a better idea for most utilitarians to commit themselves to a policy like "Don't steal someone else's partner" rather than attempt to do the math every time they are faced with the decision.

See the paragraph that follows on second order effects. In the context of flirting with people in clubs, rather than attempting to break up established relationships, the policy of "don't interrupt someone else's flirting" is probably suboptimal.

(Did you not think that paragraph explained the point? Should I have put the asterisk up higher? I'm confused why you made this objection to what you did, when a sibling comment engaged with my discussion of second order effects directly.)

Of course, there may still be times when it's just blindingly obvious that the math is in favor of stealing, in which case screw the policy.

The primary reason to have a policy like this is because you trust your offline math more than your online math, in which case if the policy doesn't have a clear escape clause you reasoned through offline, you should trust the policy even when your online math screams that you shouldn't.

Did you not think that paragraph explained the point? Should I have put the asterisk up higher?

There is a much simpler explanation: I completely misunderstood what you meant by "second order effects" and then didn't really read the rest of the footnote because I considered it irrelevant to what I was interested in talking about. How embarrassing. I did admit that I am not yet fully informed and perfectly rational, though.

Thanks for the feedback! I'll be more careful about using that phrase in the future.

Utilitarianism is certainly correct. You can observe this by watching people make decisions under uncertainty. Preferences aren't merely ordinal.

But yes, doing the math has its own utility cost, so many decisions are better off handled with approximations. This is how you get things like the Allais paradox.

I'm not sure what "moral" means here. The goal of a gene is to copy itself. Ethics isn't about altruism.


Is this obviously true? Note the underlying assumption that you would break up eventually. Suppose you're dating someone, and the relationship gives you 1 hedon and them 3 hedons. You discover you have another option that gives you 2 hedons, but their next best opportunity gives them 1 hedon. (Suppose, to make things easy, those two other options are involved in a relationship themselves, which gives each of them 1 hedon.)

I don't think that's the scenario we're talking about here. Breaking up because you found someone better is (in observed folk morality) a dick move. But if you're less happy in your relationship than your long-term background level, the correct move is to break up now, even though that will push you both even lower in the short term.

That does still leave the question of what you should do in a relationship where you're mildly less happy than background (and note that long-term background is the correct comparison, not singledom) but your partner is substantially happier (and a future partner of yours would not be so happy). The cheap answer is that empirically this seems vanishingly rare; if one partner is unhappy in a relationship, the other tends to also be, or at least to become so pretty soon.

But yes, you can in theory reach situations where you're obliged to reduce your own happiness for the sake of others. The moral obligation of westerners to give up almost all their wealth and donate it to charity is perhaps a simpler and clearer example.

You're pretty sure that you will be less happy in this relationship than other options you have, but think that they will pose a serious risk to themselves if you break up with them. To what degree can they manufacture an obligation for you to provide emotional support to them?

Ultimately, to the degree that you value their life in comparison to yours.

Consider a club where this injunction is taken seriously and applied very early: basically, a woman is obligated to go home with either the first man she shows positive attention to, or with no one. Then this dramatically raises the bar for flirting, since she needs to be fairly confident the guy she's interacting with is her best option, but she needs to make that decision at first sight! In a situation where it's alright to compete at all stages of the process, moving forward with one person has less opportunity cost, leading to more efficient pairings and means of discovering them.

Strawman. Consider the solution that actually happens, more or less: people flirt as much as they need to in order to make a decision, then go home together or move on; you are allowed to engage new people who've previously tried flirting with others and had it not work out, but not to start making moves on someone who's currently still engaged in pair-flirting.

No-one goes home with someone they don't want to, and many people find partners; most people get to make a reasonable number of attempts, and can flirt more comfortably knowing that for the moment they have their flirtee's full attention. The end pairings are "less efficient" than in a pure-competition situation, but what does that actually mean? Answer: more lower-status people find pairings (over iterated rounds), rather than all the action going to the top 80% all the time. Given diminishing returns this is probably closer to overall optimal.

Ultimately, to the degree that you value their life in comparison to yours.

This is the selfish answer, not the utilitarian answer. (I think the selfish answer is stable.) Note, though, that you have some choice over how much you value their life: someone who strongly values autonomy might decide to get angry at being held hostage, reducing how much they value the other person and making it easier to break up with them.

But if you're less happy in your relationship than your long-term background level, the correct move is to break up now, even though that will push you both even lower in the short term.

Why doesn't the long-term background level depend on the other relationship options you have? As you say later, the correct comparison is not singledom!

It does depend on your other options, but people are very bad at estimating the value of their other options. They idealize potential partners they don't know well, they overestimate others' interest in them ("She looked at me for slightly longer than she looked at him! That must mean something!"), and so on. An "outside view" estimate based on either (a) what it was like to be single or (b) your previous long-term background including past relationships is less vulnerable to bias, and likely to be more accurate.

Breaking up in order to enter a different relationship is also socially undesirable. For instance, people will see the new partner as having "broken up" your old relationship (noted upthread to be a dick move), which will damage their social status.

I would consider a significant portion of existing managers to be exceptionally ineffective. Indeed, if a majority of them were effective, we'd be approaching the Singularity significantly faster. The management case specified that the employee's heart was in the right place, so I must assume they aren't slacking off, at the very least.

Suicidal tendencies, who would be happier with whom, plenty of fish in the sea, and so on are examples of why dating is so ridiculously complicated. My own personal conclusion is that I am better off single, for reasons you've described well thus far. I haven't done all the math personally, but it is my understanding that society naturally evolves the most stable practices; things do not become taboo for no reason, nor are taboo practices outright forbidden. The few who practice the taboo tend to reinforce the wisdom behind tabooing the practice. It seems like a rather Bayesian-like process to me.

it is my understanding that society naturally evolves the most stable practices

"Stable" in game theory generally means that no party can make themselves better off unilaterally. (Consider, for example, the stable marriage problem.) Utilitarianism is the correct descriptive observation that socially maximal situations might not be locally maximal, and the dubious prescriptive claim that agents should go for the globally maximal situation because everyone's preferences are equally valuable.

That last one was meant as in, "he's a player who just met her, and so am I."

It is considerably less taboo then, but it still seems risky to me.