mattnewport comments on The Trouble With "Good" - Less Wrong

Post author: Yvain, 17 April 2009 02:07AM (83 points)

Comments (131)

Comment author: mattnewport 17 April 2009 07:54:36AM 0 points

Is the usual definition of utilitarianism taken to weight the outcomes for all people equally? While utilitarian arguments often lead to conclusions I agree with, I can't endorse a moral system that seems to say I should be indifferent to a choice between my sister being shot and a serial killer being shot. Is there a standard utilitarian position on such dilemmas?

Comment author: gjm 17 April 2009 09:04:29AM 4 points

I fear you may be thinking "serial killer: karma -937; my sister: karma +2764".

A utilitarian would say: consider what that person is likely to do in the future. The serial killer might murder dozens more people, or might get caught and rot in jail. Your sister will most likely do neither. And consider how other people will feel about the deaths. The serial killer is likely to have more enemies, fewer friends, fewer close friends. So the net utility change from shooting the serial killer is much less negative (or even more positive) than from shooting your sister, and you need not (should not) be indifferent between those.
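The comparison gjm describes can be caricatured as a toy calculation. Every number below is invented purely for illustration; the point is only that a utilitarian compares net changes in total utility, not per-person "karma scores".

```python
# Toy expected-utility comparison (all numbers are hypothetical).
# A utilitarian weighs the net change in total utility from each action.

def net_utility_change(future_harm_prevented, grief_of_survivors, value_of_future_life):
    # Shooting someone destroys their future life's value and causes grief,
    # but may prevent harm they would otherwise have done.
    return future_harm_prevented - grief_of_survivors - value_of_future_life

serial_killer = net_utility_change(
    future_harm_prevented=50,  # dozens of murders averted (made up)
    grief_of_survivors=2,      # few friends, many enemies
    value_of_future_life=5,
)
sister = net_utility_change(
    future_harm_prevented=0,
    grief_of_survivors=30,     # many close friends and family
    value_of_future_life=10,
)

assert serial_killer > sister  # far less negative to shoot the killer
```

Nothing here requires assigning the two people different intrinsic worth; the asymmetry comes entirely from their differing effects on everyone else's utility.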

In general, utilitarianism gets results that resemble those of intuitive morality, but it tends to get them indirectly. Or perhaps it would be better to say: Intuitive morality gets results that resemble those of utilitarianism, but it gets them via short-cuts and heuristics, so that things that tend to do badly in utilitarian terms feel like they're labelled "bad".

Comment author: mattnewport 17 April 2009 05:46:12PM 5 points

In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends and family and won't be missed and are quite depressed, would it ever be conceivable that utilitarianism would imply indifference to the choice?

Comment author: gjm 17 April 2009 06:51:13PM 2 points

It's certainly possible in principle that it might end up that way. A utilitarian would say: Our moral intuitions are formed by our experience of "normal" situations; in situations as weirdly abnormal as you'd need to make utilitarianism favour saving the serial killer at the expense of an ordinary upright citizen, or to make slavery a good thing overall, or whatever, we shouldn't trust our intuition.

Comment author: mattnewport 17 April 2009 07:49:35PM 0 points

And this is the crux of my problem with utilitarianism I guess. I just don't see any good reason to prefer it over my intuition when the two are in conflict.

Comment author: randallsquared 17 April 2009 09:27:19PM 1 point

Even though your intuition might be wrong in outlying cases, it's still a better use of your resources not to think through every case, so I'd agree that using your intuition is better than using reasoned utilitarianism for most decisions for most people.

It's better to strictly adhere to an almost-right moral system than to spend significant resources on working out arbitrarily-close-to-right moral solutions, for sufficiently high values of "almost-right", in other words. In addition to the inherent efficiency benefit, this will make you more predictable to others, lowering your transaction costs in interactions with them.

Comment author: mattnewport 17 April 2009 09:35:53PM 0 points

My problem is a bit more fundamental than that. If the premise of utilitarianism is that it is morally/ethically right for me to provide equal weighting to all people's utility in my own utility function then I dispute the premise, not the procedure for working out the correct thing to do given the premise. The fact that utilitarianism can lead to moral/ethical decisions that conflict with my intuitions seems to me a reason to question the premises of utilitarianism rather than to question my intuitions.

Comment author: Virge 18 April 2009 04:30:05AM 3 points

Your intuitions will be biased toward favoring a sibling over a stranger; evolution has seen to that, via kin selection.

Utilitarianism tries to maximize utility for all, regardless of relatedness. Even if you adjust the weightings for individuals based on likelihood of particular individuals having a greater impact on overall utility, you don't (in general) get weightings that will match your intuitions.
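The gap between the two standards can be made concrete with a toy aggregation function. The names and weights below are hypothetical; the point is that equal consideration and kin-biased weighting genuinely disagree.

```python
# Toy sketch: equal-weight (classical) vs. agent-relative weighted aggregation.
# All names and weights are made up for illustration.

def total_utility(utilities, weights=None):
    """Sum each person's utility, optionally scaled by a per-person weight."""
    if weights is None:
        weights = {person: 1.0 for person in utilities}  # equal consideration
    return sum(weights[p] * u for p, u in utilities.items())

utilities = {"sister": 10, "stranger": 10}

classical = total_utility(utilities)
kin_biased = total_utility(utilities, weights={"sister": 5.0, "stranger": 1.0})

# Classical utilitarianism is indifferent between equal benefits to the two
# people; the kin-biased weighting matches intuition but, as discussed below,
# is arguably no longer "utilitarianism" in the standard sense.
assert classical == 20.0
assert kin_biased == 60.0
```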

I think it is unreasonable to expect your moral intuitions to ever approximate utilitarianism (or vice versa) unless you are making moral decisions about people you don't know at all.

In reality, the money I spend on my two cats could be spent improving the happiness of many humans - humans that I don't know at all who are living a long way away from me. Clearly I don't apply utilitarianism to my moral decision to keep pets. I am still confused about how much I should let utilitarianism shift my emotionally-based lifestyle decisions.

Comment author: Matt_Simpson 18 April 2009 04:43:14AM 0 points

I think you are construing the term "utilitarianism" too narrowly. The only reason you should be a utilitarian is if you intrinsically value the utility functions of other people. However, you don't have to value the entire thing for the label to be appropriate. You still care about a large part of that murderer's utility function, I assume, as well as that of non-murderers. Not classical utilitarianism, but the term still seems appropriate.

Comment author: mattnewport 18 April 2009 05:26:07AM 0 points

Utilitarianism seems a fairly unhelpful ethical system if the utility function is subjective, either because individuals get to pick and choose which parts of others' utility functions to respect or because individuals are allowed to choose subjective weights for others' utilities. It would seem to degenerate into an impractical-to-implement system in which everybody just justifies what they feel like doing anyway.

Comment author: Matt_Simpson 18 April 2009 05:43:55AM 0 points

Well, assuming you get to make up your own utility function, yes. However, I don't think this is the case. It seems more likely that we are born with utility functions or, rather, with something we can construct a coherent utility function out of. Given the psychological unity of mankind, there are likely to be a lot of similarities in these utility functions across the species.

Comment author: Kingreaper 26 November 2010 04:25:04PM 1 point

Yes. But if the "serial killer" is actually someone who enjoys helping others who want to commit suicide (and who won't harm anyone when they do), are they really a bad person at all?

Is shooting them really better than shooting a random person?

Comment author: SoullessAutomaton 17 April 2009 05:49:17PM 1 point

In a least convenient possible world, where the serial killer really enjoys killing people and only kills people who have no friends and family and won't be missed and are quite depressed, would it ever be conceivable that utilitarianism would imply indifference to the choice?

Also, would the verdict on this question change if the people he killed had attempted but failed at suicide, or wanted to suicide but lacked the willpower to?

Comment author: Kaj_Sotala 17 April 2009 10:46:37AM 0 points

There isn't a standard utilitarian position on such dilemmas, because there is no such thing as standard utilitarianism. Utilitarianism is a meta-ethical system, not an ethical system. It specifies the general framework by which you think about morality, but not the details.

There are plenty of variations of utilitarianism - negative or positive utilitarianism, average or total utilitarianism, and so on. And there is nothing to prevent you from specifying that, in your utility function, your family members are treated preferentially to everybody else.

Comment author: steven0461 17 April 2009 03:32:51PM 1 point

Utilitarianism is an incompletely specified ethical (not meta-ethical) system, but part of what it does specify is that everyone gets equal weight. If you're treating your family members preferentially, you may be maximizing your utility, but you're not following "utilitarianism" in that word's standard meaning.

Comment author: ciphergoth 17 April 2009 03:48:09PM 2 points

The SEP agrees with you:

[...] classic utilitarianism is actually a complex combination of many distinct claims, including the following claims about the moral rightness of acts:

[...] Equal Consideration = in determining moral rightness, benefits to one person matter just as much as similar benefits to any other person (= all who count count equally).

Comment author: MBlume 17 April 2009 04:01:49PM 2 points

The SEP

For just a moment I was thinking "How is the Somebody Else's Problem field involved?"

Comment author: conchis 17 April 2009 04:16:16PM 2 points

I'd put a slight gloss on this.

The problem is that "utilitarianism", as used in much of the literature, does seem to have more than one standard meaning. In the narrow (classical) utilitarian sense, steven0461 and the SEP are absolutely right to insist that it imposes equal weights. However, there's definitely a literature that uses the term in a more general sense, which includes weighted utilitarianism as a possibility. Contra Kaj, however, even this sense does seem to exclude agent-relative weights.

As much of this literature is in economics, perhaps it's non-standard in philosophy. It does, however, have a fairly long pedigree.

Comment author: Kaj_Sotala 17 April 2009 08:36:29PM 0 points

Contra Kaj, however, even this sense does seem to exclude agent-relative weights.

Consider utilitarianism that includes animals vs. utilitarianism that doesn't. If some people can give more or less weight to a somewhat arbitrarily defined group of subjects (animals), it doesn't seem much of a stretch to also allow some people to weight another arbitrarily chosen group (family members) more (or less).

Classical utilitarianism is more strictly defined, but as you point out, we're not talking about just classical utilitarianism here.

Comment author: conchis 17 April 2009 09:09:32PM 1 point

I don't think that's a very good example of agent-relativity. Those who would argue that only humans matter seldom (if ever) do so on the basis of agent-relative concerns: it's not that I am supposed to have a special obligation to humans because I'm human; it's that only humans are supposed to matter at all.

In any event, the point wasn't that agent-relative weights don't make sense; it's that they're not part of a standard definition of utilitarianism, even in a broad sense. I still think that's an accurate characterization of professional usage, but if you have specific examples to the contrary, I'd be open to changing my mind.

Gratuitous nitpick: humans are animals too.

Comment author: Kaj_Sotala 18 April 2009 07:46:05AM 1 point

You may be right. But we're inching pretty close towards arguing by definition now. So to avoid that, let me rephrase my original response to mattnewport's question:

You're right, by most interpretations utilitarianism does weigh everybody equally. However, if that's the only thing in utilitarianism that you disagree with, and like the ethical system otherwise, then go ahead and adopt as your moral system a utilitarianism-derived one that differs from normal utilitarianism only in that you weight your family more than others. It may not be utilitarianism, but why should you care about what your moral system is called?

Comment author: conchis 18 April 2009 02:37:30PM 1 point

I completely agree with your reframing.

I (mistakenly) thought your original point was a definitional one, and that we had been discussing definitions the entire time. Apologies.

Comment author: Kaj_Sotala 19 April 2009 07:32:22PM 0 points

No problem. It happens.

Comment author: AndySimpson 17 April 2009 03:48:25PM 0 points

In utilitarianism, sometimes some animals can be more equal than others. It's just that their lives must be of greater utility for some reason. I think sentimental distinctions between people would be rejected by most utilitarians as a reason to consider them more important.

Comment author: Peter_Twieg 17 April 2009 02:40:47PM 0 points

Utilitarianism doesn't describe how you should feel; it simply describes "the good". It's very possible that accepting utilitarianism's implications is so abhorrent to you that the world would be a worse place if you did (because you're unhappy, or because embracing utilitarianism might actually make you worse at promoting utility); in that case, by all means don't endorse it, at least not at whatever level you find repugnant. This is what Derek Parfit labels a "self-effacing" philosophy, I believe.

There are a variety of approaches to actually being a practicing utilitarian, however. Obviously we don't have the computational power required to properly deduce every future consequence of our actions, so at a practical level utilitarians will always support heuristics of some sort. One of these heuristics may dictate that you should always prefer serial killers to be shot over your sister for the kinds of reasons that gjm describes. This might not always lead to the right conclusion from a utilitarian perspective, but it probably wouldn't be a blameworthy one, as you did the best you could under incomplete information about the universe.