Related to: Would Your Real Preferences Please Stand Up?
I have to admit, there are a lot of people I don't care about. Comfortably over six billion, I would bet. It's not that I'm a callous person; I simply don't know that many people, and even if I did I hardly have time to process that much information. Every day hundreds of millions of incredibly wonderful and terrible things happen to people out there, and if they didn't, I wouldn't even know it.
On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I'll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it's a good thing and I plan to work hard to figure out how to create more happiness.
This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew, in a place you never heard of, dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people's happiness (or at least for your perception of it), but there's no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don't know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such things from happening. I'm going to use "utilons" to refer to units of value utility and "hedons" to refer to units of experiential utility; I'll demonstrate shortly that this is a meaningful distinction, and that the fact that we value utilons over hedons explains much of why our moral reasoning appears to fail.
Let's try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,[1] offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal. Do you take it? What about five hundred? Five hundred thousand?
I can't speak for you, so I'll go through my evaluation of this deal and hope it generalizes reasonably well. I don't take it at any of these values. There's no clear hedonistic explanation for this - after all, I forget it happened. It would be absurd to say that the disutility I experience between entering the agreement and having my memory wiped is so tremendous as to outweigh everything I will experience for the rest of my life (particularly since I immediately forget this disutility), and that is the only way I can see that my rejection could be explained in hedons. In fact, even if the memory wipe weren't part of the deal, I doubt that having a few people killed would really cause me more displeasure than doubling my future hedons would yield; I'd bet more than five people have died in rural China while I've been writing this post, and it hasn't upset me in the slightest.
The reason I don't take the deal is my values; I believe it's wrong to kill random people to improve my own happiness. If I knew that they were people who sincerely wanted to be dead, or that they were, say, serial killers, my decision would be different, even though my hedonic experience would probably not be that different. If, in addition, I knew that millions of people in China would be significantly happier as a result, there's a good chance I'd take the deal even if it didn't help me at all. I seem to be maximizing utilons and not hedons, and I think most people would do the same.
Here is another example, so obvious that I feel like it's cheating: most people who read the headline "1,000 workers die in Beijing factory fire" will not feel ten times the hedonic blow of reading "100 workers die in Beijing factory fire," even if they live in Beijing. That the larger fire is ten times worse is measured in our values, not our experiences; those values are tracking something real, since roughly ten times as many people have seriously suffered from the fire, but counted in hedons, no individual reader suffers ten times as much.
In general, people value utilons much more than hedons. The illegality of drugs is one illustration of this; the arguments for (and against) drug legalization are an even better one. Such arguments usually involve weakening organized crime, increasing safety, reducing criminal behavior, reducing expenditures on prisons, improving treatment for addicts, and similar values. "Lots of people who want to will get really, really high" is only very rarely touted as a major argument, even though the net hedonic value of drug legalization would probably be massive (just as the hedonic cost of Prohibition in the 1920s was clearly massive).
As a practical matter, this is important because many people do things precisely because those things are important in their abstract value system, even if they yield little or no hedonic payoff. This, I believe, is an excellent explanation of why success is no guarantee of happiness: success is conducive to getting hedons, but it also tends to cost a lot of hedons, so there is little guarantee that earned wealth will be a net positive (and, thanks to anchoring, hedons get a lot more expensive for the successful than for everyone else). On the other hand, earning wealth (or status) is a very common value, so people pursue it irrespective of its hedonic payoff.
It may be convenient to argue that the hedonic payoffs must balance out, but that does not make it the case in reality. Some people work hard on assignments that are practically meaningless to their long-term happiness because they believe they should, not because they have any delusions about the hedonic payoff. To say, "If you did X instead of Y because you 'value' X, then the hedonic cost of breaking your values must exceed Y - X," is to win an argument by definition; you have to actually figure out the values and check whether that's true. If it's not, then I'm not a hedon-maximizer. You can't then assert that I'm an "irrational" hedon-maximizer unless you can draw some very clear distinction between "irrationally maximizing hedons" and "maximizing something other than hedons."
This dichotomy also describes akrasia fairly well, though I'd hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games/watch TV/post on blogs because it feels good, and we feel bad about it because, first, "it feels good" is not recognized as a major positive value in most of our utilon functions, and second, doing our homework is recognized as a major positive value in our utilon functions. The experience makes us procrastinate, and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.
Furthermore, this may cause our intuition to seriously interfere with utility-based hypotheticals, such as these. Basically, you choose to draw cards, one at a time, each of which has a 10% chance of killing you and a 90% chance of doubling your utility. Logically, if your current utility is positive and you assign a utility of zero[2] (or greater) to your death (which makes sense in hedons, but not necessarily in utilons), you should draw cards until you die. The problem, of course, is that if you draw one card a second, you will be dead within a minute with probability ≈ 0.9982, and dead within an hour with probability ≈ 1 - 1.88×10^-165.
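For concreteness, here is a quick sketch of the arithmetic behind those numbers; it is purely illustrative, and only the 90%/10% odds come from the hypothetical itself (the utility value and function names are mine):

```python
# Sketch of the card-drawing arithmetic (illustrative only).
P_DOUBLE = 0.9   # chance a card doubles your utility
P_DEATH = 0.1    # chance a card kills you

def expected_utility_of_drawing(current_utility, death_utility=0.0):
    """Expected utility of drawing one more card, valuing death at death_utility."""
    return P_DOUBLE * (2 * current_utility) + P_DEATH * death_utility

u = 1.0  # any positive current utility will do
print(expected_utility_of_drawing(u))   # 1.8 > 1.0, so "logically" you keep drawing

# ...but draw once per second and death becomes almost certain:
print(1 - P_DOUBLE ** 60)    # ~0.99820: dead within a minute
print(P_DOUBLE ** 3600)      # ~1.88e-165: chance of surviving a full hour
print(1 - P_DOUBLE ** 3600)  # indistinguishable from 1 in floating point
```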
There's a bigger problem that causes our intuition to reject this hypothetical as "just wrong": it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy. As for utilons, most people assign a much greater value to "not dying" than to having more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading may return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.
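To make "significant enough negative value" concrete, here is one illustrative way to see where the utilon reading flips; the -8U threshold follows from the 90%/10% odds above, but the specific numbers and function name are my own, not part of the original hypothetical. A single draw from current utility U has expected value 1.8U + 0.1D, where D is the utilon value assigned to death, and that is no better than standing pat at U whenever D ≤ -8U:

```python
# Illustrative threshold: how negative must death be (in utilons)
# before even one draw is a bad deal? Assumes the 90%/10% odds above.
def draw_is_worth_it(current_utility, death_utility):
    """Compare the expected utility of one draw against keeping what you have."""
    expected_if_draw = 0.9 * (2 * current_utility) + 0.1 * death_utility
    return expected_if_draw > current_utility

u = 1.0
print(draw_is_worth_it(u, death_utility=0.0))    # True: with death valued at zero, always draw
print(draw_is_worth_it(u, death_utility=-8.0))   # False: 1.8 - 0.8 = 1.0 exactly breaks even
print(draw_is_worth_it(u, death_utility=-100.0)) # False: a sufficiently bad death ends the game
```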
Any useful utilitarian calculus needs to take into account that hedonic utility is, for most people, incomplete. Value utility is often a major motivating factor, and it need not translate perfectly into hedonic terms. Incorporating value utility seems necessary to have a map of human happiness that actually matches the territory. It may also be a good thing that values can be easier to change than hedonic experiences. But assuming people maximize hedons, and then assuming quantitative values that conform to this assumption, proves nothing about what actually motivates people and risks serious systematic error in furthering human happiness.
We know that our experiential utility cannot encompass all that really matters to us, so we place a value system above it, precisely so that we do not risk destroying the whole world to make ourselves marginally happier, or pursue any other means of gaining happiness that carries tremendous potential costs.
[1] Apparently Omega has started a firm due to excessive demand for its services, or to avoid having to talk to me.
[2] This assumption is rather problematic, though zero seems to be the only defensible value for death in hedons. But imagine that you just won the lottery (without buying a ticket, presumably) and were selected as the most important, intelligent, attractive person in whatever field or social circle you care most about. How bad would it be to drop dead? Now, imagine you have just been captured by some psychopath and are going to be tortured for years until you eventually die. How bad would it be to drop dead? Assigning zero (or the same value, period) to both outcomes seems wrong. I realize you can say that death in one case is negative and in the other is positive relative to expected utility, but still, the value of death does not seem identical, so I'm suspicious of assigning it the same value in both cases. I realize this is hand-wavy; I think I'd need a separate post to address the issue properly.