This dichotomy also describes akrasia fairly well, though I'd hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games/watch TV/post on blogs because it feels good, and we feel bad about it because, first, "it feels good" is not recognized as a major positive value in most of our utilon functions, and second, doing our homework is recognized as a major positive value in our utilon functions. The experience makes us procrastinate and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.
I'm sorry, but this cannot possibly explain the akrasia I have experienced. Living a purposefully hedonistic life is widely considered low-status, so most people do not admit to their consciously hedonistic goals. Thus, the goals we hear about akrasia preventing people from pursuing are all noble, selfless goals: "I would like to do this thing that provides me utility but not hedonistic pleasure, but that damned akrasia is stopping me." With that as your only evidence, it is not unreasonable that you should conclude...
This discussion has made me feel I don't understand what "utilon" really means. Hedons are easy: clearly happiness and pleasure exist, so we can try to measure them. But what are utilons?
"Whatever we maximize"? But we're not rational, quite inefficient, and whatever we actually maximize as we are today probably includes a lot of pain and failures and isn't something we consciously want.
"Whatever we self-report as maximizing"? Most of the time this is very different from what we actually try to maximize in practice, because self-reporting is signaling. And for a lot of people it includes plans or goals that, when achieved, are likely (or even intended) to change their top-level goals drastically.
"If we are asked to choose between two futures, and we prefer one, that one is said to be of higher utility." That's a definition, yes, but it doesn't really prove that the collection-of-preferred-universes can be described any more easily than the real decision function of which utilons are supposed to be a simplification. For instance, what if by minor and apparently irrelevant changes in the present, I can heavily influence all of people's pre
Let's try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,[1] offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal.
This example is hardly hypothetical. According to GiveWell, you can save the life of one African person for $200 - $1000.
$200-$1,000 per life saved
Almost everyone has spent $5000 on things that they didn't need - for example, a new car rather than a second-hand one, refurbishing a room in the house, or a family holiday. $5000 comes nowhere close to "doubling your hedons" - in fact it probably hardly makes a dent. Furthermore, almost everyone is aware of this fact, but we conveniently don't pay any attention to it, and our subconscious minds don't remind us about it because the deaths in Africa are remote and impersonal.
Since I know of very few people who spend literally all their spare money on saving lives at $1000 per life, and almost everyone would honestly claim that they would pay $200 - 1000 to save someone from a painful death, it is fair to say that people pretty universally don't maximize "utilons".
I suspect we already indirectly and incrementally cause the deaths of unknown persons in order to accumulate personal wealth and pleasure. Consider goods produced in factories whose air and water contamination harms the farmers already living there. While I'd like to punish those goods' producers by buying alternatives, it's apparently not worth my time*.
Probably, faced with the requirement to directly and completely cause a death, we would feel wrong enough about this (even with a promise of memory-wipe) to desist. But I find it difficult to consider such a situation...
Nice post! This distinction should clear up several confusions. Incidentally, I don't know if there's a word for the opposite of a utilon, but the antonym of "hedon" is "dolor".
The card drawing paradox is isomorphic to the old St. Petersburg paradox, the game where you double your money each time a coin comes up heads (the paradox being that simplistic theory assigns infinite value to both games). The solution is the same in each case: first, the entity underwriting the game cannot pay out infinite resources, and second, your utility function is not infinitely scalable in whatever resource is being paid.
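To make those two points concrete, here is a rough numerical sketch (my own illustration, assuming the standard version of the doubling game in which k consecutive heads pay 2^k dollars): the naive expected value grows without bound, but either a capped bankroll or a logarithmic utility function yields a modest, finite number.

```python
# Rough sketch of the doubling game: a fair coin is flipped until it lands
# tails, and k consecutive heads pay out 2**k dollars.  Each term of the naive
# expectation is 0.5**(k + 1) * 2**k == 0.5, so the "simplistic" expected value
# grows without bound; a finite bankroll or log utility both tame it.
import math

def capped_expected_value(cap_exp):
    """Expected payout if the banker can pay out at most 2**cap_exp dollars."""
    ev = sum(0.5 ** (k + 1) * 2 ** k for k in range(cap_exp + 1))
    ev += 0.5 ** (cap_exp + 1) * 2 ** cap_exp  # longer runs still pay only the cap
    return ev

def expected_log_utility(terms=1000):
    """Expected log-utility of the uncapped game (series truncated numerically)."""
    return sum(0.5 ** (k + 1) * k * math.log(2) for k in range(terms))

print(capped_expected_value(40))  # 21.0 -- roughly $21, even with a ~$1.1 trillion cap
print(expected_log_utility())     # ~0.693 (= ln 2) -- finite even without a cap
```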
I have the sense that much of this was written as a response to this paradox in which maximizing expected utility tells you to draw cards until you die.
Psychohistorian wrote:
There's a bigger problem that causes our intuition to reject this hypothetical as "just wrong:" it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy.
The paradox is stated in utilons, no...
"Lots of people who want to will get really, really high" is only very rarely touted as a major argument.
In public policy discussions, that's true. In private conversations with individuals, I've heard that reason more than any other.
Depending on your purpose, I think it's probably useful to distinguish between self-regarding and other-regarding utilons as well. A consequentialist moral theory may want to maximise the (weighted) sum of (some transform of) self-regarding utilons, but to exclude other-regarding utilons from the maximand (to avoid "double-counting").
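To make the double-counting worry concrete, here is a toy two-person sketch (an invented example, not anything from the post or thread): if Alice's utility already contains a term for Bob's welfare, summing everyone's total utility counts Bob's welfare more than once, whereas summing only self-regarding utilons counts each person exactly once.

```python
# Toy two-person example of the double-counting worry (entirely made up).
self_regarding = {"alice": 10.0, "bob": 4.0}

# Alice is other-regarding: she gets extra utility worth half of Bob's welfare.
other_regarding = {"alice": 0.5 * self_regarding["bob"], "bob": 0.0}

naive_total = sum(self_regarding[p] + other_regarding[p] for p in self_regarding)
corrected_total = sum(self_regarding.values())  # maximand drops other-regarding terms

print(naive_total)      # 16.0 -- Bob's welfare is effectively counted 1.5 times
print(corrected_total)  # 14.0 -- each person's welfare is counted exactly once
```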
The other interesting question is: what does it actually mean to "value" something?
In what way are hedons anything other than a subset of utilons? Please clarify.
Increasing happiness is a part of human utility; it just isn't all of it. This post doesn't really make sense because it is arguing superset vs. subset.
Re: I'm going to use "utilons" to refer to value utility units and "hedons" to refer to experiential utility units.
This seems contrary to the usage of the LessWrong Wiki:
http://wiki.lesswrong.com/wiki/Utilon
http://wiki.lesswrong.com/wiki/Hedon
The Wiki has the better usage - much better usage.
I'm not convinced by your examples that people generally value utilons over hedons.
For your first example, you feel like you (and others, by generalization) would reject Omega's deal, but how much can you trust this self-prediction? Especially given that this situation will never occur, you don't have much incentive to predict correctly if the answer isn't flattering.
For the drug use example, I can think of many other possible reasons that people would oppose drugs other than valuing utilons over hedons. Society might be split into two groups: drug-lovers...
"dead in an hour with P=~1-1.88*10^165" should probably have 10^(-165) so that P is just less than 1.
Why doesn't this post show up under "new" anymore?
[And what possible reason did someone have for down-voting that question?]
Related to: Would Your Real Preferences Please Stand Up?
I have to admit, there are a lot of people I don't care about. Comfortably over six billion, I would bet. It's not that I'm a callous person; I simply don't know that many people, and even if I did I hardly have time to process that much information. Every day hundreds of millions of incredibly wonderful and terrible things happen to people out there, and if they didn't, I wouldn't even know it.
On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I'll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it's a good thing and I plan to work hard to figure out how to create more happiness.
This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew, in a place you never heard of, dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people's happiness (or at least your perception of it), but there's no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don't know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such things from happening. I'm going to use "utilons" to refer to value utility units and "hedons" to refer to experiential utility units; I'll demonstrate shortly that this is a meaningful distinction, and that our valuing utilons over hedons explains much of what looks like failure in our moral reasoning.
Let's try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,[1] offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal. Do you take it? What about five hundred? Five hundred thousand?
I can't speak for you, so I'll go through my evaluation of this deal and hope it generalizes reasonably well. I don't take it at any of these values. There's no clear hedonistic explanation for this - after all, I forget the deal ever happened. It would be absurd to say that the disutility I experience between entering the agreement and having my memory wiped is so tremendous as to outweigh everything I will experience for the rest of my life (particularly since I immediately forget this disutility), and that is the only way I can see to explain my rejection in terms of hedons. In fact, even if the memory wipe weren't part of the deal, I doubt the act of having a few people killed would really cause me more displeasure than doubling my future hedons would yield; I'd bet more than five people have died in rural China while I've been writing this post, and it hasn't upset me in the slightest.
The reason I don't take the deal is my values; I believe it's wrong to kill random people to improve my own happiness. If I knew that they were people who sincerely wanted to be dead, or that they were, say, serial killers, my decision would be different, even though my hedonic experience would probably not be that different. And if I knew that millions of people in China would be significantly happier as a result, there's a good chance I'd take the deal even if it didn't help me at all. I seem to be maximizing utilons and not hedons, and I think most people would do the same.
Also, as another example so obvious that I feel like it's cheating: most people reading the headline "1000 workers die in Beijing factory fire" will not feel ten times the hedonic blow they would from "100 workers die in Beijing factory fire," even if they live in Beijing. That it is ten times worse is measured in our values, not our experiences; those values are correct, since roughly ten times as many people have seriously suffered from the fire, but if we're talking about hedons, no individual suffers ten times as much.
In general, people value utilons much more than hedons. The fact that drugs are illegal is one illustration of this. Arguments for (and against) drug legalization are an even better illustration. Such arguments usually involve weakening organized crime, increasing safety, reducing criminal behaviour, reducing expenditures on prisons, improving treatment for addicts, and similar values. "Lots of people who want to will get really, really high" is only very rarely touted as a major argument, even though the net hedonic value of drug legalization would probably be massive (just as the hedonic cost of Prohibition in the 1920s was clearly massive).
As a practical matter, this is important because many people do things precisely because they are important in their abstract value system, even if they result in little or no hedonic payoff. This, I believe, is an excellent explanation of why success is no guarantee of happiness; success is conducive to getting hedons, but it also tends to cost a lot of hedons, so there is little guarantee that earned wealth will be a net positive (and, with anchoring, hedons will get a lot more expensive than they are for the less successful). On the other hand, earning wealth (or status) is a very common value, so people pursue it irrespective of its hedonistic payoff.
It may be convenient to argue that the hedonistic payoffs must balance out, but this does not make it the case in reality. Some people work hard on assignments that are practically meaningless to their long-term happiness because they believe they should, not because they have any delusions about their hedonistic payoff. To say, "If you did X instead of Y because you 'value' X, then the hedonistic cost of breaking your values must exceed Y-X," is to win an argument by definition; you have to actually figure out the values and see if that's true. If it's not, then I'm not a hedon-maximizer. You can't then assert that I'm an "irrational" hedon-maximizer unless you can make some very clear distinction between "irrationally maximizing hedons" and "maximizing something other than hedons."
This dichotomy also describes akrasia fairly well, though I'd hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games/watch TV/post on blogs because it feels good, and we feel bad about it because, first, "it feels good" is not recognized as a major positive value in most of our utilon functions, and second, doing our homework is recognized as a major positive value in our utilon functions. The experience makes us procrastinate and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.
Furthermore, this may cause our intuition to seriously interfere with utility-based hypotheticals, such as these. Basically, you choose to draw cards, one at a time, each of which has a 10% chance of killing you and a 90% chance of doubling your utility. Logically, if your current utility is positive and you assign a utility of zero[2] (or greater) to your death (which makes sense in hedons, but not necessarily in utilons), you should draw cards until you die. The problem, of course, is that if you draw a card a second, you will be dead within a minute with P ≈ 0.9982, and dead within an hour with P ≈ 1 - 1.88*10^(-165).
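As a quick arithmetic check (an added illustration, not part of the original post), these figures follow from 60 draws in a minute and 3,600 in an hour, and the 2^60 in the next paragraph is just the number of doublings collected by someone who survives the first minute:

```python
# Arithmetic check of the card-drawing figures: one draw per second, each with
# a 10% chance of death and a 90% chance of doubling your utility.
p_survive_one_draw = 0.9

print(1 - p_survive_one_draw ** 60)  # dead within a minute: ~0.9982
print(p_survive_one_draw ** 3600)    # chance of surviving a full hour: ~1.9e-165
print(2 ** 60)                       # doublings after a lucky minute: 1152921504606846976 (~1.15e18)
```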
There's a bigger problem that causes our intuition to reject this hypothetical as "just wrong:" it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy. As for utilons, most people assign a much greater value to "not dying" than to gaining more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading may return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.
Any useful utilitarian calculus needs to take into account that hedonic utility is, for most people, incomplete. Value utility is often a major motivating factor, and it need not translate perfectly into hedonic terms. Incorporating value utility seems necessary to have a map of human happiness that actually matches the territory. It may also be a good thing that values can be easier to change than hedonic experiences. But assuming people maximize hedons, and then assuming quantitative values that conform to this assumption, proves nothing about what actually motivates people and risks serious systematic error in furthering human happiness.
We know that our experiential utility cannot encompass all that really matters to us, so we place a value system above it, precisely so that we do not risk destroying the whole world to make ourselves marginally happier, or pursue any other means of gaining happiness that carries a tremendous potential cost.
[1] Apparently Omega has started a firm due to excessive demand for its services, or to avoid having to talk to me.
[2] This assumption is rather problematic, though zero seems to be the only correct value of death in hedons. But imagine that you just won the lottery (without buying a ticket, presumably) and got selected as the most important, intelligent, attractive person in whatever field or social circle you care most about. How bad would it be to drop dead? Now, imagine you just got captured by some psychopath and are going to be tortured for years until you eventually die. How bad would it be to drop dead? Assigning zero (or the same value, period) to both outcomes seems wrong. I realize that you can say that death in one is negative and in the other is positive relative to expected utility, but still, the value of death does not seem identical, so I'm suspicious of assigning it the same value in both cases. I realize this is hand-wavy; I think I'd need a separate post to address this issue properly.