## VNM expected utility theory: uses, abuses, and interpretation

17 April 2010 08:23PM

When interpreted conservatively, the von Neumann-Morgenstern rationality axioms and utility theorem are an indispensable tool for the normative study of rationality, deserving of many thought experiments and attentive decision theory.  It's one more reason I'm glad to have been born after the 1940s.  Yet there is apprehension about its validity, beyond merely confusing it with Bentham utilitarianism (as highlighted by Matt Simpson).  I want to describe not only what VNM utility is really meant for, but a contextual reinterpretation of its meaning, so that it may hopefully be used more frequently, confidently, and appropriately.

### 1.  Preliminary discussion and precautions

The idea of John von Neumann and Oskar Morgenstern is that, if you behave a certain way, then it turns out you're maximizing the expected value of a particular function.  Very cool!  And their description of "a certain way" is very compelling: a list of four reasonable-seeming axioms.  If you haven't already, check out the Von Neumann-Morgenstern utility theorem, a mathematical result which makes their claim rigorous, and true.

VNM utility is a decision utility, in that it aims to characterize the decision-making of a rational agent.  One great feature is that it implicitly accounts for risk aversion: not risking \$100 for a 10% chance to win \$1000 and 90% chance to win \$0 just means that for you, utility(\$100) > 10%·utility(\$1000) + 90%·utility(\$0).
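As a quick sketch of that inequality: the square-root utility function below is just an illustrative example of a concave (risk-averse) utility, not something the theorem prescribes.

```python
import math

def utility(dollars):
    # Hypothetical concave utility function; concavity is what
    # produces risk aversion in the expected-utility framework.
    return math.sqrt(dollars)

sure_thing = utility(100)                          # utility($100)
gamble = 0.10 * utility(1000) + 0.90 * utility(0)  # expected utility of the gamble

# A risk-averse agent declines the gamble:
print(sure_thing > gamble)  # True: sqrt(100) = 10 > 0.1*sqrt(1000) ≈ 3.16
```

Declining the gamble is not "irrational caution" here; it just *is* the statement that the agent's utility function is concave over these outcomes.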

But as the Wikipedia article explains nicely, VNM utility is:

1. not designed to predict the behavior of "irrational" individuals (like real people in a real economy);
2. not designed to characterize well-being, but to characterize decisions;
3. not designed to measure the value of items, but the value of outcomes;
4. only defined up to a scalar multiple and additive constant (acting with utility function U(X) is the same as acting with a·U(X)+b, if a>0);
5. not designed to be added up or compared between a number of individuals;
6. not something that can be "sacrificed" in favor of others in a meaningful way.
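Point (4) can be checked directly: a positive affine transform of a utility function ranks every pair of lotteries the same way, since expectation is linear.  A minimal sketch (the particular functions and lotteries are made up for illustration):

```python
def expected(u, lottery):
    # lottery: list of (probability, outcome) pairs
    return sum(p * u(x) for p, x in lottery)

u = lambda x: x ** 0.5      # some utility function (illustrative)
v = lambda x: 3 * u(x) + 7  # positive affine transform: a=3 > 0, b=7

lottery_A = [(0.5, 100), (0.5, 400)]  # 50/50 between $100 and $400
lottery_B = [(1.0, 200)]              # $200 for sure

# Both functions induce the same preference between the two lotteries:
prefers_A_under_u = expected(u, lottery_A) > expected(u, lottery_B)
prefers_A_under_v = expected(v, lottery_A) > expected(v, lottery_B)
print(prefers_A_under_u == prefers_A_under_v)  # True
```

This is why only the *ordering* of expected utilities is meaningful, and why the raw numbers U(X) carry no significance on their own.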

[ETA]  Additionally, in the VNM theorem the probabilities are understood to be known to the agent as they are presented, and to come from a source of randomness whose outcomes are not significant to the agent.  Without these assumptions, its proof doesn't work.

Because of (4), one often considers marginal utilities of the form U(X)-U(Y), to cancel the ambiguity in the additive constant b.  This is totally legitimate, and faithful to the mathematical conception of VNM utility.

Because of (5), people often "normalize" VNM utility to eliminate ambiguity in both constants, so that utilities are unique numbers that can be added across multiple agents.  One way is to declare that every person in some situation values \$1 at 1 utilon (a fictional unit of measure of utility), and \$0 at 0.  I think a more meaningful and applicable normalization is to fix mean and variance with respect to certain outcomes (next section).
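The "\$0 at 0, \$1 at 1" normalization amounts to choosing the constants a and b for each agent.  A minimal sketch, with an arbitrary illustrative utility function:

```python
def normalized(u, zero_outcome, unit_outcome):
    """Affinely rescale u so that u(zero_outcome) = 0 and u(unit_outcome) = 1.

    Legitimate because VNM utility is only defined up to a*U(X)+b with a > 0;
    this just picks one representative from the equivalence class.
    """
    a = 1.0 / (u(unit_outcome) - u(zero_outcome))
    b = -a * u(zero_outcome)
    return lambda x: a * u(x) + b

u = lambda dollars: (dollars + 1) ** 0.5  # illustrative utility function
n = normalized(u, 0, 1)                   # pin $0 -> 0 utilons, $1 -> 1 utilon
print(round(n(0), 6), round(n(1), 6))     # 0.0 1.0
```

Note that nothing in the VNM theorem itself licenses adding these normalized numbers across agents; the normalization is an extra modeling choice layered on top.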

Because of (6), characterizing the altruism of a VNM-rational agent by how he sacrifices his own VNM utility is the wrong approach.  Indeed, such a sacrifice is a contradiction.  Kahneman suggests [1], and I agree, that something else should be added or subtracted to determine the total, comparative, or average well-being of individuals.  I'd call it "welfare", to avoid confusing it with VNM utility.  Kahneman calls it E-utility, for "experienced utility", a connotation I'll avoid.  Intuitively, this is certainly something you could sacrifice for others, or have more of compared to others.  True, a given person's VNM utility is likely highly correlated with her personal "welfare", but I wouldn't consider it an accurate approximation.

So if not collective welfare, then what could cross-agent comparisons or sums of VNM utilities indicate?  Well, they're meant to characterize decisions, so one meaningful application is to collective decision-making:

## Average utilitarianism must be correct?

06 April 2009 05:10PM

I said this in a comment on Real-life entropic weirdness, but it's getting off-topic there, so I'm posting it here.

My original writeup was confusing, because I used some non-standard terminology, and because I wasn't familiar with the crucial theorem.  We cleared up the terminological confusion (thanks esp. to conchis and Vladimir Nesov), but the question remains.  I rewrote the title yet again, and have here a restatement that I hope is clearer.

- We have a utility function u(outcome) that gives a utility for one possible outcome.  (Note the word utility.  That means your diminishing marginal utility, all your preferences, and your aggregation function for a single outcome are already incorporated into this function.  There is no need to analyze u further, as long as we agree on using a utility function.)
- We have a utility function U(lottery) that gives a utility for a probability distribution over all possible outcomes.
- The von Neumann-Morgenstern theorem indicates that, given 4 reasonable axioms about U, the only reasonable form for U is to calculate the expected value of u(outcome) over all possible outcomes.  This is why we constantly talk on LW about rationality as maximizing expected utility.
- This means that your utility function U is indifferent with regard to whether the distribution of utility is equitable among your future selves.  Giving one future self u=10 and another u=0 is just as good as giving one u=5 and another u=5.
- This is the same ethical judgement that an average utilitarian makes when they say that, to calculate social good, we should calculate the average utility of the population; modulo the problems that population can change and that not all people are equal.  This is clearer if you use a many-worlds interpretation, and think of maximizing expected value over possible futures as applying average utilitarianism to the population of all possible future yous.
- Therefore, I think that, if the 4 axioms are valid when calculating U(lottery), they are probably also valid when calculating not our private utility, but a social utility function s(outcome), which sums over people in a similar way to how U(lottery) sums over possible worlds.  The theorem then shows that we should set s(outcome) = the average value of all of the utilities for the different people involved.  (In other words, average utilitarianism is correct.)  Either that, or the axioms are inappropriate for both U and s, and we should not define rationality as maximizing expected utility.
- (I am not saying that the theorem reaches down through U to say anything directly about the form of u(outcome).  I am saying that choosing a shape for U(lottery) is the same type of ethical decision as choosing a shape for s(outcome); the theorem tells us what U(lottery) should look like; and if that ethical decision is right for U(lottery), it should also be right for s(outcome).)
- And yet, average utilitarianism asserts that equity of utility, even among equals, has no utility.  This is shocking, especially to Americans.
- It is even more shocking that it is thus possible to prove, given reasonable assumptions, which type of utilitarianism is correct.  One then wonders what other seemingly arbitrary ethical valuations actually have provable answers given reasonable assumptions.
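The indifference-to-equity claim above is just linearity of expectation, and can be verified in one line.  A minimal sketch of the u=10/u=0 versus u=5/u=5 comparison from the list:

```python
def expected_u(lottery):
    # lottery: list of (probability, utility) pairs over future selves
    return sum(p * u for p, u in lottery)

inequitable = [(0.5, 10), (0.5, 0)]  # one future self gets u=10, the other u=0
equitable   = [(0.5, 5), (0.5, 5)]   # both future selves get u=5

# Expected utility cannot distinguish the two distributions:
print(expected_u(inequitable) == expected_u(equitable))  # True: both equal 5.0
```

Any strict preference for the equitable distribution would have to show up as concavity in u itself, not in U, which is exactly the point of the first bullet.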

Some problems with average utilitarianism from the Stanford Encyclopedia of Philosophy:

> Despite these advantages, average utilitarianism has not obtained much acceptance in the philosophical literature. This is due to the fact that the principle has implications generally regarded as highly counterintuitive. For instance, the principle implies that for any population consisting of very good lives there is a better population consisting of just one person leading a life at a slightly higher level of well-being (Parfit 1984 chapter 19). More dramatically, the principle also implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit 1984). That total well-being should not matter when we are considering lives worth ending is hard to accept. Moreover, average utilitarianism has implications very similar to the Repugnant Conclusion (see Sikora 1975; Anglin 1977).

(If you assign different weights to the utilities of different people, we could probably get the same result by considering a person with weight W to be equivalent to W copies of a person with weight 1.)