Would you prefer a 50% chance of gaining €10, one chance in a million of gaining €5 million, or a guaranteed €5? The standard position on Less Wrong is that the answer depends solely on the difference between cash and utility. If your utility scales less-than-linearly with money, you are risk averse and should choose the last option; if it scales more-than-linearly, you are risk-loving and should choose the second one. If we replaced €'s with utils in the example above, then it would simply be irrational to prefer one option over the others.

 

There are mathematical proofs of that result, but there are also strong intuitive arguments for it. What's the best way of seeing this? Imagine that X1 and X2 are two probability distributions, with means u1 and u2 and variances v1 and v2. If the two distributions are independent, then the sum X1 + X2 has mean u1 + u2, and variance v1 + v2.

 

Now if we multiply the returns of any distribution by a constant r, the mean scales by r and the variance scales by r². Consequently, if we have n probability distributions X1, X2, ..., Xn representing n equally expensive investments, the expected average return is (u1 + u2 + ... + un)/n, while the variance of this average is (v1 + v2 + ... + vn)/n². If the vi are bounded, then once we make n large enough, that variance must tend to zero. So if you have many investments, your averaged actual returns will be, with high probability, very close to your expected returns.
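
To make the scaling concrete, here is a minimal simulation sketch in Python; the specific 50/50 investment and the trial counts are illustrative assumptions, not part of the original argument.

```python
# Minimal sketch: averaging n independent investments shrinks the variance of
# the average roughly as 1/n, so realised returns concentrate around expected
# returns. The specific investment used is an illustrative assumption.
import random

def average_return_stats(n_investments, n_trials=2000):
    """Estimate the mean and variance of the average return over n investments."""
    averages = []
    for _ in range(n_trials):
        # Each investment: 50% chance of returning 10, 50% chance of 0
        # (mean 5, variance 25).
        returns = [10 if random.random() < 0.5 else 0 for _ in range(n_investments)]
        averages.append(sum(returns) / n_investments)
    mean = sum(averages) / n_trials
    variance = sum((a - mean) ** 2 for a in averages) / n_trials
    return mean, variance

for n in (1, 10, 100, 1000):
    mean, var = average_return_stats(n)
    print(f"n={n:4d}   mean of average ~ {mean:.2f}   variance of average ~ {var:.3f}")
# The mean stays near 5 while the variance falls roughly as 25/n.
```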

 

Thus there is no better strategy than to always follow expected utility. There is no such thing as sensible risk-aversion under these conditions, as there is no actual risk: you expect your returns to be your expected returns. Even if you yourself do not have enough investment opportunities to smooth out the uncertainty in this way, you could always aggregate your own money with others, through insurance or index funds, and achieve the same result. Buying a triple-rollover lottery ticket may be unwise; but being part of a consortium that buys up every ticket for a triple-rollover lottery is just a dull, safe investment. If you have altruistic preferences, you can even aggregate results across the planet simply by encouraging more people to follow expected returns. So, case closed it seems; departing from expected returns is irrational.

 

But the devil is in the detail of the condition 'once we make n large enough', because there are risk distributions so skewed that no-one will ever be confronted with enough of them to reduce the variance to manageable levels. Extreme risks to humanity are an example: killer asteroids, rogue stars going supernova, unfriendly AI, nuclear war. Even totalling all these risks together, throwing in a few more exotic ones, and generously adding every single other decision of our existence, we are nowhere near a neat probability distribution tightly bunched around its mean.

 

To consider an analogous situation, imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical. In the same way, our decision when faced with a single planet-destroying event should not be constrained by the behaviour of a hypothetical being who confronts such events trillions of times over.
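
To put rough numbers on this, here is a small arithmetic sketch; the population figure is an assumption chosen so that the post's "slightly over twelve billion" makes the two totals match.

```python
# Illustrative arithmetic with an assumed population of six billion: the two
# projects have essentially the same total utility, but radically different
# distributions over individuals.
population = 6_000_000_000       # assumed figure, for illustration only
jackpot = 12_000_000_001         # "slightly over twelve billion" utils

project_1_total = population * 1                 # one util to every person
project_2_total = jackpot - (population - 1)     # jackpot to one person, -1 to everyone else

print(f"{project_1_total:,}")    # 6,000,000,000
print(f"{project_2_total:,}")    # 6,000,000,002
# The totals are essentially equal, so a total-utility maximiser is indifferent;
# what differs is that under the second project almost everyone ends up one
# util worse off while a single person gains enormously.
```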

 

So where does this leave us? The independence axiom of the von Neumann-Morgenstern utility formalism should be ditched, as it implies that large-variance distributions are identical to sums of low-variance distributions. This axiom should be replaced by a weaker version which reproduces expected utility in the limiting case of many distributions. Since there is no single rational path available, we need to fill the gap with other axioms – values – that reflect our genuine tolerance towards extreme risk. As when we first discovered probability distributions in childhood, we may need to pay attention to medians, modes, variances, skewness, kurtosis, or the overall shapes of the distributions. Pascal's mugger and his whole family can be confronted head-on rather than hoping the probabilities neatly cancel out.

 

In these extreme cases, exclusively following the expected value is an arbitrary decision rather than a logical necessity.

 

 

 

 

37 comments

"To consider an analogous situation, imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical."

That's not the way expected utility works. Utility is simply a way of assigning numbers to our preferences; states with bigger numbers are better than states with smaller numbers by definition. If outcome A has six billion plus a few utilons, and outcome B has six billion plus a few utilons, then, under whichever utility function we're using, we are indifferent between A and B by definition. If we are not indifferent between A and B, then we must be using a different utility function.

To take one example, suppose we were faced with the choice between A, giving one dollar's worth of goods to every person in the world, or B, taking one dollar's worth of goods from every person in the world, and handing thirteen billion dollars' worth of goods to one randomly chosen person. The amount of goods in the world is the same in both cases. However, if I prefer A to B, then U(A) must be larger than U(B), as this is just a different way of saying the exact same thing.

Now, if each person has a different utility function, and we must find a way to aggregate them, that is indeed an interesting problem. However, in that case, one must be careful to refer to the utility function of persons A, B, C, etc., rather than just saying "utility", as this is an exceedingly easy way to get confused.

To take one example, suppose we were faced with the choice between A, giving one dollar's worth of goods to every person in the world, or B, taking one dollar's worth of goods from every person in the world, and handing thirteen billion dollars' worth of goods to one randomly chosen person. The amount of goods in the world is the same in both cases. However, if I prefer A to B, then U(A) must be larger than U(B), as this is just a different way of saying the exact same thing.

Precisely. However, I noted that if you had to make the same decision a trillion trillion times, the utility of both options would be essentially the same. So your utility does not simply sum in the naive way if you allow distribution or variance issues into the equation.

You are right that utility does not sum linearly, but there are much less confusing ways of demonstrating this. E.g., the law of diminishing marginal utility: one million dollars is not a million times as useful as one dollar, if you are an average middle-class American, because you start to run out of high-utility-to-cost-ratio things to buy.

Standard utility does sum linearly. If I offer you two chances at one util, it's implicit that the second util may have a higher dollar value if you got the first.

This argument shows that utilities that care about fairness or about variance do not sum linearly.

If you hold lottery A once, and it has utility B, that does not imply that if you hold lottery A X times, it must have a total utility of X times B. In most cases, if you want to perform X lotteries such that every lottery has the same utility, you will have to perform X different lotteries, because each lottery changes the initial conditions for the subsequent lottery. E.g., if I randomly give some person a million dollars' worth of stuff, this probably has some utility Q. However, if I hold the lottery a second time, it no longer has utility Q; it now has utility Q - epsilon, because there's slightly more stuff in the world, so adding a fixed amount of stuff matters less. If I want another lottery with utility Q, I must give away slightly more stuff the second time, and even more stuff the third time, and so on and so forth.

This sounds like equivocation; yes, the amount of money or stuff required to be equally desirable may change over time, but that's precisely why we try to talk of utils. If there are X lotteries each delivering Y utils, why is the total value not X*Y?

If you define your utility function such that each lottery has identical utility, then sure. However, your utility function also includes preferences based on fairness. If you think that a one-billionth chance of doing lottery A a billion times is better than doing lottery A once on grounds of fairness, then your utility function must assign a different utility to lottery #658,168,192 than lottery #1. You cannot simultaneously say that the two are equivalent in terms of utility and that one is preferable to the other on grounds of X; that is like trying to make A = 3 and A = 4 at the same time.

This post could use some polish: it's not clear what the message is (not that it's impossible to discern it, but...), and how the paragraphs are related.

Also, "It would be peculiar to argue" is a poor argument.

Can you give any advice on improving it?

I don't like utility theory at all except for making small, fairly immediate choices; it is too much like the old joke about the physicist who says, "Assume a spherical cow...". If anyone could direct me to something that isn't vague and handwavey about converting real goals and desires to "utils", I would be interested. Until then, I am getting really tired of it.

In the same way, it's hopeless to try to assign probabilities to events and do a Bayesian update on everything. But you can still take advice from theorems like "Conservation of expected evidence" and the like. Formalisations might not be good for specifics, but they're good for telling you if you're going wrong in some more general manner.

I believe von Neumann and Morgenstern showed that you could ask people questions about ordinal preferences (would you prefer x to y) and from a number of such questions (if they're consistent), construct cardinal preferences - which would be turning real goals and desires into utils.

Haven't various psychological experiments shown that such self-reported preferences are usually inconsistent? (I've seen various refs and examples here on LW, although I can't remember one offhand...)

Oh, sure. (Eliezer has a post on specific human inconsistencies from the OB days.) But this is a theoretical result, saying we can go from specific choices - 'revealed preferences' - to a utility function/set of cardinal preferences which will satisfy those choices, if those choices are somewhat rational. Which is exactly what billswift asked for.

(And I'd note the issue here is not what do humans actually use when assessing small probabilities, but what they should do. If we scrap expected utility, it's not clear what the right thing is; which is what my other comment is about.)

Can you translate your complaint into a problem with the independence axiom in particular?

Your second example is not a problem of variance in final utility, but of aggregation of utility. Utility theory doesn't force "Giving 1 util to N people" to be equivalent to "Giving N utils to 1 person". That is, it doesn't force your utility U to be equal to U1 + U2 + ... + UN where Ui is the "utility for person i".

To be concrete, suppose you want to maximise the average utility people have, but you also care about fairness, so, all else being equal, you prefer the utilities to be clustered about their average. Then maybe your real utility function is not

U = (U[1] + ... + U[n])/n

but

U' = U - ((U[1]-U)^2 + ... + (U[n]-U)^2)/n

which is in some sense a mean minus a variance.
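
Here is a small sketch of that aggregation rule; the function, the variable names, and the two example worlds are illustrative, not taken from the comment.

```python
# Sketch of the fairness-adjusted aggregation above: mean of the individual
# utilities minus their variance, so more spread means lower overall utility.
def fairness_adjusted_utility(individual_utils):
    n = len(individual_utils)
    mean = sum(individual_utils) / n
    variance = sum((u - mean) ** 2 for u in individual_utils) / n
    return mean - variance

equal_world = [1.0] * 6                  # everyone gets one util
unequal_world = [13.0] + [-1.0] * 5      # one big winner, everyone else loses a util

print(fairness_adjusted_utility(equal_world))    # 1.0  (no spread, no penalty)
print(fairness_adjusted_utility(unequal_world))  # about -25.9, despite the larger mean
```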

Precisely the model I often have in mind (except I use the standard deviation, not the variance, as it is in the same units as the mean).

But let us now see the problem with the independence axiom. Replace expected utility with phi="expected utility minus half the standard deviation".

Then if A and B are two independent probability distributions, phi(A+B) >= phi(A) + phi(B) by Jensen's inequality, as the square root is a concave function. Equality happens only if the variance of A or B is zero.

Now imagine that B and C are identical distributions with non-zero variances, and that A has no variance with phi(A) = phi(B) = phi(C). Then phi(A+B) = phi(A) + phi(B) = phi(B) + phi(C) < phi(B+C), violating independence.

(if we use variance rather than standard deviation, we get phi(2B) < 2phi(B), giving similar results)
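
A quick numerical check of this violation; the specific lotteries A, B and C below are illustrative choices rather than anything specified in the comment.

```python
import math

def phi(mean, variance):
    """Expected value minus half the standard deviation."""
    return mean - 0.5 * math.sqrt(variance)

# B and C: identical independent lotteries, 50% chance of 0 and 50% chance of 2
# (mean 1, variance 1). A: a sure 0.5, chosen so phi(A) = phi(B) = phi(C) = 0.5.
mean_A, var_A = 0.5, 0.0
mean_B, var_B = 1.0, 1.0
mean_C, var_C = 1.0, 1.0

# For independent lotteries, means and variances add.
print(phi(mean_A + mean_B, var_A + var_B))        # phi(A+B) = 1.0
print(phi(mean_A, var_A) + phi(mean_B, var_B))    # phi(A) + phi(B) = 1.0
print(phi(mean_B + mean_C, var_B + var_C))        # phi(B+C) ~ 1.29 > 1.0
# Swapping the sure thing A for the equally-valued lottery C changes the value
# of the combined gamble, which is the violation described above.
```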

A and B are supposed to be distributions on possible outcomes, right? What is A+B supposed to mean here? A distribution with equal mixture of A and B (i.e. 50% chance of A happening and 50% chance of B happening), or A happening followed by B happening? It doesn't seem to make sense either way.

If it's supposed to be 50/50 mixture of A and B, then phi(A+B) could be less than phi(A) + phi(B). If it's A happening followed by B happening, then Independence/expected utility maximization doesn't apply because it's about aggregating utility between possible worlds, not utility of events within a possible world.

To be technical, A and B are random variables, though you can usefully think of them as generalised lotteries. A+B represents you being entered in both lotteries.

Hum, if this is causing confusion, it's no surprise that my overall post is obscure. I'll try taking it apart to rewrite it more clearly.

To be technical, A and B are random variables, though you can usefully think of them as generalised lotteries. A+B represents you being entered in both lotteries.

That has nothing to do with the independence axiom, which is about Wei Dai's first suggestion of a 50% chance of A and a 50% chance of B (and about unequal mixtures). I think your entire post is based on this confusion.

I did wonder what Stuart meant when he started talking about adding probability distributions together. In the usual treatment, a single probability distribution represents all possible worlds, yes?

Yes, the axioms are about preferences over probability distributions over all possible worlds and are enough to produce a utility function whose expectation produces those preferences.

I think your entire post is based on this confusion.

That's how it looks to me as well.

No, it isn't. I'll write another post that makes my position clearer, as it seems I've spectacularly failed with this one :-)

The 12-billion-utils example is similar to one I mention on this page under "What about Isolated Actions?" I agree that our decision here is ultimately arbitrary and up to us. But I also agree with the comments by others that this choice can be built into the standard expected-utility framework by changing the utilities. That is, unless your complaint is, as Nick suggests, with the independence axiom's constraint on rational preference orderings in and of itself (for instance, if you agreed -- as I don't -- that the popular choices in the Allais paradox should count as "rational").

No, I don't agree that the Allais paradox should count as rational - but I don't need to use the independence axiom to get to this. I'll re-explain in a subsequent post.

For an alternative to expected utility maximization that better describes the decisions actual humans make, see prospect theory by Kahneman and Tversky.

Ah, but I'm not looking for a merely descriptive theory, but for one that is also rational and logically consistent. And using prospect theory for every small decision in your life will leave you worse off than using expected utility for every small decision.

There's nothing wrong I can see about using prospect theory for the mega-risk decisions, though - I wouldn't do so, but there seems no logical flaw in the idea.

As when we first discovered probability distributions in childhood, we may need to pay attention to medians, modes, variances, skewness, kurtosis or the overall shapes of the distributions. Pascal's mugger and his whole family can be confronted head-on rather than hoping the probabilities neatly cancel out.

I would love to see the mugger dispelled, but part of the attraction of standard utility theory is that it seems very clean and optimal; is there any replacement axiom which convincingly deals with the low probability pathologies? Just going on current human psychology doesn't seem very good.

A thought on Pascal's Mugging:

One source of "the problem" seems to be a disguised version of unbounded payoffs.

Mugger: I can give you any finite amount of utility.

Victim: I find that highly unlikely.

Mugger: How unlikely?

Victim: 1/(really big number)

Mugger: Well, if you give me $1, I'll give you (really big number)^2 times the utility of one dollar. Then your expected utility is positive, so you should give me the money.

The problem here is that whatever probability you give, the Mugger can always just make a better promise. Trying to assign "I can give you any finite amount of utility" a fixed non-zero probability is equivalent to assigning "I can give you an infinite amount of utility" a fixed non-zero probability. It's sneaking an infinity in through the back door, so to speak.

It's also very hard for any decision theory to deal with the problem "Name any rational number, and you get that much utility." That's because there is no largest rational number; no matter what number you name, there is another number that it is better to name. We can even come up with a version that even someone with a bounded utility function can be stumped by; "Name any rational number less than ten, and you get that much utility." 9.9 is dominated by 9.99, which is dominated by 9.999, and so on. As long as you're being asked to choose from a set that doesn't contain its least upper bound, every choice is strictly dominated by some other choice. Even if all the numbers involved are finite, being given an infinite number of options can be enough to give decision theories the fits.

It's sneaking an infinity in through the back door, so to speak.

Yes, this is precisely my own thinking - in order to give any assessment of the probability of the mugger delivering on any deal, you are in effect giving an assessment on an infinite number of deals (from 0 to infinity), and if you assign a non-zero probability to all of them (no matter how low), then you wind up with nonsensical results.

Giving the probability beforehand looks even worse if you ignore the deal aspect and simply ask: what is the probability that anything the mugger says is true? (Since this includes as a subset any promises to deliver utils.) Since he could make statements about Turing machines or Chaitin's Omega etc., now you're into areas of intractable or undecidable questions!

As it happens, 2 or 3 days ago I emailed Bostrom about this. There was a followup paper to Bostrom's "Pascal's Mugging", also published in Analysis, by a Baumann, who likewise rejected the prior probability; but Baumann didn't have a good argument against it other than to say that any such probability is 'implausible'. Showing how infinities and undecidability get smuggled into the mugging shores up Baumann's dismissal.

But once we've dismissed the prior probability, we still need to do something once the mugger has made a specific offer. If our probability doesn't shrink at least as quickly as his offer increases, then we can still be mugged; if it shrinks exactly as quickly or even more quickly, we need to justify our specific shrinkage rate. And that is the perplexity: how fast do we shrink, and why?

(We want the Right theory & justification, not just one that is modeled after fallible humans or ad hocly makes the mugger go away. That is what I am asking for in the toplevel comment.)
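
For a concrete sense of why the shrinkage rate matters, here is a rough sketch; the decay rules below are illustrative assumptions, not anyone's actual proposal.

```python
# Rough sketch (the decay rules are illustrative assumptions): how the expected
# value of the mugger's promise behaves when the probability you assign shrinks
# slower than, exactly as fast as, or faster than the promised payoff grows.
for offer in (10**3, 10**6, 10**9, 10**12):
    slower = offer * offer ** -0.5    # probability ~ 1/sqrt(offer): shrinks too slowly
    matched = offer * (1.0 / offer)   # probability ~ 1/offer: shrinks exactly as fast
    faster = offer * offer ** -2.0    # probability ~ 1/offer^2: shrinks faster
    print(f"offer {offer:>13,}:  slower {slower:12.1f}   matched {matched:.1f}   faster {faster:.2e}")
# With slower-than-linear shrinkage the expectation grows without bound (you can
# still be mugged); with matched or faster shrinkage it stays bounded, but the
# choice of rate is exactly what needs justifying.
```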

Interesting thoughts on the mugger. But you still need a theory able to deal with it, not just an understanding of the problems.

For the second part, you can get a good decision theory for "Name any rational number less than ten, and you get that much utility" by charging a certain fraction of negutility for each digit of your answer; there comes a time when the time wasted adding extra '9's dwarfs the gain in utility. See Tolstoy's story How Much Land Does a Man Need? for a traditional literary take on this problem.

The "Name any rational number, and you get that much utility" problem is more tricky, and would be a version of the "it is rational to spend infinity in hell" problem. Basically, if your action (staying in hell, or specifying your utility) gives you more ultimate utility than you lose by doing so, you will spend eternity doing your utility-losing action, and never cash in on your gained utility.


I can give you any finite amount of utility.

All I want for Christmas is an arbitrarily large chunk of utility.

Do you maybe see a problem with this concept?

Replace expected utility by expected utility minus some multiple of the standard deviation, making that "some multiple" go to zero for oft-repeated situations.

The mugger won't be able to stand against that, as the standard deviation of his setup is huge.
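
To see roughly why, here is a sketch with made-up numbers for the promise, the probability assigned to it, and the multiplier k.

```python
import math

# Rough sketch with made-up numbers (promise size, probability, and k are all
# illustrative assumptions): how an EU - k*SD rule scores the mugger's offer.
N = 1e20      # promised utils
p = 1e-10     # probability you assign to the promise being honoured
k = 0.1       # risk-aversion multiplier for a one-off situation

mean = p * N - 1                    # expected gain from handing over 1 util
sd = N * math.sqrt(p * (1 - p))     # standard deviation of the payoff

print(mean - k * sd)
# The expectation (about 1e10) is dwarfed by k * SD (about 1e14), so the
# adjusted score is hugely negative and the mugger is refused.
```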

Then you would turn down free money. Suppose you try to maximize EU - k*SD.

I'll pick p < 1/2 * min(1, k^2), and offer you a bet in which you can receive 1 util with probability p, or 0 utils with probability (1-p). This bet has mean payout p and standard deviation sqrt[p(1-p)]. You have nothing to lose, but you would turn down this bet.

Proof:

p < 1/2, so (1-p) > 1/2, so p < k^2/2 < k^2(1-p)

Divide both sides by (1-p): p / (1-p) < k^2

Take the square root of both sides: sqrt[p / (1-p)] < k

Multiply both sides by sqrt[p(1-p)]: p < k*sqrt[p(1-p)]

Which is equivalent to: EU < k * SD

So EU - k*SD < 0
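
A quick numerical check of this construction, with an illustrative value of k:

```python
import math

# Numerical check of the construction above, with an illustrative k.
k = 0.1                  # risk-aversion multiplier
p = 0.004                # any p < 1/2 * min(1, k**2) = 0.005 will do

expected_utility = p                          # 1 util with probability p, else 0
standard_deviation = math.sqrt(p * (1 - p))

print(expected_utility - k * standard_deviation)
# ~ -0.0023: the adjusted score is negative, so an EU - k*SD maximiser turns
# down a bet that costs nothing and can only pay off.
```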

If k is tiny, this is only a minute chance of free money. I agree that it seems absurd to turn down that deal, but if the only cost of solving Pascal's mugger is that we avoid advantageous lotteries with such minute payoffs, it seems a cost worth paying.

But recall - k is not a constant; it is a function of how often the "situation" is repeated. In this context, "repeated situation" means another lottery with larger standard deviation. I'd guess I've faced over a million implicit lotteries with SD higher than k = 0.1 in my life so far.

We can even get more subtle about the counting. For any SD we have faced that is n times greater than the SD of this lottery, we add n to 1/k.

In that setup, it may be impossible for you to actually propose that free money deal to me (I'll have to check the maths - it certainly is impossible if we add n^3 to 1/k). Basically, the problem is that k depends on the SD, and the SD depends on k. As you diminish the SD to catch up with k, you further decrease k, and hence p, and hence the SD, and hence k, etc...

Interesting example, though; and I'll try and actually formalise an example of a sensible "SD adjusted EU" so we can have proper debates about it.

That seems pretty arbitrary. You can make the mugging go away by simply penalizing his promise of n utils with a probability of 1/n (or less); but just making him go away is not a justification for such a procedure - what if you live in a universe where an eccentric god will give you that many utilons if you win his cosmic lottery?