Would you prefer a 50% chance of gaining €10, a one-in-a-million chance of gaining €5 million, or a guaranteed €5? The standard position on Less Wrong is that the answer depends solely on the difference between cash and utility. If your utility scales less-than-linearly with money, you are risk averse and should choose the last option; if it scales more-than-linearly, you are risk-loving and should choose the second one. If we replaced €'s with utils in the example above, then it would simply be irrational to prefer one option over the others.
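To make the trade-off concrete, here is a small sketch comparing the three gambles by expected cash value and by expected utility. The square-root utility is only one stand-in for a less-than-linear utility function; it is an assumption made for illustration, not part of the original question.

```python
# Sketch: the three opening gambles, compared by expected cash value and by
# expected utility under one example of a concave (less-than-linear) utility.
# The square-root utility is an illustrative assumption only.

gambles = {
    "50% chance of €10":      [(0.5, 10), (0.5, 0)],
    "1-in-a-million of €5M":  [(1e-6, 5_000_000), (1 - 1e-6, 0)],
    "guaranteed €5":          [(1.0, 5)],
}

def expected(gamble, u=lambda x: x):
    """Expected value of u(payoff) over the gamble's (probability, payoff) pairs."""
    return sum(p * u(x) for p, x in gamble)

for name, gamble in gambles.items():
    ev = expected(gamble)                          # expected cash value
    eu = expected(gamble, u=lambda x: x ** 0.5)    # expected utility, concave example
    print(f"{name:24s}  E[cash] = {ev:.2f}   E[sqrt-utility] = {eu:.4f}")
```

All three options have the same expected cash value of €5; only the shape of the utility function separates them, and the concave one ranks the guaranteed €5 first.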
There are mathematical proofs of that result, but there are also strong intuitive arguments for it. What's the best way of seeing this? Imagine that X_1 and X_2 were two probability distributions, with means u_1 and u_2 and variances v_1 and v_2. If the two distributions are independent, then the sum X_1 + X_2 has mean u_1 + u_2 and variance v_1 + v_2.
Now if we multiply the returns of any distribution by a constant r, the mean scales by r and the variance scales by r². Consequently, if we have n probability distributions X_1, X_2, ..., X_n representing n equally expensive investments, the expected average return is (Σ_{i=1}^n u_i)/n, while the variance of this average is (Σ_{i=1}^n v_i)/n². If the v_i are bounded, then once we make n large enough, that variance must tend to zero. So if you have many investments, your averaged actual returns will be, with high probability, very close to your expected returns.
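A quick Monte Carlo sketch makes the scaling visible. The lognormal return distribution below is an arbitrary stand-in (it is skewed, which makes the point more vividly); only the 1/n shrinkage of the variance matters.

```python
# Sketch: the variance of the average of n independent investments falls roughly as 1/n.
# The lognormal return distribution is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(0)

def averaged_returns(n, trials=10_000):
    # trials portfolios, each averaging n independent draws of the same skewed return
    returns = rng.lognormal(mean=0.0, sigma=1.0, size=(trials, n))
    return returns.mean(axis=1)

for n in (1, 10, 100, 1000):
    avg = averaged_returns(n)
    print(f"n = {n:4d}:  mean of average ≈ {avg.mean():.3f}, "
          f"variance of average ≈ {avg.var():.4f}")
```

The mean of the average stays put while its variance falls roughly as 1/n, exactly as the formula above predicts when the v_i are all equal.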
Thus there is no better strategy than to always follow expected utility. There is no such thing as sensible risk-aversion under these conditions, as there is no actual risk: you expect your returns to be your expected returns. Even if you yourself do not have enough investment opportunities to smooth out the uncertainty in this way, you could always aggregate your own money with others, through insurance or index funds, and achieve the same result. Buying a triple-rollover lottery ticket may be unwise; but being part of a consortium that buys up every ticket for a triple rollover lottery is just a dull, safe investment. If you have altruistic preferences, you can even aggregate results across the planet simply by encouraging more people to follow expected returns. So, case closed it seems; departing from expected returns is irrational.
But the devil is in the detail of the condition 'once we make n large enough'. There are risk distributions so skewed that no one will ever be confronted with enough of them to reduce the variance to manageable levels. Extreme risks to humanity are an example: killer asteroids, rogue stars going supernova, unfriendly AI, nuclear war. Even totalling all these risks together, throwing in a few more exotic ones, and generously adding every single other decision of our existence, we are nowhere near a neat probability distribution tightly bunched around its mean.
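A toy calculation shows how slowly the averaging works for a heavily skewed risk. All the figures here are invented purely for illustration.

```python
# Sketch: a heavily skewed risk (tiny chance of a huge loss; all numbers invented).
# Even an enormous number of independent draws barely tames the variance of the average.
p_catastrophe = 1e-6      # probability of the bad outcome per draw
loss = 1e9                # size of the loss, in arbitrary units

mean = p_catastrophe * loss                             # expected loss per draw
var = p_catastrophe * (1 - p_catastrophe) * loss ** 2   # variance per draw

for n in (1, 10**3, 10**6, 10**9):
    std_of_average = (var / n) ** 0.5   # sqrt of (sum of variances)/n^2 for identical draws
    print(f"n = {n:>13,d}:  mean = {mean:.0f},  std of average = {std_of_average:,.1f}")
```

Even a million independent draws of this risk leave the standard deviation of the average as large as its mean, and nothing remotely like a million independent existential risks is on offer to smooth things over.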
To consider an analogous situation, imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical. In the same way, our decision when faced with a single planet-destroying event should not be constrained by the behaviour of a hypothetical being who confronts such events trillions of times over.
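To put rough numbers on it (the population figure is illustrative; any round value works), the two projects are deliberately constructed to have identical totals:

```python
# Toy arithmetic for the two projects (population figure chosen purely for illustration).
population = 6_000_000_001

# Project A: one util to each person, with certainty.
total_A = population

# Project B: a large windfall to one random person, one util taken from everyone else.
# For the totals to match exactly, the windfall must be 2*population - 1 utils
# ("slightly over twelve billion" with this population).
windfall = 2 * population - 1
total_B = windfall - (population - 1)

assert total_A == total_B   # the summed utils are identical ...
# ... but A is +1 for everyone with certainty, while B is -1 for everyone except
# one person: expected total utility cannot tell them apart, their shapes can.
print(total_A, total_B)
```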
So where does this leave us? The independence axiom of the von Neumann-Morgenstern utility formalism should be ditched, as it implies that large variance distributions are identical to sums of low variance distributions. This axiom should be replaced by a weaker version which reproduces expected utility in the limiting case of many distributions. Since there is no single rational path available, we need to fill the gap with other axioms – values – that reflect our genuine tolerance towards extreme risk. As when we first discovered probability distributions in childhood, we may need to pay attention to medians, modes, variances, skewness, kurtosis or the overall shapes of the distributions. Pascal's mugger and his whole family can be confronted head-on rather than hoping the probabilities neatly cancel out.
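For instance, one could summarize two candidate outcome distributions that share an expectation but nothing else. The distributions below are invented, and the sketch assumes numpy and scipy are available.

```python
# Sketch: two invented outcome distributions with the same expected value but very
# different shapes. The mean cannot tell them apart; the other statistics can.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

steady = rng.normal(loc=1.0, scale=0.1, size=1_000_000)          # tightly bunched
lottery = np.where(rng.random(1_000_000) < 1e-3, 1000.0, 0.0)    # 1-in-1000 jackpot

for name, x in [("steady", steady), ("lottery", lottery)]:
    print(f"{name:8s} mean={x.mean():6.3f}  median={np.median(x):6.1f}  "
          f"var={x.var():10.3f}  skew={stats.skew(x):7.2f}  "
          f"kurtosis={stats.kurtosis(x):8.1f}")
```

Both have an expected value of one; every other statistic says they are very different objects.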
In these extreme cases, exclusively following the expected value is an arbitrary decision rather than a logical necessity.
"To consider an analogous situation, imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical."
That's not the way expected utility works. Utility is simply a way of assigning numbers to our preferences; states with bigger numbers are better than states with smaller numbers by definition. If outcome A has six billion plus a few utilons, and outcome B also has six billion plus a few utilons, then, under whichever utility function we're using, we are indifferent between A and B by definition. If we are not indifferent between A and B, then we must be using a different utility function.
To take one example, suppose we were faced with the choice between A, giving one dollar's worth of goods to every person in the world, or B, taking one dollar's worth of goods from every person in the world, and handing thirteen billion dollars' worth of goods to one randomly chosen person. The amount of goods in the world is the same in both cases. However, if I prefer A to B, then U(A) must be larger than U(B), as this is just a different way of saying the exact same thing.
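As one concrete illustration of how such a preference can arise (not the only way): if, for simplicity, every person's utility were logarithmic in their wealth, the summed utility of A would exceed that of B even though the total goods are equal. The log utility, the baseline wealth, and the population figure below are all assumptions made purely for the sketch.

```python
# Sketch: with a concave per-person utility (log utility here -- an assumption for
# illustration, as are the baseline wealth and population), equal totals of goods
# do not give equal summed utility.
import math

population = 6_000_000_000
baseline = 10_000.0        # assumed per-person wealth, in dollars

# A: everyone gains $1.
U_A = population * math.log(baseline + 1)

# B: everyone loses $1, and one random person also receives $13 billion.
U_B = (population - 1) * math.log(baseline - 1) \
      + math.log(baseline - 1 + 13_000_000_000)

print(U_A > U_B)    # True: the summed log utility prefers A
print(U_A - U_B)    # size of the gap, in summed log-utility units
```

Jensen's inequality does the work here: spreading the same total thinly over many concave utilities yields more summed utility than concentrating it on one person.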
Now, if each person has a different utility function, and we must find a way to aggregate them, that is indeed an interesting problem. However, in that case, one must be careful to refer to the utility function of persons A, B, C, etc., rather than just saying "utility", as this is an exceedingly easy way to get confused.
"To take one example, suppose we were faced with the choice between A, giving one dollar's worth of goods to every person in the world, or B, taking one dollar's worth of goods from every person in the world, and handing thirteen billion dollars' worth of goods to one randomly chosen person. The amount of goods in the world is the same in both cases. However, if I prefer A to B, then U(A) must be larger than U(B), as this is just a different way of saying the exact same thing."
Precisely. However, I noted that if you had to make the same decision a trillion tri...