Would you prefer a 50% chance of gaining €10, a one-in-a-million chance of gaining €5 million, or a guaranteed €5? The standard position on Less Wrong is that the answer depends solely on the difference between cash and utility. If your utility scales less-than-linearly with money, you are risk-averse and should choose the last option; if it scales more-than-linearly, you are risk-loving and should choose the second one. If we replaced euros with utils in the example above, then it would simply be irrational to prefer one option over the others.
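To make the standard position concrete, here is a quick numerical sketch (my own illustration; the log1p utility is just one arbitrary concave choice): under linear utility all three options are worth exactly €5, while a risk-averse utility ranks the sure thing first and the long shot last.

```python
import math

# The three options as (probability, payoff-in-euros) pairs.
options = {
    "coin flip":  [(0.5, 10), (0.5, 0)],
    "long shot":  [(1e-6, 5_000_000), (1 - 1e-6, 0)],
    "sure thing": [(1.0, 5)],
}

def expected(lottery, utility=lambda x: x):
    """Expected utility of a lottery under a given utility function."""
    return sum(p * utility(x) for p, x in lottery)

for name, lottery in options.items():
    ev = expected(lottery)                           # linear utility: €5 each
    eu = expected(lottery, lambda x: math.log1p(x))  # a concave utility
    print(f"{name}: EV = €{ev:.2f}, expected log-utility = {eu:.4f}")
```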
There are mathematical proofs of that result, but there are also strong intuitive arguments for it. What’s the best way of seeing this? Imagine that X₁ and X₂ are two probability distributions, with means u₁ and u₂ and variances v₁ and v₂. If the two distributions are independent, then the sum X₁ + X₂ has mean u₁ + u₂ and variance v₁ + v₂.
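This additivity is easy to check numerically. A minimal sketch (my own, with arbitrarily chosen distributions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Two independent distributions with known means and variances.
x1 = rng.normal(2.0, 3.0, n_samples)    # u1 = 2, v1 = 9
x2 = rng.exponential(5.0, n_samples)    # u2 = 5, v2 = 25

s = x1 + x2
print(s.mean())  # ~7  (= u1 + u2)
print(s.var())   # ~34 (= v1 + v2)
```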
Now if we multiply the returns of any distribution by a constant r, the mean scales by r and the variance scales by r². Consequently, if we have n probability distributions X₁, X₂, …, Xₙ representing n equally expensive investments, the expected average return is (Σᵢ₌₁ⁿ uᵢ)/n, while the variance of this average is (Σᵢ₌₁ⁿ vᵢ)/n². If the vᵢ are bounded, then once we make n large enough, that variance must tend to zero. So if you have many investments, your averaged actual returns will be, with high probability, very close to your expected returns.
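Here is a small simulation of that shrinkage (again my own illustration, with arbitrary numbers): the variance of the average falls like 1/n.

```python
import numpy as np

rng = np.random.default_rng(0)

def averaged_return(n, trials=10_000):
    """Average of n independent investments, each with mean 1, variance 4."""
    returns = rng.normal(1.0, 2.0, size=(trials, n))
    return returns.mean(axis=1)

for n in (1, 10, 100, 1000):
    avg = averaged_return(n)
    print(f"n = {n:>4}: mean ≈ {avg.mean():.3f}, variance ≈ {avg.var():.5f}")
# The variance falls like 4/n: the average concentrates on its expectation.
```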
Thus there is no better strategy than to always follow expected utility. There is no such thing as sensible risk-aversion under these conditions, as there is no actual risk: you expect your returns to be your expected returns. Even if you yourself do not have enough investment opportunities to smooth out the uncertainty in this way, you could always aggregate your own money with others, through insurance or index funds, and achieve the same result. Buying a triple-rollover lottery ticket may be unwise; but being part of a consortium that buys up every ticket for a triple-rollover lottery is just a dull, safe investment. If you have altruistic preferences, you can even aggregate results across the planet simply by encouraging more people to follow expected returns. So, case closed, it seems: departing from expected returns is irrational.
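To see why the consortium faces essentially no risk, consider the arithmetic (purely illustrative numbers, not from any actual lottery): if every combination can be bought and the rolled-over jackpot exceeds the total ticket cost, the profit is locked in.

```python
# Purely illustrative numbers: a lottery with 14 million combinations,
# €1 per ticket, and a €20M rolled-over jackpot.
n_combinations = 14_000_000
ticket_price = 1
jackpot = 20_000_000

cost = n_combinations * ticket_price
# Buying every combination guarantees the jackpot (ignoring shared wins
# and smaller prizes), so the return has zero variance.
profit = jackpot - cost
print(f"Guaranteed profit: €{profit:,}")   # €6,000,000
```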
But the devil is in the detail: the condition ‘once we make n large enough’. There are risk distributions so skewed that no-one will ever be confronted with enough of them to reduce the variance to manageable levels. Extreme risks to humanity are an example: killer asteroids, rogue stars going supernova, unfriendly AI, nuclear war. Even totalling all these risks together, throwing in a few more exotic ones, and generously adding every single other decision of our existence, we are nowhere near a neat probability distribution tightly bunched around its mean.
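A sketch of how badly the averaging argument fails here (my own toy numbers): with a one-in-a-million catastrophe per decision, even ten thousand decisions almost never sample the tail, so the observed average tells you nothing about the true mean.

```python
import numpy as np

rng = np.random.default_rng(0)

p_catastrophe = 1e-6               # one-in-a-million chance per decision
loss = -1e9                        # enormous loss if it happens
true_mean = p_catastrophe * loss   # -1000 per decision

# A lifetime of decisions is nowhere near enough draws:
n_decisions = 10_000
draws = rng.choice([loss, 0.0], size=n_decisions,
                   p=[p_catastrophe, 1 - p_catastrophe])
print(f"true mean: {true_mean}, sample mean: {draws.mean()}")
# With overwhelming probability the catastrophe never appears in the
# sample, so the observed average (0) says nothing about the tail.
```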
To consider an analogous situation, imagine having to choose between a project that gave one util to each person on the planet, and one that handed slightly over twelve billion utils to a randomly chosen human and took away one util from everyone else. If there were trillions of such projects, then it wouldn’t matter what option you chose. But if you only had one shot, it would be peculiar to argue that there are no rational grounds to prefer one over the other, simply because the trillion-iterated versions are identical. In the same way, our decision when faced with a single planet-destroying event should not be constrained by the behaviour of a hypothetical being who confronts such events trillions of times over.
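The two projects are easy to compare numerically. A sketch (mine, assuming a world population of about 6.1 billion, which is where the ‘slightly over twelve billion’ comes from):

```python
import numpy as np

population = 6_100_000_000   # assumed world population

# Project A: one util to every person, with certainty.
a_mean, a_var = 1.0, 0.0

# Project B: ~12.2 billion utils to one random person, -1 to everyone else.
jackpot = 2 * population - 1        # makes the total match project A's
outcomes = np.array([jackpot, -1.0])
probs = np.array([1 / population, 1 - 1 / population])

b_mean = float((outcomes * probs).sum())
b_var = float(((outcomes - b_mean) ** 2 * probs).sum())
print(a_mean, a_var)   # 1.0, 0.0
print(b_mean, b_var)   # 1.0, ~2.4e10: same expectation, vastly different risk
```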
So where does this leave us? The independence axiom of the von Neumann-Morgenstern utility formalism should be ditched, as it implies that large-variance distributions are identical to sums of low-variance distributions. This axiom should be replaced by a weaker version which reproduces expected utility in the limiting case of many distributions. Since there is no single rational path available, we need to fill the gap with other axioms – values – that reflect our genuine tolerance towards extreme risk. As when we first discovered probability distributions in childhood, we may need to pay attention to medians, modes, variances, skewness, kurtosis, or the overall shapes of the distributions. Pascal's mugger and his whole family can be confronted head-on rather than hoping the probabilities neatly cancel out.
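Once expected value stops being the sole criterion, those shape statistics become decision-relevant. A quick sketch (mine; the lognormal is an arbitrary stand-in for a heavily skewed payoff distribution):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# A heavily right-skewed distribution: most mass near zero, rare huge payoffs.
samples = rng.lognormal(mean=0.0, sigma=2.0, size=1_000_000)

print(f"mean:     {samples.mean():.2f}")
print(f"median:   {np.median(samples):.2f}")   # far below the mean
print(f"variance: {samples.var():.2f}")
print(f"skewness: {stats.skew(samples):.2f}")
print(f"kurtosis: {stats.kurtosis(samples):.2f}")
# Two distributions can share a mean and still differ wildly on all of these.
```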
In these extreme cases, exclusively following the expected value is an arbitrary decision rather than a logical necessity.
Yes, this is precisely my own thinking - in order to give any assessment of the probability of the mugger delivering on any deal, you are in effect giving an assessment of an infinite number of deals (from 0 to infinity), and if you assign a non-zero probability to all of them (no matter how low), then you wind up with nonsensical results.
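A worked toy version of those nonsensical results (the numbers are mine and arbitrary): if your probability of delivery never falls below some fixed floor ε, the mugger just names a bigger number, and the expected value of his possible offers is unbounded.

```python
# Toy sketch: with a fixed probability floor eps, the mugger's expected
# offer is unbounded; he simply promises more utils.
eps = 1e-12   # an arbitrary floor, however tiny
for promised_utils in (10**6, 10**12, 10**18, 10**24):
    print(f"promise {promised_utils:.0e} utils -> EV {eps * promised_utils:.0e}")
# Summed over all the deals he could name, the expected value diverges.
```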
Giving the probability beforehand looks even worse if you ignore the deal aspect and simply ask: what is the probability that anything the mugger says is true? (Since this includes as a subset any promises to deliver utils.) Since he could make statements about Turing machines or Chaitin's Omega etc., now you're into areas of intractable or undecidable questions!
As it happens, 2 or 3 days ago I emailed Bostrom about this. There was a follow-up paper to Bostrom's "Pascal's Mugging", also published in Analysis, by one Baumann, who likewise rejected the prior probability; but Baumann didn't have a good argument against it beyond saying that any such probability is 'implausible'. Showing how infinities and undecidability get smuggled into the mugging shores up Baumann's dismissal.
But once we've dismissed the prior probability, we still need to do something once the mugger has made a specific offer. If our probability doesn't shrink at least as quickly as his offer increases, then we can still be mugged; if it shrinks exactly as quickly or even more quickly, we need to justify our specific shrinkage rate. And that is the perplexity: how fast do we shrink, and why?
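To make the regimes concrete, here is a toy sketch (mine, assuming a utility that grows linearly with the promised amount): a prior shrinking slower than the offer leaves you muggable, one shrinking exactly as fast makes every offer equally attractive, and one shrinking faster makes big offers negligible; nothing in the formalism itself picks the rate.

```python
# Per-offer expected value p(n) * u(n), with utility u(n) = n, under three
# illustrative shrinkage rates for the prior p(n).
rates = {
    "slower than the offer (p ~ 1/sqrt(n))": lambda n: n ** -0.5,
    "exactly as fast (p ~ 1/n)":             lambda n: n ** -1.0,
    "faster (p ~ 1/n^2)":                    lambda n: n ** -2.0,
}
for label, p in rates.items():
    evs = [p(n) * n for n in (10**3, 10**6, 10**9)]
    print(label, [f"{ev:.3g}" for ev in evs])
# Slower: EV grows with the offer, so you can still be mugged.
# Exactly as fast: every offer has the same EV. Faster: big offers vanish.
# The formalism alone does not tell us which rate is justified.
```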
(We want the Right theory & justification, not just one that is modeled on fallible humans or that makes the mugger go away in an ad hoc fashion. That is what I am asking for in the toplevel comment.)