Followup to: A summary of Savage's foundation for probability and utility.
In 1961, Daniel Ellsberg, most famous for leaking the Pentagon Papers, published the decision-theoretic paradox which is now named after him 1. It is a cousin to the Allais paradox. They both involve violations of an independence or separability principle. But they go off in different directions: one is a violation of expected utility, while the other is a violation of subjective probability. The Allais paradox has been discussed on LW before, but when I do a search it seems that the first discussion of the Ellsberg paradox on LW was my comments on the previous post 2. It seems to me that from a Bayesian point of view, the Ellsberg paradox is the greater evil.
But I should first explain what I mean by a violation of expected utility versus subjective probability, and for that matter, what I mean by Bayesian. I will explain a special case of Savage's representation theorem, which focuses on the subjective probability side only. Then I will describe Ellsberg's paradox. In the next episode, I will give an example of how not to be Bayesian. If I don't get voted off the island at the end of this episode.
Rationality and Bayesianism
Bayesianism is often taken to involve the maximisation of expected utility with respect to a subjective probability distribution. I would argue this label only sticks to the subjective probability side. But mainly, I wish to make a clear division between the two sides, so I can focus on one.
Subjective probability and expected utility are certainly related, but they're still independent. You could be perfectly willing and able to assign belief numbers to all possible events as if they were probabilities. That is, your belief assignment obeys all the laws of probability, including Bayes' rule, which is, after all, what the -ism is named for. You could do all that, but still maximise something other than expected utility. In particular, you could combine subjective probabilities with prospect theory, which has also been discussed on LW before. In that case you may display Allais-paradoxical behaviour but, as we will see, not Ellsberg-paradoxical behaviour. The rationalists might excommunicate you, but it seems to me you should keep your Bayesianist card.
On the other hand, your behaviour could be incompatible with any subjective probability distribution, yet you could still maximise utility with respect to something other than subjective probability. In particular, when faced with known probabilities, you would maximise expected utility in the normal sense. So you cannot exhibit any Allais-paradoxical behaviour, because the Allais paradox involves only objective lotteries. But you may exhibit, as we will see, Ellsberg-paradoxical behaviour. I would say you are not Bayesian.
So a non-Bayesian, even the strictest frequentist, can still be an expected utility maximiser, and a perfect Bayesian need not be an expected utility maximiser. What I'm calling Bayesianist is just the idea that we should reason with our subjective beliefs the same way that we reason with objective probabilities. This also has been called having "probabilistically sophisticated" beliefs, if you prefer to avoid the B-word, or don't like the way I'm using it.
In a lot of what follows, I will bypass utility by only considering two outcomes. Utility functions are only unique up to a constant offset and a positive scale factor. With two outcomes, they evaporate entirely. The question of maximising expected utility with respect to a subjective probability distribution reduces to the question of maximising the probability, according to that distribution, of getting the better of the two outcomes. (And if the two outcomes are equal, there is nothing to maximise.)
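To make the reduction concrete, here is a minimal sketch (my own illustration, not from the original post; the numbers are arbitrary) showing that, with only two outcomes, ranking bets by expected utility is the same as ranking them by the subjective probability of winning.

```python
# Minimal sketch: with two outcomes, expected utility ranks bets exactly
# as P(win) does, since EU = u_lose + p * (u_win - u_lose) is increasing in p.
# The utility values are arbitrary placeholders; any u_win > u_lose behaves the same.
u_win, u_lose = 1.0, 0.0

def expected_utility(p_win):
    return p_win * u_win + (1 - p_win) * u_lose

p_bet_a, p_bet_b = 0.6, 0.4   # hypothetical subjective win probabilities for two bets

# Comparing expected utilities gives the same answer as comparing probabilities.
assert (expected_utility(p_bet_a) > expected_utility(p_bet_b)) == (p_bet_a > p_bet_b)
```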
And on the flip side, if we have a decision method for the two-outcome case, Bayesian or otherwise, then we can always tack on a utility function. The idea of utility is just that any intermediate outcome is equivalent to an objective lottery between better and worse outcomes. So if we want, we can use a utility function to reduce a decision problem with any (finite) number of outcomes to a decision problem over the best and worst outcomes in question.
Savage's representation theorem
Let me recap some of the previous post on Savage's theorem. How might we defend Bayesianism? We could invoke Cox's theorem. This starts by assuming possible events can be assigned real numbers corresponding to some sort of belief level on someone's part, and that there are certain functions over these numbers corresponding to logical operations. It can be proven that, if someone's belief functions obey some simple rules, then that person acts as if they were reasoning with subjective probability. Now, while the rules for belief functions are intuitive, the background assumptions are pretty sketchy. It is not at all clear why these mathematical constructs are requirements of rationality.
One way to justify those constructs is to argue in terms of choices a rational person must make. We imagine someone is presented with choices among various bets on uncertain events. Their level of belief in these events can be gauged by which bets they choose. But if we're going to do that anyway, then, as it turns out, we can just give some simple rules directly about these choices, and bypass the belief functions entirely. This was Leonard Savage's approach 3. To quote a comment on the previous post: "This is important because agents in general don't have to use beliefs or goals, but they do all have to choose actions."
Savage's approach actually covers both subjective probability and expected utility. The previous post discusses both, whereas I am focusing on the former. This lets me give a shorter exposition, and I think a clearer one.
We start by assuming some abstract collection of possible bets. We suppose that when you are offered two bets from this collection, you will choose one over the other, or express indifference.
As discussed, we will only consider two outcomes. So all bets have the same payout; the difference among them is just their winning conditions. It is not specified what it is that you win. But it is assumed that, given the choice between winning unconditionally and losing unconditionally, you would choose to win.
It is assumed that the collection of bets forms what is called a boolean algebra. This just means we can consider combinations of bets under boolean operators like "and", "or", and "not". Here I will use brackets to indicate these combinations. (A or B) is a bet that wins under the conditions that make either A win, or B win, or both win. (A but not B) wins whenever A wins but B doesn't. And so on.
If you are rational, your choices must, it is claimed, obey some simple rules. If so, it can be proven that you are choosing as if you had assigned subjective probabilities to bets. Savage's axioms for choosing among bets are 4:
- If you choose A over B, you shall not choose B over A; and, if you do not choose A over B, and do not choose B over C, you shall not choose A over C.
- If you choose A over B, you shall also choose (A but not B) over (B but not A); and conversely, if you choose (A but not B) over (B but not A), you shall also choose A over B.
- You shall not choose A over (A or B).
- If you choose A over B, then you shall be able to specify a finite sequence of bets C1, C2, ..., Cn, such that it is guaranteed that one and only one of the C's will win, and such that, for any one of the C's, you shall still choose (A but not C) over (B or C).
Rule 1 is a coherence requirement on rational choice. It requires your preferences to form a total pre-order. One objection to Cox's theorem is that levels of belief could be incomparable. This objection does not apply to rule 1 in this context because, as we discussed above, we're talking about choices of bets, not beliefs. Faced with choices, we choose. A rational person's choices must be non-circular.
Rule 2 is an independence requirement. It demands that when you compare two bets, you ignore the possibility that they could both win. In those circumstances you would be indifferent between the two anyway. The only possibilities that are relevant to the comparison are the ones where one bet wins and the other doesn't. So, you ought to compare A to B the same way you compare (A but not B) to (B but not A). Savage called this rule the Sure-thing principle.
Rule 3 is a dominance requirement on rational choice. It demands that you not choose something that cannot do better under any circumstance: whenever A would win, so would (A or B). Note that you might judge (B but not A) to be impossible a priori. So, you might legitimately express indifference between A and (A or B). We can only say it is never legitimate to choose A over (A or B).
Rule 4 is the most complicated. Luckily it's not going to be relevant to the Ellsberg paradox. Call it Mostly Harmless and forget this bit if you want.
What rule 4 says is that if you choose A over B, you must be willing to pay a premium for your choice. Now, we said there are only two outcomes in this context. Here, the premium is paid in terms of other bets. Rule 4 demands that you give a finite list of mutually exclusive and exhaustive events, and still be willing to choose A over B if we take any event on your list, cut it from A, and paste it to B. You can list as many events as you need to, but it must be a finite list.
For example, if you thought A was much more likely than B, you might pull out a die, and list the 6 possible outcomes of one roll. You would also be willing to choose (A but not a roll of 1) over (B or a roll of 1), (A but not a roll of 2) over (B or a roll of 2), and so on. If not, you might list the 36 possible outcomes of two consecutive rolls, and be willing to choose (A but not two rolls of 1) over (B or two rolls of 1), and so on. You could go to any finite number of rolls.
In fact, rule 4 is pretty liberal: it doesn't even demand that every event on your list be equiprobable, or even independent of the A and B in question. It just demands that the events be mutually exclusive and exhaustive. If you are not willing to specify some such list of events, then you ought to express indifference between A and B.
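As a concrete check of the die-roll example above, here is a small sketch (my own illustration, with made-up belief numbers) assuming the rolls are fair and independent of A and B. It shows that one roll works as a premium only if P(A) exceeds P(B) by more than 1/5, while two rolls only need a gap of more than 1/35.

```python
from fractions import Fraction

def premium_ok(p_a, p_b, n_outcomes):
    """True if, for every event C in a list of n equiprobable events independent
    of A and B, we would still choose (A but not C) over (B or C)."""
    p_c = Fraction(1, n_outcomes)
    # P(A but not C) = P(A)(1 - P(C));  P(B or C) = P(B)(1 - P(C)) + P(C)
    return p_a * (1 - p_c) > p_b * (1 - p_c) + p_c

p_a, p_b = Fraction(1, 2), Fraction(2, 5)   # hypothetical beliefs, a gap of 1/10

print(premium_ok(p_a, p_b, 6))    # False: one die roll needs a gap of more than 1/5
print(premium_ok(p_a, p_b, 36))   # True: two rolls only need a gap of more than 1/35
```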
If you obey rules 1-3, then that is sufficient for us to construct a sort of qualitative subjective probability out of your choices. It might not be quantitative: for one thing, there could be infinitesimally likely beliefs. Another thing is that there might be more than one way to assign numbers to beliefs. Rule 4 takes care of these things. If you obey rule 4 also, then we can assign a subjective probability to every possible bet, prove that you choose among bets as if you were using those probabilities, and also prove that it is the only probability assignment that matches your choices. And, on the flip side, if you are choosing among bets based on a subjective probability assignment, then it is easy to prove you obey rules 1-3, as well as rule 4 if the collection of bets is suitably infinite, for example if a fair die is available to bet on.
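To illustrate that flip side, here is a toy verification (my own sketch, with an arbitrary three-state probability assignment) that choices made by comparing subjective probabilities do obey rules 1-3.

```python
from fractions import Fraction
from itertools import chain, combinations

# Arbitrary illustrative state space and probabilities; any assignment works.
states = {"red": Fraction(3, 10), "green": Fraction(1, 2), "blue": Fraction(1, 5)}

def all_events(xs):
    """All subsets of the state space: the boolean algebra of bets."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))]

def prob(event):
    return sum((states[s] for s in event), Fraction(0))

def chooses_over(a, b):   # strict choice of bet a over bet b
    return prob(a) > prob(b)

events = all_events(states)
for a in events:
    for b in events:
        # Rule 1 (no reversals): never choose both A over B and B over A.
        assert not (chooses_over(a, b) and chooses_over(b, a))
        # Rule 2 (Sure-thing): compare A to B as (A but not B) to (B but not A).
        assert chooses_over(a, b) == chooses_over(a - b, b - a)
        # Rule 3 (dominance): never choose A over (A or B).
        assert not chooses_over(a, a | b)
        for c in events:
            # Rule 1 (no cycles): weak preference is transitive.
            if not chooses_over(a, b) and not chooses_over(b, c):
                assert not chooses_over(a, c)
print("rules 1-3 hold for probability-based choices")
```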
Savage's theorem is impressive. The background assumptions involve just the concept of choice, and no numbers at all. There are only a few simple rules. Even rule 4 isn't really all that hard to understand and accept. A subjective probability distribution appears seemingly out of nowhere. In the full version, a utility function appears out of nowhere too. This theorem has been called the crowning glory of decision theory.
The Ellsberg paradox
Let's imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. We will draw a ball from the urn at random. Let us bet on the colour of this ball. As above, all bets have the same payout. To be specific, let's say you get pie if you win, and a boot to the head if you lose. The first question is: do you prefer to bet that the colour will be red, or that it will be green? The second question is: do you prefer to bet that it will be (red or blue), or that it will be (green or blue)?
The most common response5 is to choose red over green, and (green or blue) over (red or blue). And that's all there is to it. Paradox! 6
|   | Red (30) | Green | Blue |                     |
|---|----------|-------|------|---------------------|
| A | pie      | BOOT  | BOOT | A is preferred to B |
| B | BOOT     | pie   | BOOT |                     |
| C | pie      | BOOT  | pie  | D is preferred to C |
| D | BOOT     | pie   | pie  |                     |

(Green and Blue together account for the other 60 balls.)
If choices were based solely on an assignment of subjective probability, then because the three colours are mutually exclusive, P(red or blue) = P(red) + P(blue), and P(green or blue) = P(green) + P(blue). Choosing red over green means P(red) > P(green), which would give P(red or blue) > P(green or blue); but choosing (green or blue) over (red or blue) means P(red or blue) < P(green or blue).
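A brute-force way to see this (just an illustrative sketch; the grid resolution is arbitrary) is to scan over possible probability assignments to the three colours and confirm that none makes both expressed choices come out as strict preferences.

```python
# Scan a grid of assignments P(red), P(green), P(blue) summing to 1 and look for
# one that rationalizes both "red over green" and "(green or blue) over (red or blue)".
steps = 100
found = False
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        p_red, p_green = i / steps, j / steps
        p_blue = 1 - p_red - p_green
        if p_red > p_green and p_green + p_blue > p_red + p_blue:
            found = True
print(found)   # False: the two strict preferences contradict each other
```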
Knowing Savage's representation theorem, we expect to get a formal contradiction from the 4 rules above plus the 2 expressed choices. Something has to give, so we'd like to know which rules are really involved. You can see that we are talking only about rule 2, the Sure-thing principle. It says we shall compare (red or blue) to (green or blue) the same way as we compare red to green.
This behaviour has been called ambiguity aversion. Now, perhaps this is just a cognitive bias. It wouldn't be the first time that people behave a certain way, but the analysis of their decisions shows a clear error. And indeed, when explained, some people do repent of their sins against Bayes. They change their choices to obey rule 2. But others don't. To quote Ellsberg:
...after rethinking all their 'offending' decisions in light of [Savage's] axioms, a number of people who are not only sophisticated but reasonable decide that they wish to persist in their choices. This includes people who previously felt a 'first order commitment' to the axioms, many of them surprised and some dismayed to find that they wished, in these situations, to violate the Sure-thing Principle. Since this group included L.J. Savage, when last tested by me (I have been reluctant to try him again), it seems to deserve respectful consideration.
I include myself in the group that thinks rule 2 is what should be dropped. But I don't have any dramatic (de-)conversion story to tell. I was somewhat surprised, but not at all dismayed, and I can't say I felt much if any prior commitment to the rules. And as to whether I'm sophisticated or reasonable, well never mind! Even if there are a number of other people who are all of the above, and even if Savage himself may have been one of them for a while, I do realise that smart people can be Just Plain Wrong. So I'd better have something more to say for myself.
Well, red obviously has a probability of 1/3. Our best guess, by the principle of indifference, is to assign probability 1/3 each to green and to blue as well. But our best guess is not necessarily a good guess. The probabilities we assign to red, and to (green or blue), are objective. We're guessing the probability of green, and of (red or blue). It seems wise to take this difference into account when choosing what to bet on, doesn't it? And surely it will be all the more wise when dealing with real-life, non-symmetrical situations where we can't even appeal to the principle of indifference.
Or maybe I'm just some fool talking jibba jabba. Against this sort of talk, the LW post on the Allais paradox presents a version of Howard Raiffa's dynamic inconsistency argument. This makes no reference to internal thought processes; it is a purely external argument about the decisions themselves. As stated in that post, "There is always a price to pay for leaving the Bayesian Way." 7 This is expanded upon in an earlier post:
Sometimes you must seek an approximation; often, indeed. This doesn't mean that probability theory has ceased to apply, any more than your inability to calculate the aerodynamics of a 747 on an atom-by-atom basis implies that the 747 is not made out of atoms. Whatever approximation you use, it works to the extent that it approximates the ideal Bayesian calculation - and fails to the extent that it departs.
Bayesianism's coherence and uniqueness proofs cut both ways ... anything that is not Bayesian must fail one of the coherency tests. This, in turn, opens you to punishments like Dutch-booking (accepting combinations of bets that are sure losses, or rejecting combinations of bets that are sure gains).
Now even if you believe this about the Allais paradox, I've argued that this doesn't really have much to do with Bayesianism one way or the other. The Ellsberg paradox is what actually strays from the Path. So, does God also punish ambiguity aversion?
Tune in next time8, when I present a two-outcome decision method that obeys rules 1, 3, and 4, and even a weaker form of rule 2. But it exhibits ambiguity aversion, in gross violation of the original rule 2, so that it's not even approximately Bayesian. I will try to present it in a way that advocates for its internal cognitive merit. But the main thing 9 is that, externally, it is dynamically consistent. We do not get booked, by the Dutch or any other nationality.
Notes
- Ellsberg's original paper is: Risk, ambiguity, and the Savage axioms, Quarterly Journal of Economics 75 (1961) pp 643-669
- Some discussion followed, in which I did rather poorly. Actually I had to admit defeat. Twice. But, as they say: fool me once, shame on me; fool me twice, won't get fooled again!
- Savage presents his theorem in his book: The Foundations of Statistics, Wiley, New York, 1954.
To compare to Savage's setup: for the two-outcome case, we deal directly with "actions" or equivalently "events", here called "bets". We can dispense with "states"; in particular we don't have to demand that the collection of bets be countably complete, or even a power-set algebra of states, just that it be some boolean algebra. Savage's axioms of course have a descriptive interpretation, but it is their normativity that is at issue here, so I state them as "you shall". Rules 1-3 are his P1-P3, and 4 is P6. P4 and P7 are irrelevant in the two-outcome case. P5 is included in the background assumption that you would choose to win. I do not call this normative, because the payoff wasn't specified.
Ellsberg originally proposed this just as a thought experiment, and canvassed various victims for their thoughts under what he called "absolutely non-experimental conditions". He used $100 and $0 instead of pie and a boot to the head. Which is dull of course, but it shouldn't make a difference10. The experiment has since been repeated under more experimental conditions. The experimenters also invariably opt for the more boring cash payouts.
- Some people will say this isn't "really" a paradox. Meh.
- Actually, I inserted "to pay". It wasn't in the original post. But it should have been.
- Sneak preview
- As a great decision theorist once said, "Stupid is as stupid does."
- ...or should it? Savage's rule P4 demands that it shall not. And the method I have in mind obeys this rule. But it turns out this is another rule that God won't enforce. And that's yet another post, if I get to it at all.
As kim0 notes, the choices make sense if you are risk-averse; the standard choices are the ones with the least variance. (This seems to be identified as "ambiguity aversion" rather than risk aversion, but I think it is clearly a sort of risk aversion.) However, risk aversion isn't even necessary to support choices similar to these.
Let me propose a different game that should give rise to the same choices.
Let's imagine there is an urn containing 90 balls. 30 of them are red, and the other 60 are either green or blue, in unknown proportion. Whatever color you specify, you get a dollar for each ball of that color that is in the urn. The first question is: A) do you prefer to pick red, or B) do you prefer to pick green? The second question is: C) do you prefer to pick (red or blue), or D) do you prefer to pick (green or blue)?
Here, it is clear that the expected values work out the same as above. Picking any one color has an expected payout of $30, and picking 2 colors has an expected payout of $60. However, option A is a sure $30, while option B is between $0 and $60. Option C is between $30 and $90, while option D is a sure $60.
If you have the usual diminishing marginal utility over money, then A and D are actually the rational choices for expected utility maximization, without even taking into account risk/ambiguity aversion.
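Here is a quick sketch of that claim (my own illustration, not from the comment): assuming a uniform prior over the number of green balls and square-root utility as the concave example, the sure payouts A and D beat B and C, by Jensen's inequality.

```python
import math

# Many-ball game: 30 red balls, n green, 60 - n blue, $1 per ball of the chosen colour(s).
# Assumptions for illustration: uniform prior over n in 0..60, square-root utility.
def u(dollars):
    return math.sqrt(dollars)

ns = range(61)
eu_A = u(30)                                   # red: a sure $30
eu_B = sum(u(n) for n in ns) / len(ns)         # green: $n, n unknown
eu_C = sum(u(90 - n) for n in ns) / len(ns)    # red or blue: $(30 + (60 - n))
eu_D = u(60)                                   # green or blue: a sure $60

print(eu_A > eu_B, eu_D > eu_C)   # True True: the sure payouts win under concave utility
```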
But you're extracting one ball. AFAICT, for any probability distribution such that P(there are n blue balls and 60 - n green balls) = P(there are 60 - n blue balls and n green balls) for all n, and for any utility function, E(utility|I bet “red”) equals E(utility|I bet “green”) and E(utility|I bet “red or blue”) equals E(utility|I bet “green or blue”). (The only values of your utility function that matter are those for “you win the bet” and “you lose the bet”, and given that utility functions are only defined up to positive-scale affine transformations, you can normalize them to 1 and 0 respectively.)
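A quick numeric check of that symmetry claim (a sketch with an arbitrary prior over the urn's composition, symmetrized as described):

```python
import random

# One ball is drawn from the urn: 30 red, n green, 60 - n blue.
# Take an arbitrary prior over n and symmetrize it so P(n green) = P(60 - n green).
random.seed(0)
weights = [random.random() for _ in range(61)]
weights = [(weights[n] + weights[60 - n]) / 2 for n in range(61)]
total = sum(weights)
prior = [w / total for w in weights]

def p_win(colours):
    """Probability that the drawn ball's colour is in the chosen set of colours."""
    return sum(prior[n] * (30 * ("red" in colours) + n * ("green" in colours)
                           + (60 - n) * ("blue" in colours)) / 90
               for n in range(61))

print(abs(p_win({"red"}) - p_win({"green"})) < 1e-12)                    # True
print(abs(p_win({"red", "blue"}) - p_win({"green", "blue"})) < 1e-12)    # True
```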