The Allais paradox assumes that you believe the probabilities you're given. A supposedly 33/34 chance of $27,000 is worth less than a guarantee of $24,000, because that 1/34 chance might conceal a much larger chance of being conned. If you take the $24,000 offer and don't receive the money, you can cry foul; if you take the $27,000 offer, and the 1/34 chance turns out to be implemented with weighted dice, then you can't prove you were cheated. When choosing between a 33% and a 34% chance, that isn't a factor, because neither choice protects you.
Least Convenient Possible World: Either you do trust the odds (because it's, say, a regulated casino game), or the probability you estimate of this being a con works out so that it's an Allais situation. You can't avoid it in general!
This is a misapplication of the Least Convenient Possible World principle. Yes, it is possible to construct an Allais-like problem in the least convenient possible world. However, the evidence that the Allais paradox exists does not come from a world of your choosing, but from the worlds that survey-takers construct in their minds. You can't say "least convenient possible world!" when data has already been collected from an inconvenient one!
The researcher could take you to a large state-certified casino (which I think we can trust not to rig its games) and offer you two options: A) pay you $55 straight up, or B) place both a $30 bet on red and a $30 bet on black (this nets you $60 unless the ball lands on 0 or 00, so an 18/19 chance of $60).
She could also offer you other combinations of bets that add up to the second pair of Allais gambles.
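If it helps, here is a quick check of the arithmetic for that setup (a minimal sketch, assuming an American double-zero wheel with 38 pockets and even-money payouts on red and black):

```python
# Paired $30 bets on red and black: you walk away with $60 unless the ball
# lands on 0 or 00 (2 of the 38 pockets on an American wheel).
P_NOT_GREEN = 36 / 38          # = 18/19

sure_option = 55                           # option A: $55 straight up
bet_option_ev = P_NOT_GREEN * 60           # option B: 18/19 chance of $60

print(f"Option A: ${sure_option} with certainty")
print(f"Option B: {P_NOT_GREEN:.4f} chance of $60, expected value ${bet_option_ev:.2f}")
```

So the structure mirrors the first Allais pair: a sure thing against a slightly higher-expected-value gamble carrying a small chance of nothing.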
Do you predict that if an Allais experiment were done in this sort of trustworthy situation, the effect would disappear?
Yes, I do.
Well, there's only one thing to be done then. I'll be waiting at Caesar's Palace; you bring the experimental funds.
Anyhow, the primary reason I disagree with you is that most people just don't expect to be cheated outright in psychology experiments; again and again it's found that the majority of subjects trust the experimenters.
Take, for example, the study on guilt where volunteers signed up more often for a painful experiment if they thought they had broken an expensive machine, when in fact it was rigged to appear to break. You'd find different behavior if most of the subjects were suspicious at the outset.
I don't have the primary literature with me now, so this is from the Wikipedia article:
Allais asserted that, presented with the choice between 1A and 1B, most people would choose 1A, and presented with the choice between 2A and 2B, most people would choose 2B. This has been borne out in various studies involving hypothetical and small monetary payoffs, and recently with health outcomes.
You don't expect to be cheated in a hypothetical. You don't expect to be cheated by a doctor giving probabilities of different outcomes.
ETA: Here's an abstract, but the paper itself is gated.
ETA2: Paper!
Looking at the original 'Allais Paradox' post - under what theorem is the reduction of uncertainty by 100% equivalent to the reduction of uncertainty by 1/67th?
It takes energy to plan ahead - the energy required to plan around a 99%-certain outcome (you still need a contingency for the remaining 1%) is considerably more than the energy required to plan around a 100%-certain one. But there's no such difference in planning cost between 67% and 66% - those are functionally equivalent.
So, um, why is this result even slightly surprising?
Edit: Now, what would be interesting is the question of what decisions people make when the options are $24K with 94% probability versus $27K with 93% probability, and variants thereof where the reduction in uncertainty exactly balances out the increase in value.
Can we please stop misreading the results of Allais experiments? As I tried to explain on Awful Austrians, you simply cannot draw the standard conclusion from the experiment, and this was explained early on in the first OB thread by gray area.
To summarize: the choice you are making fundamentally changes depending on whether it's one-shot or repeated as many times as you want. The standard choice made by test subjects cannot be shown to be irrational unless and until you give them the choices repeatedly, in which case they actually get the expected return. Otherwise, there is no money pump, no loss, no opportunity for being cheated.
As conducted, the experiment gives the subjects two free lottery tickets. If getting two free lottery tickets is exploitation, I don't want to be empowered. (You can quote me on that.)
The money pump is but an illustration, not the one true definitive argument for the standard decision theory. For even if this particular Allais gamble isn't repeated, you're going to make many, many more decisions under uncertainty in your life (which job to take, what to study, where to live, &c.). Choosing the option with the highest expected utility (for whatever your utility function is) is the way you ensure optimal long-run outcomes; this remains true whether or not someone is constantly hanging around asking if you want a 34% chance of $24,000 or a 33% chance of $27,000.
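To make the long-run point concrete, here is a minimal simulation sketch (the repetition count and the seed are my own illustrative assumptions, and raw dollars stand in for utility):

```python
import random

random.seed(0)  # illustrative seed, just for reproducibility

def play_2a():
    # Allais gamble 2A: 34% chance of $24,000, otherwise nothing
    return 24_000 if random.random() < 0.34 else 0

def play_2b():
    # Allais gamble 2B: 33% chance of $27,000, otherwise nothing
    return 27_000 if random.random() < 0.33 else 0

N = 100_000  # assumed number of repeated decisions under uncertainty
avg_2a = sum(play_2a() for _ in range(N)) / N
avg_2b = sum(play_2b() for _ in range(N)) / N

print(f"Average per play, always choosing 2A: ${avg_2a:,.0f}  (expected $8,160)")
print(f"Average per play, always choosing 2B: ${avg_2b:,.0f}  (expected $8,910)")
```

Over enough decisions, the higher-expected-value policy pulls ahead; whether that translates into higher expected utility depends, of course, on the utility function you attach to money.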
But what this shows is that people do not necessarily have a single utility function for all circumstances. It's possible for someone to prefer A to B to C to A in situations where any binary choice of those excludes the others from immediate possibility, and the only reason to disallow this, as far as I can see, is to try to force the territory to fit the map.
I'm not sure what you mean by disallow. As a purely descriptive matter about how actually existing humans actually are: I agree, people don't have a single utility function for all circumstances; people don't have utility functions at all! As a normative matter--well, I just interpret this as meaning that humans are fairly stupid on an absolute scale. If it turns out that our deepest hearts' desires are contradictory when rigorously listed out, then this is an unspeakably horrible tragedy from our perspective--but what can I say? Something has to give; it's not up to us.
It's only a tragedy if it's otherwise possible to get everything we want... but actually getting what we want is a tragedy for humans anyway, so that's nothing worse. As for humans being stupid on an absolute scale, I don't necessarily disagree, but I don't think that examination of goals can tell you that. The only way to make a choice is by reference to a goal, so you can't rationally choose your goal(s).
Obviously, one solution is to actually construct a utility function in money, and apply it rigorously to all decisions. A function that is linear well below your net worth and logarithmic above it is usually a good place to start.
Are you talking about changes in wealth, or states of total net worth?
Also, if you are an extreme altruist, your utility potential will be an approximately linear function of your state of total net worth.
I was talking about changes in wealth -- states would just be a straightforward logarithm.
And yes, if you're giving to charity, your marginal utility on cash should just be whatever utilons the charity is purchasing, which shouldn't change much as you get wealthier
I can see why you suggest it, but straightforward logarithm isn't quite going to work, because of the behaviour around zero. I don't have a better suggestion off the top of my head.
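For what it's worth, here is a toy version of the kind of function being discussed (a minimal sketch; the particular functional form is my own assumption, not anything settled in this thread):

```python
import math

def utility_of_change(delta, net_worth):
    """Toy utility of a *change* in wealth: roughly linear when the change is
    small relative to net worth, logarithmic when it rivals it. Note it still
    blows up if a loss wipes out more than your entire net worth, which is a
    cousin of the behaviour-around-zero problem mentioned above."""
    return net_worth * math.log1p(delta / net_worth)

# Example: with $50,000 of net worth, compare the two Allais-sized prizes.
w = 50_000
for prize in (24_000, 27_000):
    print(f"${prize:,} is worth about {utility_of_change(prize, w):,.0f} utilons")
```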
Great post. It could be an interesting exercise to use techniques like this to infer utility functions on money. Generate and answer these sorts of questions until you exhibit somewhat stable preferences, tweak the dollar amounts, repeat, plot the results.
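Here's a rough sketch of what that exercise could look like (the question format, the made-up answers, and the log-utility family being fit are all my own illustrative assumptions):

```python
import math

# Each record: (probability p, prize X, your stated sure-money equivalent C),
# i.e. you judged "$C for certain" about as attractive as "$X with probability p".
answers = [
    (0.34, 24_000, 6_500),
    (0.33, 27_000, 6_800),
    (0.50, 10_000, 4_000),
    (0.90,  5_000, 4_300),
]

def fit_background_wealth(answers):
    """Find the background wealth w for which u(x) = log(w + x) best explains
    the stated certainty equivalents: p*u(X) + (1-p)*u(0) should equal u(C)."""
    def error(w):
        return sum((p * math.log(w + x) + (1 - p) * math.log(w)
                    - math.log(w + c)) ** 2
                   for p, x, c in answers)
    # crude grid search; a real attempt would use a proper optimiser
    return min(range(1_000, 200_001, 1_000), key=error)

print("Implied background wealth:", fit_background_wealth(answers))
```

Tweaking the dollar amounts and re-answering until the fitted curve stops moving is one way to get the "somewhat stable preferences" mentioned above.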
Very nice! I've tried a few simpler debiasing strategies like this in the past, particularly in order to try to measure absolute rather than relative differences in prices, but I've never articulated it properly. There's room for a series of articles along these lines.
The "winning nothing" is implied, and gets redundant here. Also, couldn't the math be aligned better to our current system (base ten)?
The Allais Paradox, though not actually a paradox, was a classic experiment which showed that decisions made by humans do not demonstrate consistent preferences. If you actually want to accomplish something, rather than simply feel good about your decisions, this is rather disturbing.
When something like the Allais Paradox is presented all in one go, it's fairly easy to see that the two cases are equivalent, and ensure that your decisions are consistent. But if I clone you right now, present one of you with gamble 1, and one of you with gamble 2, you might not fare so well. The question is how to consistently advance your own preferences even when you're only looking at one side of the problem.
Obviously, one solution is to actually construct a utility function in money, and apply it rigorously to all decisions. Logarithmic in your total net worth is usually a good place to start. Next you can assign a number of utilons to each year you live, a negative number to each day you are sick, a number for each sunrise you witness...
I would humbly suggest that a less drastic strategy might be to familiarize yourself with the ways you can transform a decision that, as far as decision theory is concerned, should make no difference, and actually get in the habit of applying these transformations to decisions you make in real life.
So, let us say that I present you with Allais Gamble #2: choose between A: 34% chance of winning $24,000, and 66% chance of winning nothing, and B: 33% chance of winning $27,000, and 67% chance of winning nothing.
Before snapping to a judgment, try some of the following transforms (a small sketch of the arithmetic follows the list):
Assume your decision matters:
The gamble, as given, contains lots of probability mass in which your decision will not matter one way or the other -- shave it off!
Two possible resulting scenarios:
A: $24,000 with certainty, B: 33/34 chance of $27,000
Or, less obviously: I spin a wheel with 67 notches, 34 marked A and 33 marked B. Choose A and win $24,000 if the wheel comes up A, nothing otherwise. Choose B and win $27,000 if the wheel comes up B, nothing otherwise.
Assume your decision probably doesn't matter:
Tiny movements away from certainty tend to be more strongly felt -- try shifting all your probabilities down and see how you feel about them.
A: 3.4% chance of winning $24,000, 96.6% chance of nothing. B: 3.3% chance of winning $27,000, 96.7% chance of nothing.
Convert potential wins into potential losses, and vice versa:
Suppose I simply give you the $24,000 today. You spend the rest of the day counting your bills and planning wonderful ways of spending it. Tomorrow, I come to you and offer you an additional $3,000, with the proviso that there is a 1/34 chance that you will lose everything.
(If 1/34 is hard to emotionally weight, also feel free to imagine a fair coin coming up heads five times in a row)
Or, suppose I give you the full $27,000 today, and tomorrow, a mugger comes, grabs $3,000 from your wallet, and then offers it back for a 1/34 shot at the whole thing.
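For concreteness, a minimal sketch of the probability arithmetic behind the first two transforms (the transforms are from the post above; the code is only an illustrative check, and the 0.1 scaling factor matches the 3.4%/3.3% example):

```python
# Allais Gamble #2, as stated above: (probability of winning, prize).
A = (0.34, 24_000)
B = (0.33, 27_000)

def assume_decision_matters(a, b):
    """Condition on the probability mass where the two choices can come apart.
    Removing the shared 66% of 'win nothing either way' turns A into a sure
    thing and B into a 33/34 shot."""
    relevant = max(a[0], b[0])
    return (a[0] / relevant, a[1]), (b[0] / relevant, b[1])

def assume_decision_probably_doesnt_matter(a, b, factor=0.1):
    """Shrink both win probabilities by the same factor."""
    return (a[0] * factor, a[1]), (b[0] * factor, b[1])

for label, (ta, tb) in [("original", (A, B)),
                        ("decision matters", assume_decision_matters(A, B)),
                        ("probably doesn't matter",
                         assume_decision_probably_doesnt_matter(A, B))]:
    ratio = (tb[0] * tb[1]) / (ta[0] * ta[1])
    print(f"{label:>24}: A = {ta[0]:.3f} of ${ta[1]:,}, "
          f"B = {tb[0]:.3f} of ${tb[1]:,}   (EV ratio B/A = {ratio:.3f})")
```

Both transforms leave the ratio of expected values untouched; what they change is how the gamble feels, which is exactly what the diagnostic below is probing.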
I'm not saying that there is one way of transforming a decision such that your inner Bayesian master will suddenly snap to attention and make the decision for you. This method is simply a diagnostic. If you make one of these transforms and find the emotional weight of the decision switching sides, something is going wrong in your reasoning, and you should fight to understand what it is before making a decision either way.