19 January 2008 03:05AM

Choose between the following two options:

1A. $24,000, with certainty.
1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

Which seems more intuitively appealing?  And which one would you choose in real life?

Now which of these two options would you intuitively prefer, and which would you choose in real life?

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.
2B. 33% chance of winning $27,000, and 67% chance of winning nothing.

The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953.  I've modified it slightly for ease of math, but the essential problem is the same:  Most people prefer 1A > 1B, and most people prefer 2B > 2A.  Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.

This is a problem because the 2s are equal to a one-third chance of playing the 1s.  That is, 2A is equivalent to playing gamble 1A with 34% probability, and 2B is equivalent to playing 1B with 34% probability.
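A quick arithmetic check of that equivalence (an editorial note, not part of the original post):

$$
0.34 \times 1 = 0.34, \qquad 0.34 \times \tfrac{33}{34} = 0.33, \qquad 0.34 \times \tfrac{1}{34} + 0.66 = 0.67,
$$

so a 34% chance of playing 1A is exactly gamble 2A, and a 34% chance of playing 1B gives $27,000 with probability 33% and nothing with probability 67%, which is exactly gamble 2B.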

Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility, is the Axiom of Independence:  If X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to P chance of Y and (1 - P) chance of Z.

All the axioms are consequences, as well as antecedents, of a consistent utility function.  So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes.  And indeed, you can't simultaneously have:

• U($24,000) > 33/34 U($27,000) + 1/34 U($0)
• 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

These two equations are algebraically inconsistent, regardless of U, so the Allais Paradox has nothing to do with the diminishing marginal utility of money.

Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology. This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades. Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life. (How naive, how foolish, how simplistic is Bayesian decision theory...)

Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?
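To see the inconsistency explicitly, here is a short derivation (an editorial note, not part of the original post): multiply the first preference by 0.34 and add 0.66 U($0) to both sides:

$$
\begin{aligned}
U(\$24{,}000) &> \tfrac{33}{34}\,U(\$27{,}000) + \tfrac{1}{34}\,U(\$0)\\
\Rightarrow\quad 0.34\,U(\$24{,}000) &> 0.33\,U(\$27{,}000) + 0.01\,U(\$0)\\
\Rightarrow\quad 0.34\,U(\$24{,}000) + 0.66\,U(\$0) &> 0.33\,U(\$27{,}000) + 0.67\,U(\$0),
\end{aligned}
$$

which is the exact opposite of the second preference, whatever values U takes.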

(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it was a directly perceived fact that "1A is superior to 1B".  Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)

"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?"  Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern.  Yet who says that things must be neat and tidy?

Why fret about elegance, if it makes us take risks we don't want?  Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc.  Okay, but why do we have to do that?  Why not make up more palatable rules instead?

There is always a price for leaving the Bayesian Way.  That's what coherence and uniqueness theorems are all about.

In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning.  You become a money pump.

Suppose that at 12:00PM I roll a hundred-sided die.  If the die shows a number greater than 34, the game terminates.  Otherwise, at 12:05PM I consult a switch with two settings, A and B.  If the setting is A, I pay you $24,000.  If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference.  The switch starts in state A.  Before 12:00PM, you pay me a penny to throw the switch to B.  The die comes up 12.  After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.

I have taken your two cents on the subject.
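Here is a minimal simulation sketch of that pump (an editorial illustration; the one-cent fees and the helper names are assumptions for the example, not from the post):

```python
import random

# Editorial sketch of the two-cent pump described above.
# The agent's stated preferences: 2B over 2A before the first roll, 1A over 1B once
# it knows it gets to play, and it will pay one cent to indulge each preference.

def run_once(rng: random.Random):
    fees = 0
    switch = "A"                      # the switch starts in state A

    # Before 12:00 the whole game is gamble 2A (switch at A) vs. 2B (switch at B);
    # preferring 2B, the agent pays a penny to throw the switch to B.
    fees += 1
    switch = "B"

    if rng.randint(1, 100) > 34:      # hundred-sided die: game terminates
        return 0, fees

    # Between 12:00 and 12:05 the remaining choice is exactly 1A vs. 1B;
    # preferring 1A, the agent pays a penny to throw the switch back to A.
    fees += 1
    switch = "A"

    # Payoffs in cents, so they share units with the fees.
    payoff = 2_400_000 if switch == "A" else (2_700_000 if rng.randint(1, 34) != 34 else 0)
    return payoff, fees

rng = random.Random(0)
runs = [run_once(rng) for _ in range(100_000)]
print("average pennies paid per game:", sum(f for _, f in runs) / len(runs))  # ~1.34
# Whenever the game continues, the agent has paid two cents to leave the switch
# exactly where it started.
```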

If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...

(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)

Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine.  Econometrica, 21, 503-46.

Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47, 263-92.

Comment author: 19 January 2008 03:25:27AM 14 points [-]

For $24,000, you can have my two cents. ;)

Comment author: 19 January 2008 03:37:26AM 6 points [-]

Yes, philosophers, and others, do often too easily accept the advice of strong intuitions, forgetting that strong intuitions often conflict in non-obvious ways.

Comment author: 27 February 2012 05:17:40AM *  2 points [-]

Yes, exactly. For instance, many philosophers invoke Parfit's "repugnant conclusion" as a decisive objection to certain forms of consequentialism, overlooking the fact that all moral theories, when applied to scenarios involving different numbers of people, have implications that are arguably similarly repugnant.

Comment author: 19 January 2008 03:39:10AM 4 points [-]

The idea is that the $ amount equals your utility, while in reality the history of how you got this amount also matters (regret, emotions, etc.).

There's no paradox here - as your utility expressed in $ just doesn't match the utility of the subjects. As for the money pump - you just have a win-win situation - you earn money, and the subjects earn good feelings.

Comment author: 19 January 2008 03:45:46AM 6 points [-]

If I knew the offer wouldn't be repeated, I might take 1A because I'd really rather not have to explain to people how I lost $24,000 on a gamble.

Comment author: 10 December 2011 07:32:20AM 0 points [-]

This was my thought exactly. If I was given the option to keep the result private if I lost, 1A would be a distinctly preferable choice. If I had a 1/34 chance of having to explain how I "lost" $24,000 vs an average loss of $2,200, I might well take choice 1B (at a later time in my life, when I could afford to lose $2,200, and had significant financial risk from being perceived as a risk-taker with money).

Comment author: 19 January 2008 03:48:34AM 15 points [-]

Actually, that makes me think of another explanation besides overreaction to small probabilities: if a person takes 1B and loses, they know they would have won if they'd chosen differently. If they take 2B and lose, they can tell themselves (and others) they probably would have lost anyway.

Comment author: 17 December 2012 01:15:10AM 2 points [-]

OK, that is exactly my line of thinking, and why I can't understand the broader point of this argument. Yes, I can see the statistical similarity that makes it "the same" - but the situation is totally different in that one offers "certain win or risk" and the other is "risk vs risk" with a barely noticeable difference between them. So my decision on both questions goes like this: 1A > 1B because even if I was offered MUCH less, I'd still likely take it, deciding that I'm not greedy - free money always feels good, but giving away free money (by trying to get a bit more) always feels foolish and greedy. 2B > 2A because if the statistic played out over 100 times, the average person would think the two were of equal value - unless they logged the statistics to find the slight difference. Therefore, if it takes that much attention to feel the difference, it's easy to pretend they are the same risk - but one is 11.12% more money, which is a lot easier to notice without logging statistics. I don't see how these decisions conflict with each other.

Comment author: 19 January 2008 03:53:17AM 1 point [-]

A bird in the hand... Certainty is a form of utility, too.

Comment author: 28 October 2011 12:33:48AM 0 points [-]

That goes hand in hand with his comments about complexity. The straightforward expected utility analysis doesn't include the cost of the analysis into the analysis. Nor the increased cost to all subsequent analyses for the uncertainty. We have limited computational power for executive functions. No doubt we have utility built into us to conserve those limited resources. Most people hate uncertainty and thinking, and they hate it much more than we do. I doubt I'm the only one here who has noticed that.

Comment author: 28 October 2011 01:23:06AM -1 points [-]

For me, the choice between 1A and 1B would depend on how badly I needed the money, which is why I disagree with Eliezer when he says that "marginal utility of the money doesn't count". For example, let's say I needed $20,000 in order to keep a roof over my head, food on my plate, and to generally survive. In this case, my penalty for failure is quite high, and IMO it would be more rational for me to take 1A. Sure, I could win more money if I picked 1B, but I could also die in that case. Thus, my utility in case of 1B would be something like

33/34 U($27,000, alive) + 1/34 U($0, dead)

and U($anything, dead) is a very negative number. On the other hand, if I was a billionaire who makes $20,000 per second just by existing, then I would either pick 1B, or refuse to play the game altogether, because my time could be better spent on other things.

Comment author: 28 October 2011 01:57:07AM 3 points [-]

The paradox is that, if you need the 20k to survive, then you should prefer 2A to 2B, because the extra 3k 33% of the time doesn't outweigh an additional 1% chance of dying.

If someone prefers A in both cases, or B in both cases, they can have a consistent utility function. When someone prefers A in one case, and B in the other, then they cannot have a consistent utility function.
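A quick check of that claim (an editorial sketch; the two utility functions are arbitrary examples): a sufficiently concave U prefers A in both gambles and a linear U prefers B in both, but no assignment of U can produce the mixed 1A/2B pattern.

```python
import math

# Editorial sketch: expected utility under two example utility functions.
# Either one yields consistent preferences (A in both gambles, or B in both).

def expected_utility(u, outcomes):
    """outcomes: list of (probability, dollars) pairs."""
    return sum(p * u(x) for p, x in outcomes)

gambles = {
    "1A": [(1.0, 24_000)],
    "1B": [(33/34, 27_000), (1/34, 0)],
    "2A": [(0.34, 24_000), (0.66, 0)],
    "2B": [(0.33, 27_000), (0.67, 0)],
}

for name, u in [("concave (log)", lambda x: math.log(x + 1)),
                ("linear", lambda x: x)]:
    eu = {g: expected_utility(u, o) for g, o in gambles.items()}
    print(name, "prefers",
          "1A" if eu["1A"] > eu["1B"] else "1B", "and",
          "2A" if eu["2A"] > eu["2B"] else "2B")
# concave (log) prefers 1A and 2A
# linear prefers 1B and 2B
```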

Comment author: 28 October 2011 02:17:54AM 0 points [-]

Right, I didn't mean to imply that it was. But Eliezer seemed to be saying that picking 1A is irrational in general, in addition to the paradox, which is the notion that I was disputing. It's possible that I misinterpreted him, however.

Comment author: 28 October 2011 04:26:49AM 3 points [-]

He makes it clearer in comments.

What Caledonian is discussing is the certainty effect - essentially, having a term in your utility function for not having to multiply probabilities to get an expected value. That's different from risk aversion, which is just a statement that the utility function is concave.

Comment author: 19 January 2008 04:13:09AM 4 points [-]

Risk and cost of capital introduce very strange twists on expected utility.

Assume that living has a greater expected utility to me than any monetary value. If I need a $20,000 operation within the next 3 hours to live, I have no other funding, and you make me offer 1, it is completely rational and unbiased to take option 1A. It is the difference between a 100% chance of living and a 97% chance of living. If I have $1,000,000,000 in the bank and command of legal or otherwise armed forces, I may just have you killed - for I would not tolerate such frivolous philosophizing.

Comment author: 19 January 2008 04:29:50AM 2 points [-]

I think defenses of the subjects' choices by recourse to nonmonetary values are missing the point. Anything can be rational with a sufficiently weird utility function. The question is, if subjects understood the decision theory behind the problem, would they still make the same choice? After seeing a valid argument that your preferences make you a money pump, you certainly could persist in your original judgment, by insisting that your feelings make your first judgment the right one.

But seriously?---why?

Comment author: 19 January 2008 04:47:48AM 0 points [-]

Since people only make a finite number of decisions in their lifetime, couldn't their utility function specify every decision independently? (You could have a utility function that is normal except that it says that everything you hear being called 1A is preferable to 1B, and anything you hear being called 2B is preferable to 2A. If this contradicts your normal utility function, this rule is always more important. Even if 2B leads to death, you still choose 2B.)

The utility function would be impossible to come up with in advance, but it exists.

Comment author: 19 January 2008 05:00:39AM 3 points [-]

My intuitions match the stated naive intuitions, but I reject your assertion that the pair of preferences are inconsistent with Bayesian probability theory.

You really underestimate the utility of certainty. "Nainodelac and Tarleton Nick"'s example in these comments about the operation is a perfect counter.

With a 33% vs. 34% chance, the impact on your life is about the same, so you just do the straightforward probability calculation for expected value and take the maximum.

But when offered 100% of some positive outcome, vs. a probability of nothing, it seems perfectly rational to prefer the guarantee. Maximizing expected dollar winnings is not necessarily the same as maximizing utility. And you're right, the issue isn't decreasing returns. But the issue _is_ the cost of risk.

Your money pump doesn't convince me either. I'd be happy to pay the two cents, both times, and not regret the cost at the end, just as I don't regret paying for insurance even if I happen not to get sick.

Comment author: 19 January 2008 05:25:39AM 2 points [-]

Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B.

I don't understand why I would pay you a penny to throw the switch before 12:00?

Comment author: 19 January 2008 05:30:25AM 1 point [-]

Since I know myself, I know what I will do after midnight (pay to switch it to A*), and so I resign myself to doing it immediately (i.e., leaving the switch at A) so as to save either one cent or two, depending on what happens. I will do this even if I share Don's intuition about certainty. Why pay before midnight to switch it to B if I know that after midnight I will pay to switch it back to A*?

*[if the first die comes up 1 to 34]

Comment author: 19 January 2008 06:00:24AM 0 points [-]

I think I missed something on the algebraic inconsistency part...

If there is some rational independent utility to certainty, the algebraic claims should be more like this:

* U($24,000) + U(Certainty) > 33/34 U($27,000) + 1/34 U($0)
* 0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

This seems consistent so long as U(Certainty) > 1/34 U($27,000).

I'm not committed to the notion there is a rational independent value to certainty, I'm just not seeing how it can be dismissed with quick algebra. Maybe that wasn't your goal. Forgive me if this is my oversight.

Comment author: 19 January 2008 06:21:40AM 0 points [-]

This reminds me of the foolish decisions on "deal or no deal". People would fail to follow their own announced utility.

Comment author: 19 January 2008 06:32:50AM 4 points [-]

When we speak of an inherent utility of certainty, what do we mean by certainty? An actual probability of unity, or, more reasonably, something which is merely very much certain, like probability .999? If the latter, then there should exist a function expressing the "utility bonus for certainty" as a function of how certain we are. It's not immediately obvious to me how such a function should behave. If probability 0.9999 is very much more preferable to probability 0.8999 than probability 0.5 is preferable to probability 0.4, then is 0.5 very much more preferable to 0.4 than 0.2 is to 0.1?
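One way to make that question concrete (an editorial sketch, not something proposed in the thread) is the inverse-S probability-weighting function from prospect theory, which concentrates the extra decision weight near the endpoints rather than spreading it uniformly; the gamma value below is Tversky and Kahneman's published estimate for gains, used here purely as an illustration.

```python
# Editorial sketch: an inverse-S probability-weighting function of the
# prospect-theory family, with gamma = 0.61 as an illustrative parameter.

def weight(p: float, gamma: float = 0.61) -> float:
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for lo, hi in [(0.8999, 0.9999), (0.4, 0.5), (0.1, 0.2)]:
    print(f"w({hi}) - w({lo}) = {weight(hi) - weight(lo):.3f}")
# With this weighting, the jump from .8999 to .9999 gains roughly 0.28 of
# decision weight, far more than .4 -> .5 (about 0.05); the .1 -> .2 jump
# (about 0.07) actually gets slightly more weight than .4 -> .5, because the
# distortion is concentrated near both endpoints rather than in the middle.
```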

Comment author: 19 January 2008 06:52:41AM 0 points [-]

It's rational to take the certain outcome if gambling causes psychological stress. Notwithstanding that stress is intrinsically unpleasant, it increases your risk of peptic ulcers and stroke, which could easily cancel out the expected gain.

Comment author: 15 January 2012 07:37:24PM 0 points [-]

But such psychological stress arises from your perception of reality. If it is caused by an erroneous perception of reality, then the rational thing to do is correct your perception, not take the error for granted. If you are certain that you made the right decision, then you shouldn't feel stressed when you "lose".

Comment author: 19 January 2008 07:08:52AM -2 points [-]

If you crunch the numbers differently, you can come to different conclusions. For example, if I choose 1B over 1A, I have a 1 in 34 chance of getting burned. If I choose 2B over 2A, my chance of getting burned is only 1 in 100.

Comment author: 19 January 2008 07:15:15AM 0 points [-]

James D. Miller has a proposal for Lottery Tickets that Usually Pay Off.

Robin, were you thinking of a certain colleague of yours when you mentioned accepting intuition too readily?

Comment author: 19 January 2008 08:36:22AM -1 points [-]

Risk aversion, and the degree to which it is felt, is a personality trait with high variance between individuals and over the lifespan. To ignore it in a utility calculation would be absurd. Maurice Allais should have listened to his homonym Alphonse Allais (no apparent relation), humorist and theoretician of the absurd, who famously remarked "La logique mène à tout à condition d'en sortir". Logic leads to everything, on condition it don't box you in.

Comment author: 19 January 2008 09:20:38AM 1 point [-]

I confess, the money pump thing sometimes strikes me as ... well... contrived. Yes, in theory, if one's preferences violate various rules of rationality (acyclicity being the easiest), one could conceivably be money-pumped. But, uh, it never actually happens in the real world. Our preferences, once they violate idealized axioms, lead to messes in highly unrealistic situations. Big deal.

Comment author: 19 January 2008 11:02:34AM 1 point [-]

I am intuitively certain that I'm being money-pumped all the time. And I'm very, very certain that transaction costs of many forms money-pump people left and right.

Comment author: 19 January 2008 11:09:51AM -1 points [-]

As long as it was only one occasion, I wouldn't make the effort to cross the room for two pennies. If I'm playing the game just once, and I feel a one-off payment of 2p tends to zero, I'll play with you, sure. £1 for a lottery ticket crosses the threshold of palpability, even playing once. I can get a newspaper for a pound. Is this irrational? I hope not.

Comment author: 19 January 2008 11:30:54AM 2 points [-]

When I made the (predictable, wrong) choice, I wasn't using probability at all. I was using intuitive rules of thumb like: "don't gamble", "treat small differences in probability as unimportant", and "if you have to gamble against similar odds, go for the larger win".

How do you find time to use authentic probability math for all your chance-taking decisions?

Comment author: 17 December 2012 01:48:30AM *  2 points [-]

That's exactly how I felt too.

"Don't gamble" is the key. 1A allowed me to indulge that even if I was boxed into being in the game.

So in question 2 I want to follow "don't gamble", but both are gambling. Additionally, both gambles would feel like the same risk to most humans who didn't record statistics (other than subconscious and normal memory-affected observations), so they could be cheaply rounded off as the same. If they are "the same" but one pays more money...

Oh, one more point: "easy come, easy go". If you can lose in 2 either way, you won't feel like you ever had anything. However, even before you pick 1A and they physically hand you the money, it's already yours (by virtue of the ability to choose 1A) until you choose 1B and introduce the probability that you won't be paid. I say already yours because if you are guaranteed the choice of 1A forever and unconditionally unless and until you choose 1B - that's no less "having money" than when you "have money" but it's in your pocket or in your wallet in the other room. It might not be your money anymore if you fling your wallet out the window hoping it will boomerang back (1B), but it was until you introduced that gamble rather than just choosing to clutch the wallet (1A).

I feel like I must be missing the point or something, because these seem so obviously right...

Comment author: 19 January 2008 12:06:31PM 4 points [-]

The large sums of money make a big difference here. If it were for dollars, rather than thousands of dollars, I'd do what utility theory told me to do, and if that meant I missed out on $27 due to a very unlucky chance then so be it. But I don't think I could bring myself to do the same for life-changing amounts like those set out above; I would kick myself so hard if I took the very slightly riskier bet and didn't get the money.

Comment author: 19 January 2008 12:14:32PM 1 point [-]

My experience of watching game shows such as 'Deal or No Deal' suggests that people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it, as if it would make their life worse than before they were selected to appear on the show. It seems this fear is in some sense inversely proportional to the 'socially expected' probability of the bad event - so if the player is aware that very few players win less than £1 on the show, they start getting very uncomfortable if there is a high chance of this happening to them, because winning less than £1 is somehow embarrassing, and winning 1p is somehow significantly worse than winning say 50p. In contrast, on game shows where there's a 'double or nothing' option at the end, it is socially accepted that there's a high chance of winning nothing, so players seem to be much more sanguine about the gamble. I think the psychology of 'face' has a lot to answer for when it comes to such decisions.

Comment author: 19 January 2008 12:50:08PM 10 points [-]

People don't maximize expectations. Expectation-maximizing organisms -- if they ever existed -- died out long before rigid spines made of vertebrae came on the scene. The reason is simple: expectation maximization is not robust (outliers in the environment can cause large behavioral changes). This is as true now as it was before evolution invented intelligence and introspection. If people's behavior doesn't agree with the axiom system, the fault may not be with them; perhaps they know something the mathematician doesn't. Finally, the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game _once_, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not.

Comment author: 17 December 2012 02:13:29AM 1 point [-]

I was really confused about what point EY made that went over my head, but I think I get it now. It totally changes the game to play it an infinite number of times rather than one go to win or lose. I made my choices based on one game and not a hybrid between the two of them played multiple times. If I play once, choosing 1A is just taking money that's already mine. If I play infinite times, 1B earns money faster because failing can be evened out.

Comment author: 19 January 2008 02:28:10PM 1 point [-]

tcpkac: no one is assuming away risk aversion. Choosing 1A and 2B is irrational regardless of your level of risk aversion.

Comment author: 19 January 2008 03:52:31PM 0 points [-]

Constant's response implies that if someone prefers 1A to 1B and 2B to 2A, when confronted with the money pump situation, the person will decide that after all, 1A is preferable to 1B and 2A is preferable to 2B. This is very strange but at least consistent.

Comment author: 19 January 2008 04:15:17PM 1 point [-]

"Nainodelac and Tarleton Nick", why are you using my (reversed) name?

steven: not if you're nonlinearly risk averse. As many have suggested, what if you take a large one-time utility hit for taking any risk, but you're not averse beyond that?

Comment author: 19 January 2008 04:15:41PM 1 point [-]

Choosing 1A and 2B is irrational regardless of your level of risk aversion.

No, only if the utility of avoiding risk is worth less than the money at risk. Duh.

Comment author: 19 January 2008 04:22:49PM 4 points [-]

Your description is not a money pump. A money pump occurs when you prefer A > B and B > C and C > A. Then someone can trade you in a round robin, taking a little out for themselves each cycle. I don't feel like typing in an illustration, so see Robyn Dawes, Rational Choice in an Uncertain World. There is a significant difference between single and iterative situations. For a single play I would prefer 1A to 1B and 2B to 2A. If it were repeated, especially open-endedly, I would prefer 1B to 1A for its slightly greater expected payoff. This is analogous, I think, to the iterated versus one-time prisoner's dilemma; see Axelrod's Evolution of Cooperation for an interesting discussion of how they differ.

Comment author: 19 January 2008 05:10:05PM 5 points [-]

How trustworthy is the randomizer? I'd pick B in both situations if it seemed likely that the offer were trustworthy. But in many cases, I'd give some chance of foul play, and it's FAR easier for an opponent to weasel out of paying if there's an apparently-random part of the wager. Someone says "I'll pay you $24k", it's reasonably clear. They say "I'll pay you $27k unless these dice roll snake eyes" and I'm going to expect much worse odds than 35/36 that I'll actually get paid. So for 1A > 1B, this may be based on expectation of cheating. For 2A < 2B, both choices are roughly equally amenable to cheating, so you may as well maximize your expectation. It seems likely that this kind of thinking is unconscious in most people, and therefore gets applied in situations where it's not relevant (like where you CAN actually trust the probabilities). But it's not automatically irrational.

Comment author: 19 January 2008 06:08:36PM 0 points [-]

It seems to me that your argument relies on the utility of having a probability p of gaining x being equal to p times the utility of gaining x. It's not clear to me that this should be true. The trouble with the "money pump" argument is that the choice one makes may well depend on how one got into the situation of having the choice in the first place. For example, let's assume someone prefers 2B over 2A. It could be that if he were offered choice 1 "out of the blue" he would prefer 1A over 1B, yet if it were announced in advance that he would have a 2/3 chance of getting nothing and a 1/3 chance of being offered choice 1, he would decide beforehand that B is the better choice, and he would stick with that choice even if allowed to switch. This may seem odd, but I don't see why it's logically inconsistent.

Comment author: 19 January 2008 06:16:54PM -1 points [-]

No, only if the utility of avoiding risk is worth less than the money at risk. Duh.

Someone did not read the OP carefully enough. Hint: re-read the definition of the Axiom of Independence.

Comment author: 19 January 2008 06:41:06PM -2 points [-]

Someone isn't thinking carefully enough. Hint: I did not assert that X is strictly preferred to Y.
Comment author: 19 January 2008 07:23:01PM 1 point [-]

Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility, and utility is a function of money that increases slower than linearly. When an agent doesn't maximize expected utility at all, that's something different.

Comment author: 19 January 2008 07:29:32PM 0 points [-]

Do you really want to say that it can be rational to accept a 1/3 chance of participating in a lottery, already knowing that if you got to participate you would change your mind? Risk aversion is (or at least, can be) a matter of taste; this is just a matter of not being stupid.

Comment author: 19 January 2008 09:44:33PM 0 points [-]

Dawes gives a very similar 2-gamble example of a money pump on pg 105 of Rational Choice.

Comment author: 19 January 2008 09:50:29PM 0 points [-]

Caledonian, Nick T: "Risk aversion" in the standard meaning is when an agent maximizes the expectation value of utility

Oh, I agree. I just measure utility differently than you do.

Comment author: 20 January 2008 12:29:28AM 0 points [-]

Caledonian, if utility is any function defined on amounts of money, then if you are maximizing expected utility, you *cannot* fall prey to the Allais paradox. You can define a utility function on gambles that is *not* the expected value of a utility function on amounts of money, but then that function is not *expected* utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.

Comment author: 20 January 2008 01:40:58AM 0 points [-]

you're violating rationality axioms like the one Eliezer gave in the OP

No. Those axioms are "if => then" statements. I'm violating the "if" part.

Comment author: 20 January 2008 02:32:51AM 5 points [-]

Nainodelac, if you prefer 1A to 1B and 2A to 2B, as you should if you need exactly $24,000 to save your life, that is a perfectly consistent preference pattern.

Comment author: 20 January 2008 03:32:28AM 2 points [-]

You can define a utility function on gambles that is *not* the expected value of a utility function on amounts of money, but then that function is not *expected* utility, and you're outside of normal models of risk aversion, and you're violating rationality axioms like the one Eliezer gave in the OP.

Having a utility function determined by anything other than amounts of money is irrational? WTF?

Comment author: 20 January 2008 03:42:33AM 2 points [-]

Upon rereading the thread and all of its comments, I suspect the person I originally quoted meant something along the lines of "preferring 1A to 1B but 2B to 2A is irrational", which seems more defensible.

There is nothing irrational about preferring 1A and 2B by themselves, it's choosing the first option in the first scenario and the second in the second that's dodgy.

Comment author: 20 January 2008 03:42:35AM 0 points [-]

Nick is right to object, but removing the phrase "on amounts of money" makes the statement unobjectionable -- and relevant and true.

Comment author: 20 January 2008 04:59:09AM 1 point [-]

Is Pascal's Mugging the reductio ad absurdum of expected value?

Comment author: 20 January 2008 05:29:10AM 2 points [-]

This may be related to the phenomenon of overconfident probability estimates. I would not be surprised to find that people who claim a 97% certainty have a real 90% probability of being right. Maybe someone who hears there's 1 chance in 34 of winning nothing interprets that as coming from an overconfident estimator whereas the 34% and 33% probabilities are taken at face value.

On the other hand, the overconfidence detector seems to stop working when faced with asserted certainty.

Comment author: 20 January 2008 05:34:48AM 1 point [-]

"Nainodelac and Tarleton Nick": This is not about risk aversion. I agree that if it is vital to gain at least $20,000, 1A is a superior choice to 1B. However, in that case, 2A is also a superior choice to 2B. The error is not in preferring 1A, but in simultaneously preferring 1A and 2B. Comment author: 20 January 2008 06:05:43AM 1 point [-] Is Pascal's Mugging the reductio ad absurdum of expected value? No. I thought it might be! But Robin gave an excellent reason of why we should genuinely penalize the probability by a proportional amount, dragging the expected value back down to negligibility. (This may be the first time that I have presented an FAI question that stumped me, and it was solved by an economist. Which is actually a very encouraging sign.) Comment author: 20 January 2008 06:23:03AM 0 points [-] This discussion reminded me of the Torture vs. Dust Specks discussion; i.e. in that discussion, many comments, perhaps a majority, amounted to "I feel like choosing Dust Specks, so that's what I choose, and I don't care about anything else." In the same way, there is a perfectly consistent utility function that can prefer A1 to B1 and B2 to B1, namely one that sets utility on "feeling that I have made the right choice", and which does not set utility on money or anything else. Both in this case and in the case of the Torture and Dust Specks, many comments indicate a utility function which places value on the feeling of having made a right choice, without regard for anything else, especially for whether or not the choice was actually right, or for the consequences of the choice. Comment author: 20 January 2008 09:17:00AM 0 points [-] Not sure if anyone pointed this out, but in a situation where you don't trust the organizer, the proper execution of 1A is a lot easier to verify than the proper execution of 1B, 2A and 2B. 1A minimizes your risk of being fooled by some hidden cleverness or violation of the contract. In 1B, 2A and 2B, if you lose, you have to verify that the random number generator is truly random. This can be extremely costly. In option 1A, verification consists of checking your bank account and seeing that you gained$24,000. Straightforward and simple. Hardly any risk of being deceived.

Comment author: 20 January 2008 05:30:00PM 1 point [-]

I hate to discuss this again, but...

Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans.

Comment author: 20 January 2008 06:10:00PM 0 points [-]

It's simple to show that no rational person would *actually* give money to a Pascal mugger, as the next mugger might threaten 4^^^4 people. I'm not sure whether this solves the problem or just sweeps it under the rug, though.

Comment author: 20 January 2008 10:27:00PM 0 points [-]

Well, if Pascal's Mugging doesn't do it, how about the St. Petersburg paradox? ;)

Oh wait... infinite set atheist... never mind.

Comment author: 20 January 2008 10:37:00PM 0 points [-]

I'm afraid I don't follow the maths involved, but I'd like to know whether the equations work out differently if you take this premise:

- Since 1A offers a certainty of $24,000, it is deemed to be immediately in your possession. 1B then becomes a 33/34 chance of winning $3,000 and 1/34 chance of losing $24,000.

Can someone tell me how this works out mathematically, and how it then compares to 2B?

Comment author: 21 January 2008 01:53:00PM 0 points [-]

The Allais Paradox is indeed quite puzzling. Here are my thoughts:

0. Some commenters simply dismiss Bayesian reasoning. This doesn't solve the problem, it just strips us of any mathematical way to analyze the problem. On the other hand, the fact that the inconsistent choice seems ok does mean that the Bayesian way is missing something. Simply dismissing the inconsistent choice doesn't solve the problem either.

1. If I understand correctly, you argue that situation 1 can be turned into situation 2 by randomization. In other words, if you sell me situation 1, I can sell somebody else (named X) situation 2 by throwing some dice and using your offer. More specifically, I throw a 100-sided die. If it's > 34, X loses. Otherwise, I play X's option with you. However, this can't be reversed. Given only situation 2, I can't sell situation 1, assuming I have only $0 initial capital.

Hence, it seems that assuming invertibility of situations (I can both buy and sell them) and unlimited money buffers for that purpose are important for the demanded consistency.

Comment author: 21 January 2008 04:08:00PM 0 points [-]

Nick,

"Is Michael Vassar's variant Pascal's Mugging (with the pigs), bypassing as it does Robin's objection, the reductio of expected value? If you don't care about pigs, substitute something else really really bad that doesn't require creating 3^^^3 humans."
The Porcine Mugging doesn't bypass the objection. Your estimates of the frequency of simulated people and pigs should be commensurably vast, and it is vastly unlikely that your simulation (out of many with intelligent beings) will be selected for an actual Porcine Mugging that will consume vast resources (enough to simulate vast numbers of humans). These things offset to get you workable calculations.

Comment author: 24 January 2008 02:21:00PM 1 point [-]

I would have chosen 1A and 2B, for the following reasons: Any sum of the order of $20,000 would revolutionize my personal circumstances. The likely payoff is enormous. Therefore, I'd pick 1A because I'd get such a sum guaranteed, rather than run the 3% risk (1B) of getting nothing at all. Whereas choice 2 is a gamble either way, so I am led to treat both options as qualitatively the same. But that's a mistake: if the value of getting either nonzero payoff at all is so great, then I should have favored the 34% chance of winning something over the 33% chance, just as I favored the 100% chance over the ~97% chance in choice 1. Interesting.

Comment author: 24 January 2008 04:13:00PM 0 points [-]

Surely the answer is dependent on the goal criterion. If the goal is to get 'some' money, then the 100% option and the 34% option are better. If your goal is to get 'the most' money, then the 97% and the 33% options are better. However, the goal might be socially constructed. This reminded me of John Nash, who offered one of his secretaries $15 if she shared it equally with a co-worker but $10 if she kept it for herself. She took the $15 and split it with her co-worker. She chose an option that maximised her social capital but was a weaker one economically.

Comment author: 07 September 2008 06:49:00PM 0 points [-]

I agree with Dagon.

This experiment assumes that the subjective probabilities of participants were identical to the stated probabilities. In reality, I feel like people are probably wary of stated probabilities due to experiences with or fears of shysters and conmen. That is, if asked to choose between 1A and 1B, 1B offers the possibility that the 'randomising mechanism' that the experimenter is offering is in fact rigged.

Even if the experimenter is completely honest in their statement of their own subjective probabilities, they may simply disagree with those of the participants. Whatever 'randomising mechanism' is suggested is, of course, almost certainly completely predictable given sufficient information - a die roll, or similar, predictable using Newtonian mechanics. That is, the experimenter's stated probability is purely a reflection of their own information concerning that mechanism, which may be completely at odds with the participant's knowledge.

Comment author: 01 February 2009 10:52:00PM 7 points [-]

Eliezer, I see from this example that the Axiom of Independence is related to the notion of dynamic consistency. But, the logical implication goes only one way. That is, the Axiom of Independence implies dynamic consistency, but not vice versa. If we were to replace the Axiom of Independence with some sort of Axiom of Dynamic Consistency, we would no longer be able to derive expected utility theory. (Similarly with dutch book/money pump arguments, there are many ways to avoid them besides being an expected utility maximizer.)

I'm afraid that the Axiom of Independence cannot really be justified as a basic principle of rationality. Von Neumann and Morgenstern probably came up with it because it was mathematically necessary to derive Expected Utility Theory, then they and others tried to justify it afterward because Expected Utility turned out to be such an elegant and useful idea. Has anyone seen Independence proposed as a principle of rationality prior to the invention of Expected Utility Theory?

Comment author: 05 March 2011 10:16:46PM 1 point [-]

I'm equally afraid ;). The Axiom of Independence is intuitively appealing to me, but I don't posit it to be a basic principle of rationality, because that smells like a mind projection fallacy. I suspect you're right, also, about dutch book/money pump arguments.

I tentatively conclude that a rational agent need not evince preferences that can be represented as an attempt to maximize such a utility function. That doesn't mean Expected Utility Theory can't be useful in many circumstances or for many agents, but this still seems like important news, which merits more discussion on Less Wrong.

Comment author: 06 March 2011 10:33:36AM 2 points [-]

which merits more discussion on Less Wrong.

Comment author: 04 May 2009 11:18:00AM 2 points [-]

Agree with Denis. It seems rather objectionable to describe such behaviour as irrational. Humans may well not trust the experimenter to present the facts of the situation to them accurately. If the experimenter's dice are loaded, choosing 1A and 2B could well be perfectly rational.

Comment author: 04 May 2009 12:03:00PM 0 points [-]

"That is, the Axiom of Independence implies dynamic consistency, but not vice versa."
Really? A hyperbolic discounter can conform to the Axiom of Independence at any particular time and be dynamically inconsistent.

Comment author: 25 August 2010 04:59:43PM 1 point [-]

I would love to know if the results are different if you repeatedly expose people to the situation rather than communicate it in a formal way. They are likely to observe the outcomes of their strategy and adapt. Perhaps what is being measured is simply the numeracy of the subjects and not their practical inability to determine optimal strategies.

The lottery is another interesting example, what is being bought is the probability of a big win, not a statistically optimal investment. Playing the lottery genuinely increases the chance of you suddenly gaining a life changing amount of money. This is a perfectly rational choice.

Comment author: 25 August 2010 06:13:39PM 1 point [-]

This is a perfectly rational choice.

What about the Allais paradox? Imagine someone who is happy to play the lottery but would refuse to play an alternative version where the ticket merely confers a slight increase on a significant pre-existing probability of winning 'life changing money'. (As I understand it, most/all lottery players would in fact refuse the 'alternative' gamble.) Do you want to say that such a person is 'perfectly rational'? Would you call them perfectly rational if they accepted both gambles (despite both of them having negative EV)?

To be fair, It is possible to tell a consistent story about a person for whom either gamble would be rational: Perhaps the Earth is going to be destroyed soon and the cost of entry into the new self-sustaining Mars colony equals the lottery jackpot.

But needless to say, most people aren't in situations remotely resembling this one.

Comment author: 26 August 2010 10:00:00AM 0 points [-]

I think the Allais paradox is fascinating, however, although it is very revealing about our likely motives for playing the lottery it doesn't change the potential rationality of actual playing it. I.e. that money and value don't necessarily have a linear relationship, and so optimising for EV is not rational.

Although, I feel that the likely answer is that the brain is optimised for rapid responses to survival problems and these solutions may well be an optimal response given constraints on both processing and expected outcome.

Another perspective is that in general specifications are not accurate but instead a communication of experience. If the problem specification is viewed instead as a measurement of a system where the placing of bets is an input and the output is not random but the outcome of an unknown set of interactions. Systems encountered in the past will form a probability distribution over their behaviour, the frequency of observed consequences then act as a measurement of the likelihood that the system in question is equivalent to one of these types. This would explain the feeling of switching between the two examples (they constitute the likely outcomes of two types of system) and thus represent situations where distinct behaviours were appropriate.

I.e. as one starts to understand an existing system one gets diminishing returns for optimising interaction with it (a good example is AI programming itself), however systems may be unknown to the user. These unknown systems may demonstrate rare, but highly beneficial or unexpected events, like noticing an anomaly in a physics experiment. In this case it is rational to play/interact as doing so provides more information which may be used to identify the system and thus lead to understanding and thus an expected benefit in the future.

Comment author: 26 August 2010 10:44:27AM 1 point [-]

I think the Allais paradox is fascinating, however, although it is very revealing about our likely motives for playing the lottery it doesn't change the potential rationality of actual playing it. I.e. that money and value don't necessarily have a linear relationship, and so optimising for EV is not rational.

Of course, that just means you maximise expected utility rather than expected money. (I was almost going to write "expected value" instead of "expected utility" as you used the word "value", but obviously that would be confusing in this context...)

Comment author: 26 August 2010 12:55:20PM 0 points [-]

Yes, absolutely, apologies for my unfamiliarity with the terms.

The point I'm trying to make is that lottery playing optimises utility (assuming utility means what is considered valuable to the person). Saying that lottery playing is irrational is making a statement about what is valuable more than it does about what is reasonable.

Comment author: 14 November 2010 12:33:52PM 0 points [-]

Imagine someone who is happy to play the lottery but would refuse to play an alternative version where the ticket merely confers a slight increase on a significant pre-existing probability of winning 'life changing money'. (As I understand it, most/all lottery players would in fact refuse the 'alternative' gamble.)

This is likely because playing the lottery gives you "hope" of a life-changing event. It means that you KNOW there is a possible life-changing event available.

If you already have that knowledge, then paying for the lottery becomes just about the money; which isn't worthwhile. If you don't, paying for the lottery is buying that knowledge, and the knowledge has value to you.

Comment author: 14 November 2010 12:26:57PM 1 point [-]

Ummm, no. The money pump fails because of the REASON for the preference difference.

The reason is, as some have already stated, that in scenario 1B if you lose you know it's your fault you got nothing. In scenario 2B if you lose, you can rationalise it easily as "Would have lost anyway"

In your money pump scenario, we have a 1/3rd chance of playing 1. If we get to play 1, we know we're playing 1. So your money pump fails, because a standard player would prefer that the switch be on A at all times.

Comment author: 07 December 2010 05:59:55PM 2 points [-]

How do I alleviate feeling pleased at myself for having read the statement of the paradox - that people preferred 1A>1B but 2B>2A - and immediately going "WHAT?" and boggling at the screen and pulling confused faces for about thirty seconds, so flabbergasted I had to reread that this choice pattern was common?

(Personally I'm really strongly biased these days toward a bird in the hand and would have chosen 1A and 2A every time. I occasionally do bits of sysadmin for dodgy dot-coms that friends are working for. There are people who offer equity; I take an hourly fee. "No, no, that's fine, I am but humble roadie." This may not always be the best life strategy, but it seems to work for me at present.)

Comment author: 07 December 2010 06:20:16PM *  1 point [-]

There are people who offer equity; I take an hourly fee.

Penalise expected value of equity because probability is lower than I have been led to believe - an incredibly useful heuristic.

How do I alleviate feeling pleased at myself

In 33/34ths of the worlds where you make choice A in 1, you are mercilessly teased and mocked by your inferiors, a la this, thirty seconds in, for not picking B. Assuming counterfactual outcomes are revealed.

Comment author: 07 December 2010 06:23:28PM 1 point [-]

I'll just have to cry myself to sleep on a big bed made of $24,000!

Comment author: 12 April 2011 05:41:19PM 9 points [-]

It took me 30 minutes of sitting down and doing math before I could finally accept that 1A+2B was an irrational preference. I finally realized that a lot of it came down to: with a 66% vs 67% chance of losing, I could take the riskier option and not feel as bad, because I could sweep it under the rug with "oh, I probably would have lost anyways." Once I ran a scenario where I'd KNOW whether it was that 1% that I controlled, or the 66% that I didn't control, that comfort evaporated. I learned a lot about myself by working through this exercise, so thank you very much :)

Comment author: 26 May 2011 03:15:27PM 0 points [-]

The problem as stated is hypothetical: there is next to no context, and it is assumed that the utility scales with the monetary reward. Once you confront real people with this offer, the context expands, and the analysis of the hypothetical situation falls short of being an adequate representation of reality, not necessarily because of a fault of the real people.

Many real people use a strategy of "don't gamble with money you cannot afford to lose"; this is overall a pretty successful strategy (and if I was looking to make some money, my mark would be the person who likes to take risks - just make him subsequently better offers until he eventually loses, and if he doesn't, hit him over the head, take the now substantial amount of money and run). To abandon this strategy just because in this one case it looks as if it is somewhat less profitable might not be effective in the long run. (In other circumstances, people on this site talk about self-modification to counter some expected situations as one-boxing vs. dual-boxing; can we consider this strategy such a self-modification?)

Another useful real-life strategy is, "stay away from stuff you don't understand" - $24,000 free and clear is easier to grasp than the other offer, so that strategy favors 1A as well, and doesn't apply to 2A vs. 2B because they're equally hard to understand. The framing of offer two also suggests that the two offers might be compared by multiplying percentage and values, while offer 1 has no such suggestion in branch 1A.

We're looking at a hypothetical situation, analysed for an ideal agent with no past and no future - I'm not surprised the real world is more complex than that.

Comment author: 26 May 2011 04:11:10PM 0 points [-]

The problem is not with the hypothetical. It is with the intuition - intuitions which really do prompt bad decisions in real-life circumstances along these lines.

Comment author: 27 May 2011 02:11:40AM 0 points [-]

You seem to have examples in mind?

Comment author: 27 May 2011 02:22:24AM 1 point [-]

The lottery comes immediately to mind. You can't be absolutely sure that you'll lose.

Comment author: [deleted] 26 May 2011 04:17:53PM *  4 points [-]

it is assumed that the utility scales with the monetary reward.

Not necessarily. It is assumed that receiving $24,000 is equally good in either situation. Your utility function can ignore money entirely (in which case 1A<1B and 2B>2A is irrational because you should be indifferent in both cases). You can use the utility function which prefers not to receive monetary rewards divisible by 9: in this case, 1A>1B and 2A>2B is your best bet, giving you 100% and 34% chances to avoid 9s, rather than 0% chances. In general, your utility function can have arbitrary preferences on A and B separately; but no matter what, it will prefer 1A to 1B if and only if it prefers 2A to 2B.

As for the rest of your reply -- yes, it is true that real people use strategies ("heuristic" is the word used in the original post) that lead them to choose 1A and 2B. That's sort of why it's a paradox, after all. However, these strategies, which work well in most cases, aren't necessarily the best in all cases. The math shows that. What the math doesn't tell us is which case is wrong.

My own judgment, for this particular sum of money (which is high relative to my current income), is that choice 1A is correctly better than choice 1B, in order to avoid risk. However, choice 2A is also better than choice 2B, upon reflection, even though my intuitions tell me to go with 2B. This is because my intuitions aren't distinguishing 33% and 34% correctly. In reality, faced with the opportunity to earn amounts on the order of $20K, I should maximize my chances to walk away with something. In the first case, I can maximize them fully, to 100%, which triggers my "success!" instinct or whatever: I know I've done everything I can because I'm certain to get lots of money. In the second case, I don't get any satisfaction from the correct decision, because all I've done is improve my chances by 1%.

In general, the heuristic that 1% chances are nearly worthless is correct, no matter what's at stake: I can usually do better by working on something that will give me a 10% or 25% chance. In this case, this heuristic should be ignored, because there is no effort spent making the improvement, and furthermore, there isn't really anything else I can do.

On the other hand, suppose that the amount of money at stake is $2.40 or$2.70. Suddenly, our risk-aversion heuristic is no longer being triggered at all (unless you're really strapped for cash), and we have no problem doing the utility calculation. Here, 1A<1B and 2A<2B is the correct choice.

Comment author: 27 May 2011 02:10:10AM *  0 points [-]

The utility function has as its input only the monetary reward in this particular instance. Your idea that risk-avoidance can have utility (or that 1% chances are useless) cannot be modelled with the set of equations given to analyse the situation (the percentage is no input to the U() function) - the model falls short because the utility attaches only to the money and nothing else. (Another example of a group of individuals for whom the risk might out-utilize the reward are gambling addicts.) Security is, all other things being equal, preferred over insecurity, and we could probably devise some experimental setup to translate this into a utility money equivalent (i.e. how much is the test subject prepared to pay for security and predictability? that is the margin of insurance companies, btw). :-P

I wanted to suggest that a real-life utility function ought to consider even more: not just to the single case, but the strategies used in this case - do these strategies or heuristics have better utility in my life than trying to figure out the best possible action for each problem? In that case, an optimal strategy may well be suboptimal in some cases, but work well re: a realistic lifetime filled with probable events, even if you don't contrive a $24000 life-or-death operation. (Should I spend two years of my life studying more statistics, or work on my father's farm? The farm might profit me more in the long run, even if I would miss out if somebody made me the 1A/1B offer, which is very unlikely, making that strategy the rational one in the larger context, though it appears irrational in the smaller one.) Comment author: [deleted] 27 May 2011 06:34:08PM * 1 point [-] Risk-avoidance is captured in the assignment of U($X). If the risk of not getting any money worries you disproportionately, that means that the difference U($24K) - U($0) is higher than 8 times the difference U($27K) - U($24K).

Comment author: 27 May 2011 09:30:56PM *  0 points [-]

That's a neat trick, however, I am not sure I understand you correctly. You seem to be saying that risk-avoidance does not explain the 1A/2B preference, because you say your assignment captures risk-avoidance, and it doesn't lead to that. (It does lead to your take of the term though - your preference isn't 1A/2B, though).

Your assignment looks like "diminishing utility", i.e. a utility function where the utility scales up subproportionally with money (e.g. twice the money must have less than twice the utility). Do you think diminishing utility is equivalent to risk-avoidance? And if yes, can you explain why?

Comment author: [deleted] 27 May 2011 10:31:04PM *  0 points [-]

I think so, but your question forces me to think about it harder. When I thought about it initially, I did come to that conclusion -- for myself, at least.

[I realized that the math I wrote here was wrong. I'm going to try to revise it. In the meantime, another question. Do you think that risk avoidance can be modeled by assigning an additional utility to certainty, and if so, what would that utility depend on?]

Also, thinking about the paradox more, I've realized that my intuition about probabilities relies significantly on my experience playing the board game Settlers of Catan. Are you familiar with it?

Comment author: 28 May 2011 11:05:59AM *  0 points [-]

One way to get the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability of getting it), and define U(x,p)=2x if p=1 and U(x,p)=x otherwise. I doubt that this is a useful model of reality, but mathematically, it would do the trick. My stated opinion is that this special case should be looked at in the light of more general strategies/heuristics applied over a variety of situations, and this approach would still fall short of that.
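To check that this patch really does produce the observed pattern, here is a minimal sketch of that U(x, p) - a toy model, as the comment above says, not a claim about real preferences:

```python
def u(x, p):
    """Toy utility with a certainty bonus: the reward counts double when p == 1."""
    return 2 * x if p == 1 else x

def expected_utility(gamble):
    """gamble: list of (probability, monetary outcome) pairs."""
    return sum(p * u(x, p) for p, x in gamble)

gambles = {
    "1A": [(1.0, 24_000)],
    "1B": [(33/34, 27_000), (1/34, 0)],
    "2A": [(0.34, 24_000), (0.66, 0)],
    "2B": [(0.33, 27_000), (0.67, 0)],
}

for name, g in gambles.items():
    print(name, round(expected_utility(g)))
# 1A = 48000 > 1B ≈ 26206, while 2A = 8160 < 2B = 8910,
# so this model prefers 1A over 1B and 2B over 2A.
```

As the next reply points out, the hard p == 1 cutoff is the fragile part of this model.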

I know Settlers of Catan, and own it. It's been awhile since I last played it, though.

Your point about games made me aware of a crucial difference between real life and games, or other abstract problems of chance: in the latter, chances are always known without error, because we set the game (or problem) up to have certain chances. In real life, we predict events either via causality (100% chance, no guesswork involved, unless things come into play we forgot to consider), or via experience / statistics, and that involves guesswork and margins of error. If there's a prediction with a 100% chance, there is usually a causal relationship at the bottom of it; with a chance less than 100%, there is no such causal chain; there must be some factor that can thwart the favorable outcome; and there is a chance that this factor has been assessed wrong, and that there may be other factors that were overlooked. Worst case, a 33/34 chance might actually only be 30/34 or less, and then I'd be worse off taking the chance. Comparing a .33 with a .34 chance makes me think that there's gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense.

Comment author: [deleted] 29 May 2011 02:02:43PM *  1 point [-]

One way to do it to get to the desired outcome is to replace U(x) with U(x,p) (with x being the money reward and p the probability to get it), and define U(x,p)=2x if p=1 and U(x,p)=x, otherwise.

The problem with this is that dealing with p=1 is iffy. Ideally, our certainty response would be triggered, if not as strongly, when dealing with 99.99% certainty -- for one thing, because we can only ever be, say, 99.99% certain that we read p=1 correctly and it wasn't actually p=.1 or something! Ideally, we'd have a decaying factor of some sort that depends on the probabilities being close to 1 or 0.

The reason I asked is that it's very possible that a correct model of "attaching a utility to certainty" would be equivalent to a model with diminishing utility of money. If that were the case, we would be arguing over nothing. If not, we'd at least stand a chance of formulating gambles that clarify our intuitions, provided we knew what the alternatives were.

Comparing a .33 with a .34 chance makes me think that there's gotta be a lot of guesswork involved, and that, with error margins and confidence intervals and such, there's usually a sizeable chance that the underlying probabilities might be equal or reversed, so going for the higher reward makes sense.

If the 33% and 34% chances are in the middle of their error margins, which they should be, our uncertainty about the chances cancels out and the expected utility calculation comes out the same as with the stated probabilities. Going for the higher expected value still makes sense.
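(A one-line check, in my own notation: if the true chance is the stated 33% plus a zero-mean error $\epsilon$, linearity of expectation gives

$E\bigl[(0.33+\epsilon)\,U(\$27\text{K})\bigr] = 0.33\,U(\$27\text{K}) + E[\epsilon]\,U(\$27\text{K}) = 0.33\,U(\$27\text{K}),$

and likewise for the 34% side, so the comparison is unchanged.)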

I brought up Settlers of Catan because, if I imagine a tile on the board with $24K and 34 dots under it, and another tile with$27K and 33 dots, suddenly I feel a lot better about comparing the probabilities. :) Does this help you, or am I atypical in this way?

Imagine you are a mathematical advisor to a king who asks you to advise him of a course of action and to predict the outcome.

Obviously with the advisor situation, you have to take your advisee's biases into account. The one most relevant to risk avoidance is, I think, the status quo bias: rather than taking into account the utility of the outcomes in general, the king might be angry at you if the utility becomes worse, and not as picky if the utility becomes better (than it is now). You have to take your own utility into account, which depends not on the outcome but on your king's satisfaction with it.

Comment author: 26 August 2011 11:39:18AM 3 points [-]

I wonder how the results would change if the experiment were modified so that the outcomes of 2B are, "You have a 33% chance of receiving $27k, a 66% chance of not getting anything, and a 1% chance of having someone laugh in your face for not picking 2A."

Comment author: 13 December 2011 10:01:45AM *  0 points [-]

If you asked any person capable of doing the math whether they would want to play 1A or 1B a thousand times, you'd probably get a different answer, but not an answer that's more correct. Also, the utility value of money is not directly proportional to the amount of money. Imagine that you needed $1,000 to save your dying relative with certainty by paying for his/her treatment. That is good enough for explaining 1A > 1B, but it doesn't resolve the contradiction with 2B > 2A.

But an even more revealing modification rests precisely on certainty. Suppose you were presented with these two questions in such a fashion that you would get the money, and learn the result, one month after being presented with them. By selecting 1A you would have a 0% chance that the plans you make would fail, and with 1B you would have a 1/34 chance that they would fail. Meanwhile, regardless of whether you select 2A or 2B, you will have to face uncertainty. So you would be frustrated while trying to make plans that are conditionally dependent on getting the money.

As these conditions are not present in the problem as stated, it's possible to dismiss these kinds of instinctive judgments as flawed; but as it turns out, they're not foolish on a general level. You could even make the claim that it's costly to perform the calculation that tells you whether the assurance is worth it - but of course, instead of settling for that, you should just figure out how much value this assurance has in each given situation.

Comment author: 13 December 2011 12:37:56PM 3 points [-]

You're right that certainty helps out with planning, and so certainty can be valuable sometimes. It's still a bias to unconsciously add in a value for certainty if you don't need it in this case, even if it sometimes pays off, and so it's worth thinking through the 'paradox.'

Comment author: 13 December 2011 06:30:38PM 0 points [-]

I wanted to point out that this flaw is not a foolish flaw. That's how we create plans: we project and create expectations, and the anticipated feeling of loss is frustrating to plan around. In a theoretical example you might make a bad decision, but isn't it also true that this flaw causes you to make good decisions in actual real-world situations? Those don't tend to occur in such theoretical forms, where you have all the required information available and the context has been stripped away.

If you actually encountered this problem in a real-world situation, you might end up making a bad decision by handling it with too theoretical an approach. What if I told you that you get to play both games, and actually get to choose in both, when you come to visit me - but you didn't have money to pay for the ticket to fly over? What if you took out a loan? Without the certainty of 1A, you might end up in a bad situation where you lack the means to pay the loan back - in other words, a decision-making agent with this flaw handles the situation well. But of course you can take all that into account. And as this is a problem dealing with rationality, I think it's pretty important to note these things.

Anyway I agree with you, Vaniver =)

Comment author: 29 December 2011 03:46:04PM *  2 points [-]

Please correct me if any of my assumptions are inaccurate, and I apologize if this comment comes off as completely tautological.

Expected utility is explicitly defined as the statistic

$\sum_{{x}\in{X}}{p(x)U(x)}$

where X is the set of all possible outcomes associated with a particular gamble, p(x) is the probability of outcome x within the gamble, and U(x) is the utility of outcome x, a function that must be strictly increasing with respect to the monetary value of outcome x.

To reduce ambiguity:

• 1A, 1B, 2A, and 2B are instances of gambles.

• For 1B, the possible outcomes are $27,000 and $0.

• For 1B, the expected utility is p($27000) * U($27000) + p($0) * U($0) = 33/34 * U($27000) + 1/34 * U($0).

If you choose 1A over 1B and 2B over 2A, what can we conclude?

• that you are not using the rule "maximize expected utility" to make your decisions. Thus you violate the Axiom of Independence, which is part of the definition of consistent decision making.

If you choose 1A over 1B and 2B over 2A, what can we not conclude?

• that your decision rule changes arbitrarily. You could, for example, always follow the rule, "Maximize minimum net utility. In the case of a tie, maximize expected utility." In this case, you would choose 1A and 2B (see the sketch after this list).

• that you would be wrong or stupid for using a different decision rule when you only get to play once than the rule you would use when you get to play 100 times.
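As a sanity check on that maximin-with-expected-utility-tiebreak rule, here is a minimal sketch; the linear utility is my own assumption (the tiebreak in game 2 depends on the shape of U, and goes to 2B for any U close enough to linear):

```python
def decide(gambles, utility=lambda x: x):
    """Pick the gamble with the highest minimum utility; break ties by expected utility."""
    def rank(gamble):
        worst = min(utility(x) for _, x in gamble)
        expected = sum(p * utility(x) for p, x in gamble)
        return (worst, expected)
    return max(gambles, key=lambda name: rank(gambles[name]))

game1 = {"1A": [(1.0, 24_000)],
         "1B": [(33/34, 27_000), (1/34, 0)]}
game2 = {"2A": [(0.34, 24_000), (0.66, 0)],
         "2B": [(0.33, 27_000), (0.67, 0)]}

print(decide(game1), decide(game2))  # -> 1A 2B
```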

Comment author: 29 December 2011 08:08:32PM 0 points [-]

That all seems pretty uncontroversial.

Comment author: 15 January 2012 07:28:45PM *  0 points [-]

I initially chose 1A and 2B, but after reading the analysis of those decisions, I agree that they are inconsistent in a way that implies that one choice was irrational (in the context of this silly little game). So I did some introspection to figure out where I went wrong. Here's what I found:

1) I may have misjudged how small 1/34 is, and this only became apparent when the question was phrased as it is in example 2.

2) I think I assumed implicit costs in these gambles. The first cost is a delay in learning the outcome of these gambles; the second is the implicit need to work to earn this money. I think that these assumptions are reasonable because there is essentially no realistic condition in which I would instantly see the results of a decision that might earn me $27,000; there would probably be a delay of several months (if working) or years (if investing) between making the decision and learning whether I got the money or not. This prolonged uncertainty has a negative utility, since I am unable to make firm plans for the money during that interval. This negative utility would apply to all options except 1A.

Furthermore, earning $24,000 would realistically require several months of work on my part. However, a project that had a 1/3 chance of paying out $24,000 might only take a month. The implicit difference in opportunity cost between scenario 1 and scenario 2 has implications for the marginal utility of money in each scenario (making me more risk-averse in scenario 1, which implicitly has a higher opportunity cost).

These implicit costs are not specified in this game, so it is technically "irrational" to incorporate them into my decision-making. However, in any realistic scenario, such costs will exist (regardless of what the salesman says), so it is good that I/we intuitively include them in my/our decision-making.

Comment author: 18 April 2012 12:40:42AM *  0 points [-]

While Eliezer's argument is still correct (that you should multiply to make decisions based on probabilistic knowledge), I see a perfectly rational and utilitarian explanation for choosing 1A and 2B in the stated problem. The clue lies in Colin Reid's comment: "people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it". This fear is explained by Kingreaper: "in scenario 1B if you lose you know it's your fault you got nothing". That makes the two cases, stated as they are, different. In game 1 the utility U1($0) has a negative value: a sense of guilt (or shame) over having made the bad choice, which doesn't seem possible in game 2 (because game 2 is stated in terms of abstract probabilities, see below).

This makes the two inequalities compatible:

• U($24,000) > 33/34 U($27,000) + 1/34 U1($0)   (e.g. 24 > 33/34 · 27 + 1/34 · (-1000))

• 0.34 U($24,000) + 0.66 U2($0) < 0.33 U($27,000) + 0.67 U2($0)   (e.g. 0.34 · 24 + 0.66 · 0 < 0.33 · 27 + 0.67 · 0)

Note that stating the game with the "switch" rule turns game 2 into one (let's call it 3) in which the guilt/shame reappears, making U3 = U1 -- so a rational player with the described negative U1 would choose A in game 3, and there would be no money pump.

This solution to the paradox is less valid if it is made clear that the subject will be allowed to play the game many times. Another interesting way to remove it as a possible solution would be to restate case 2 in more concrete terms, to make it clear that you won't get away without knowing that "it was your fault" if you lose:

4A. If a 100-sided die falls on <=34, win $24,000; otherwise win nothing.
4B. If a 100-sided die falls on <=33, win $27,000; otherwise win nothing.

Just to prevent the subject from pattern-matching instead of thinking, we should add the phrase "note that if the die falls on a 34 and you've chosen A, you win 24k, but if you've chosen B, you get nothing". I believe game 4 is pretty much equivalent to game 3 (the one with the switch).

I've checked Allais' document and it suffers from the same flaw: it's not an actual experiment in which people are asked to choose A or B and actually allowed to play the game, but a questionnaire asking subjects what they would choose. This is not the same, among other reasons because it doesn't force the experimenter or subject to detail the mechanics of the game (and hence it is not stated whether the subject will be given that sense of shame, or even allowed to "chase the rabbit"). It would be interesting to know the result of an actual experiment with this design, possibly with smaller figures to reduce the non-linearity of the utility functions -- since that's not what's being discussed here -- and with subjects filtered against innumeracy (since those are out of hope anyway).

Comment author: 18 April 2012 01:28:34AM 2 points [-]

That makes the two cases, stated as they are, different. In game 1 the utility U1($0) has a negative value: a sense of guilt (or shame) over having made the bad choice, which doesn't seem possible in game 2 (because game 2 is stated in terms of abstract probabilities, see below).

If you could choose whether or not to have this guilt, would you choose to have it? Does it make you better off?

Comment author: 01 May 2012 05:53:49AM 3 points [-]

I know this was posted 4 years ago, but I had a thought. If I was offered a certainty of $24,000 vs a 33/34 chance of $27,000, my preference would depend on whether this was a once-off. If this was a once-off, my primary concern would be securing the money and being able to put food on the table tonight. Option 1 will put food on the table with 100% certainty, while Option 2 will not.

If, however, the option was to be offered many times, I would optimise for greatest return - Option 2. If I miss out this month, I'll just scrape for food until next month, when chances are I'll get the money.

I think I just answered my own question. If my goal can be reached with $24,000, then Option 1 is the best one because it reaches the goal in one guaranteed fell swoop. However, if my goal is to make lots of money, then Option 2 is the way to go, because it makes the most over time. Does that make sense to anyone?

Comment author: 01 May 2012 06:22:16AM 4 points [-]

It absolutely can make sense to prefer option 1A over option 1B (which I think is what you mean). What does not make sense is to prefer option 1A over 1B, AND prefer 2B over 2A. It's worth reading the two followup articles before you get into this further: Zut Allais and Allais Malaise. Welcome to Less Wrong!

Comment author: 01 May 2012 07:29:10AM *  0 points [-]

This is an old post, but I guess one resolution is that:

U($24,000) > 33/34 U($27,000) + 1/34 U($0 & Regret that I didn't take the $24,000)

Which is consistent with:

0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

It's an interesting psychological fact that the regret is triggered in one case, but not the other.

Comment author: 29 August 2012 10:52:20PM 0 points [-]

I wonder if this bias is somehow trying to compensate for some other bias. Suppose you think the experimenter is overconfident, i.e., their log-odds are twice what they should be; so, when they say 100% they do mean 100%, but when they say 97.1% they actually mean 85.2% (and when they say 34% they mean 41.8%, and when they say 33% they mean 41.2%). Now, Option 1B suddenly looks much uglier, doesn't it? (I'm not claiming this happens consciously.)
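A quick sketch of that deflation, for anyone who wants to check the arithmetic - my own code, taking the stated 97.1% to be exactly 33/34 and simply halving the stated log-odds:

```python
import math

def deflate(p_stated):
    """Recover the 'true' probability, assuming stated log-odds are twice what they should be."""
    if p_stated in (0.0, 1.0):
        return p_stated            # 0% and 100% are fixed points
    log_odds = math.log(p_stated / (1 - p_stated))
    odds = math.exp(log_odds / 2)  # halve the log-odds
    return odds / (1 + odds)

for p in (1.0, 33/34, 0.34, 0.33):
    print(p, round(deflate(p), 3))
# 1.0 -> 1.0, 33/34 -> ~0.852, 0.34 -> ~0.418, 0.33 -> ~0.412
```

With those deflated chances, 1B's expected value drops to about 0.852 · 27,000 ≈ 23,000, below 1A's certain 24,000, while 2B (≈ 0.412 · 27,000 ≈ 11,130) still edges out 2A (≈ 0.418 · 24,000 ≈ 10,030).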

Comment author: 15 October 2012 06:58:56PM *  0 points [-]

If flipping the switch before 12:00 PM has no effect on the amount of money one acquires, why would one pay anything to do it? Why not just flip the switch only once, after 12:00 PM and before 12:05 PM?

Comment author: 02 March 2013 10:25:33PM 0 points [-]

Question: do the rest of you actually find the choice of 1A clearly intuitive?

I think my intuition for examples like this has been safely killed off, so my replacement intuition instead says: "hm, clearly 34*(27-24) > 27, so 1B!" (without actually evaluating 27-24, just noting it's ≥1). Which mainly suggests that I've grown accustomed to calculating expectations out explicitly where they're obvious, not that I'm necessarily good at avoiding real life analogues of the problem.
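For anyone who wants the intermediate step behind that shortcut (my own unpacking, assuming utility linear in dollars and writing the amounts in thousands): 1B has the higher expected value exactly when

$\tfrac{33}{34}\cdot 27 > 24 \;\Longleftrightarrow\; 33\cdot 27 > 34\cdot 24 \;\Longleftrightarrow\; 34\cdot 27 - 27 > 34\cdot 24 \;\Longleftrightarrow\; 34\,(27-24) > 27,$

and since 27 - 24 >= 1, the left-hand side is at least 34 > 27.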

Comment author: 07 March 2013 12:32:33AM *  1 point [-]

do the rest of you actually find the choice of 1A clearly intuitive?

I chose 1B. I seem to be an outlier in that I chose 1B and 2B and did no arithmetic.