While Eliezer's argument is still correct (that you should multiply to make decisions based on probabilistic knowledge), I see a perfectly rational and utilitarian explanation for choosing 1A and 2B in the stated problem.
The clue lies in Colin Reid's comment: "people do not ascribe a low positive utility to winning nothing or close to nothing - they actively fear it". This fear is explained by Kingreaper: "in scenario 1B if you lose you know it's your fault you got nothing".
That makes the two cases, stated as they are, different. In game 1 the utility U1($0) is negative: it includes a sense of guilt (or shame) over having made the bad choice, which doesn't seem possible in game 2 (because game 2 is stated in terms of abstract probabilities; see below).
This makes the inequalities compatible:
U($24,000) > 33/34 U($27,000) + 1/34 U1($0)
e.g. 24 > 33/34 · 27 + 1/34 · -1000
0.34 U($24,000) + 0.66 U2($0) < 0.33 U($27,000) + 0.67 U2($0)
e.g. 0.34 · 24 + 0.66 · 0 < 0.33 · 27 + 0.67 · 0
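Plugging in the illustrative numbers above (utilities in thousands of dollars, with the assumed guilt term U1($0) = -1000) shows both preferences coming out as expected-utility maximizing at once. A quick sketch:

```python
# Utilities in thousands of dollars, as in the worked example above.
U24, U27 = 24, 27
U1_0 = -1000   # game 1: losing feels like your own fault (guilt/shame)
U2_0 = 0       # game 2: losing is just abstract bad luck

EU_1A = U24
EU_1B = 33/34 * U27 + 1/34 * U1_0
EU_2A = 0.34 * U24 + 0.66 * U2_0
EU_2B = 0.33 * U27 + 0.67 * U2_0

print(EU_1A > EU_1B)  # True: choose 1A
print(EU_2B > EU_2A)  # True: choose 2B
```

With the guilt term, a single utility-maximizing agent does prefer 1A in game 1 and 2B in game 2.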
Note that stating the game with the "switch" rule turns game 2 into one (let's call it 3) in which the guilt/shame reappears, making U3=U1 -- so a rational player with the described negative U1 would choose A in game 3 and there would be no money pump.
This solution to the paradox is less valid if it is made clear that the subject will be allowed to play the game many times.
Another interesting way to remove this as a possible solution would be to restate case 2 in more concrete terms, making it clear that you won't get away without knowing that "it was your fault" if you lose:
4A. If a 100-sided die lands on 34 or less, you win $24,000; otherwise you win nothing.
4B. If a 100-sided die lands on 33 or less, you win $27,000; otherwise you win nothing.
Just to prevent the subject from pattern-matching instead of thinking, we should add the phrase: "note that if the die lands on 34 and you've chosen A, you win $24,000, but if you've chosen B, you get nothing".
I believe game 4 is essentially equivalent to game 3 (the one with the switch).
I've checked Allais' paper, and it suffers from the same flaw: it's not an actual experiment in which people are asked to choose A or B and then actually allowed to play the game, but a questionnaire asking subjects what they would choose. That is not the same thing, among other reasons because it doesn't force the experimenter or the subject to spell out the mechanics of the game (and hence it is not stated whether the subject will be given that sense of shame, or even allowed to "chase the rabbit").
It would be interesting to know the result of an actual experiment with this design, possibly with smaller figures to reduce the non-linearity of the utility functions (since that's not what's being discussed here), and with subjects screened for innumeracy (since those are beyond hope anyway).
If you could choose whether or not to have this guilt, would you choose to have it? Does it make you better off?
Choose between the following two options:

1A. Certainty of receiving $24,000.
1B. A 33/34 chance of winning $27,000, and a 1/34 chance of winning nothing.

Which seems more intuitively appealing? And which one would you choose in real life?

Next, choose between the following two options:

2A. A 34% chance of winning $24,000, and a 66% chance of winning nothing.
2B. A 33% chance of winning $27,000, and a 67% chance of winning nothing.
Now which of these two options would you intuitively prefer, and which would you choose in real life?
The Allais Paradox - as Allais called it, though it's not really a paradox - was one of the first conflicts between decision theory and human reasoning to be experimentally exposed, in 1953. I've modified it slightly for ease of math, but the essential problem is the same: Most people prefer 1A > 1B, and most people prefer 2B > 2A. Indeed, in within-subject comparisons, a majority of subjects express both preferences simultaneously.
This is a problem because the 2s are equal to a 34% chance of playing the 1s. That is, 2A is equivalent to playing gamble 1A with 34% probability (and otherwise winning nothing), and 2B is equivalent to playing 1B with 34% probability.
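The equivalence can be checked by composing the probabilities directly. A quick sketch, using only the numbers from the gambles above:

```python
# A 34% chance of playing gamble 1A or 1B, otherwise nothing.
p_play = 0.34

# 1A pays $24,000 with certainty, so composing gives gamble 2A:
p_24k = p_play * 1.0                        # chance of $24,000

# 1B pays $27,000 with probability 33/34, so composing gives 2B:
p_27k = p_play * (33/34)                    # chance of $27,000
p_nothing = p_play * (1/34) + (1 - p_play)  # chance of nothing

print(round(p_24k, 2), round(p_27k, 2), round(p_nothing, 2))  # 0.34 0.33 0.67
```

The composed probabilities match the stated 2A and 2B exactly.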
Among the axioms used to prove that "consistent" decisionmakers can be viewed as maximizing expected utility is the Axiom of Independence: if X is strictly preferred to Y, then a probability P of X and (1 - P) of Z should be strictly preferred to a probability P of Y and (1 - P) of Z.
All the axioms are consequences, as well as antecedents, of a consistent utility function. So it must be possible to prove that the experimental subjects above can't have a consistent utility function over outcomes. And indeed, you can't simultaneously have:

U($24,000) > 33/34 U($27,000) + 1/34 U($0)
0.34 U($24,000) + 0.66 U($0) < 0.33 U($27,000) + 0.67 U($0)

These two equations are algebraically inconsistent, regardless of U: multiply the first inequality through by 0.34 and add 0.66 U($0) to both sides, and you get the exact reverse of the second. So the Allais Paradox has nothing to do with the diminishing marginal utility of money.
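A brute-force illustration (not a proof; the one-line algebra is the proof): sample random real-valued utilities and verify that no assignment makes both preferences hold at once.

```python
import random

def both_preferences_hold(u24, u27, u0):
    """True only if a single utility function U exhibits both 1A > 1B
    and 2B > 2A, which the algebra says is impossible for real U."""
    prefers_1A = u24 > 33/34 * u27 + 1/34 * u0
    prefers_2B = 0.34 * u24 + 0.66 * u0 < 0.33 * u27 + 0.67 * u0
    return prefers_1A and prefers_2B

random.seed(0)
samples = [(random.uniform(-1e6, 1e6),
            random.uniform(-1e6, 1e6),
            random.uniform(-1e6, 1e6)) for _ in range(100_000)]
print(any(both_preferences_hold(*s) for s in samples))  # False
```

No matter what values U($24,000), U($27,000), and U($0) take, the two preferences never coexist, which is why curvature of the utility function can't rescue the subjects.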
Maurice Allais initially defended the revealed preferences of the experimental subjects - he saw the experiment as exposing a flaw in the conventional ideas of utility, rather than exposing a flaw in human psychology. This was 1953, after all, and the heuristics-and-biases movement wouldn't really get started for another two decades. Allais thought his experiment just showed that the Axiom of Independence clearly wasn't a good idea in real life.
(How naive, how foolish, how simplistic is Bayesian decision theory...)
Surely, the certainty of having $24,000 should count for something. You can feel the difference, right? The solid reassurance?
(I'm starting to think of this as "naive philosophical realism" - supposing that our intuitions directly expose truths about which strategies are wiser, as though it were a directly perceived fact that "1A is superior to 1B". Intuitions directly expose truths about human cognitive functions, and only indirectly expose (after we reflect on the cognitive functions themselves) truths about rationality.)
"But come now," you say, "is it really such a terrible thing, to depart from Bayesian beauty?" Okay, so the subjects didn't follow the neat little "independence axiom" espoused by the likes of von Neumann and Morgenstern. Yet who says that things must be neat and tidy?
Why fret about elegance, if it makes us take risks we don't want? Expected utility tells us that we ought to assign some kind of number to an outcome, and then multiply that value by the outcome's probability, add them up, etc. Okay, but why do we have to do that? Why not make up more palatable rules instead?
There is always a price for leaving the Bayesian Way. That's what coherence and uniqueness theorems are all about.
In this case, if an agent prefers 1A > 1B, and 2B > 2A, it introduces a form of preference reversal - a dynamic inconsistency in the agent's planning. You become a money pump.
Suppose that at 12:00PM I roll a hundred-sided die. If the die shows a number greater than 34, the game terminates. Otherwise, at 12:05PM I consult a switch with two settings, A and B. If the setting is A, I pay you $24,000. If the setting is B, I roll a 34-sided die and pay you $27,000 unless the die shows "34", in which case I pay you nothing.
Let's say you prefer 1A over 1B, and 2B over 2A, and you would pay a single penny to indulge each preference. The switch starts in state A. Before 12:00PM, you pay me a penny to throw the switch to B. The die comes up 12. After 12:00PM and before 12:05PM, you pay me a penny to throw the switch to A.
I have taken your two cents on the subject.
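The two-cent pump can be written as a tiny deterministic simulation. A sketch, assuming an agent who prefers 1A > 1B and 2B > 2A and pays one penny to indulge each preference; the function `play` and its die-roll argument are illustrative:

```python
def play(die_roll):
    """One round of the switch game; die_roll is the 12:00 PM roll."""
    pennies_paid = 0
    switch = "A"
    # Before 12:00 the game as a whole is gamble 2, so the agent
    # (preferring 2B) pays a penny to throw the switch to B.
    switch = "B"
    pennies_paid += 1
    if die_roll <= 34:
        # The game survived the 12:00 roll: what remains is exactly
        # gamble 1, so the agent (preferring 1A) pays to switch back.
        switch = "A"
        pennies_paid += 1
    return pennies_paid

print(play(12))  # 2 -- the example roll above: both pennies taken
print(play(87))  # 1 -- game terminates at 12:00; one penny still lost
```

Whenever the die shows 34 or less, the agent ends up exactly where it started, minus two cents.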
If you indulge your intuitions, and dismiss mere elegance as a pointless obsession with neatness, then don't be surprised when your pennies get taken from you...
(I think the same failure to proportionally devalue the emotional impact of small probabilities is responsible for the lottery.)
Allais, M. (1953). Le comportement de l'homme rationnel devant le risque: Critique des postulats et axiomes de l'école américaine. Econometrica, 21, 503-46.
Kahneman, D. and Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47, 263-92.