Expected utility maximisation is an excellent prescriptive decision theory. It has all the nice properties that we want and need in a decision theory, and can be argued to be "the" ideal decision theory in some senses.
However, it is completely wrong as a descriptive theory of how humans behave. Those on this list are presumably aware of oddities like the Allais paradox. But we may retain the notion that expected utility still has some descriptive uses, such as modelling risk aversion. The story here is simple: each subsequent dollar gives less utility (the utility-of-money curve is concave), so people need a premium before accepting a deal with a 50-50 chance of gaining or losing $100.
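To make that standard story concrete, here is a minimal sketch in Python, assuming (purely for illustration - the numbers are mine, not from any source) a square-root utility function and $10 000 of starting wealth:

```python
import math

# Hypothetical concave utility U($x) = sqrt(x), starting wealth $10,000.
w, U = 10_000, math.sqrt
eu_bet = 0.5 * U(w + 100) + 0.5 * U(w - 100)
print(eu_bet < U(w))     # True: the 50-50 +/-$100 flip has lower expected utility
ce = eu_bet ** 2         # certainty-equivalent wealth, since U is sqrt
print(round(ce - w, 2))  # ~ -0.25: this agent would pay ~25 cents to avoid the bet
```

So a concave utility function really does produce a (small) risk premium on bets like this.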
As a story or mental image, it's useful to have. As a formal model of human behaviour on small bets, it's spectacularly wrong, and Matthew Rabin showed why. If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic. Put simply, the small-bet behaviour forces their utility curve to become far too concave.
For illustration, let's introduce Neville. Neville is risk averse: whenever he has less than $20 000 in capital, he will reject a single 50-50 deal where he gains $55 or loses $50. He might accept the deal if he were rich enough, and felt rich - say, once he had $20 000 in capital. I hope I'm not painting a completely unbelievable portrait of human behaviour here! And yet expected utility maximisation then predicts that if Neville had fifteen thousand dollars ($15 000) in capital, he would reject a 50-50 bet that either lost him fifteen hundred dollars ($1 500) or gained him a hundred and fifty thousand dollars ($150 000) - a ratio of a hundred to one between gains and losses!
To see this, first define the marginal utility at $X (written MU($X)) as Neville's utility gain from one extra dollar; in other words, MU($X) = U($(X+1)) - U($X). Since Neville is risk averse, MU($X) ≥ MU($Y) whenever Y > X. Then we get the following theorem:
- If Neville has $X and rejects a 50-50 deal where he gains $55 or loses $50, then MU($(X+55)) ≤ (10/11)*MU($(X-50)).
This theorem is a simple result of the fact that U($(X+55)) - U($X) must be at least 55*MU($(X+55)) (each of the marginal utilities MU($X) up to MU($(X+54)) is at least MU($(X+55))), while U($X) - U($(X-50)) must be at most 50*MU($(X-50)) (each of the marginal utilities MU($(X-50)) up to MU($(X-1)) is at most MU($(X-50))). Since Neville rejects the deal, U($X) ≥ (1/2)*(U($(X+55)) + U($(X-50))), hence U($(X+55)) - U($X) ≤ U($X) - U($(X-50)), hence 55*MU($(X+55)) ≤ 50*MU($(X-50)), and the result follows.
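Since the proof is compact, here's a quick numerical sanity check in Python. The exponential (CARA) utility with a $400 risk tolerance is my own illustrative choice, picked only because it rejects the -$50/+$55 flip at every wealth level - it has nothing to do with Rabin's paper:

```python
import math

def U(x):
    # Hypothetical CARA utility with risk tolerance $400.
    return -math.exp(-x / 400)

def MU(x):
    # Marginal utility of one extra dollar at $x: U($(x+1)) - U($x).
    return U(x + 1) - U(x)

for X in range(1000, 20001, 1000):
    # This utility rejects the 50-50 -$50/+$55 deal at every wealth level...
    assert U(X) >= 0.5 * (U(X + 55) + U(X - 50))
    # ...so, by the theorem, MU($(X+55)) <= (10/11)*MU($(X-50)) must hold.
    assert MU(X + 55) <= (10 / 11) * MU(X - 50)
```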
Hence if we scale Neville's utility so that MU($15000) = 1, then applying the theorem at X = $15050, $15155, $15260, and so on, we know that MU($15105) ≤ 10/11, MU($15210) ≤ (10/11)^2, MU($15315) ≤ (10/11)^3, ... all the way up to MU($19935) = MU($(15000 + 47*105)) ≤ (10/11)^47. Summing the series of MU's from $15000 to $(15000 + 48*105) = $20040, we can see that
- U($20040) - U($15000) ≤ 105*(1 + (10/11) + (10/11)^2 + ... + (10/11)^47) = 105*(1 - (10/11)^48)/(1 - (10/11)) ≈ 1143.
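For those who want to verify the sum, here it is with the geometric series written out directly (my arithmetic, easily rerun):

```python
# Each $105 block above $15000 adds at most 105*(10/11)^k utilons, k = 0..47.
print(105 * sum((10 / 11) ** k for k in range(48)))  # ~1143.1
```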
One immediate result of that is that Neville, on $15000, will reject a 50-50 chance of losing $1144 versus gaining $5000: every dollar below the 15000th has marginal utility at least 1, so the loss costs at least 1144 utilons, more than the 1143 the gain can be worth. But it gets much worse! Suppose the bet is a 50-50 bet that involves losing $1500 - how large does the gain have to be before Neville will accept it? The marginal utilities below $15000 are bounded below, just as those above $15000 are bounded above. So summing the series down 14 blocks, to $(15000 - 14*105) = $13530 > $13500:
- U($15000) - U($13500) ≥ 105*(1 + (11/10) + ... + (11/10)^13) = 105*((11/10)^14 - 1)/((11/10) - 1) ≈ 2937.
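And the corresponding check for the downward series:

```python
# Each $105 block below $15000 costs at least 105*(11/10)^k utilons, k = 0..13.
print(105 * sum((11 / 10) ** k for k in range(14)))  # ~2937.4
```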
So gaining $5040 from $15000 will net Neville (at most) 1143 utilons, while losing $1500 will lose him (at least) 2937. The marginal utility of each dollar above the 20040th is at most (10/11)^47 < 0.012. So we need to add at least (2937 - 1143 - 1)/0.012 ≈ 149416 extra dollars on top of the $5040 before Neville would accept the bet - a total gain of over $154000, comfortably more than $150000 (a numerical check follows the restated result below). So, as was said,
- If Neville had fifteen thousand dollars ($15 000), he would reject a 50-50 bet that either lost him fifteen hundred dollars ($1 500), or gained him a hundred and fifty thousand dollars ($150 000).
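Here is the promised check, putting the three numbers together (rounding (10/11)^47 up to 0.012, as above, gives the quoted ≈ 149416; the unrounded figure is even larger):

```python
gain_cap   = 105 * sum((10 / 11) ** k for k in range(48))  # <= ~1143 utilons for the first $5040
loss_floor = 105 * sum((11 / 10) ** k for k in range(14))  # >= ~2937 utilons lost on the $1500
per_dollar = (10 / 11) ** 47                               # < 0.012 utilons per dollar past $20040
extra = (loss_floor - gain_cap) / per_dollar
print(round(extra))         # ~158000 extra dollars without the rounding
print(round(5040 + extra))  # total gain needed: well over $150000
```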
These bounds are not sharp - the real situation is worse than that. So expected utility maximisation is not merely a flawed model of human risk aversion on small bets - it's a completely ridiculous one. Other models, such as prospect theory, do a better job at the descriptive task, though as usual in the social sciences, they are flawed as well.
That paper has shown up here before, and I still don't like it.
Basically, the way that he presents his 'aversion' criterion may sound innocuous, but it has really pernicious implications. Rabin thinks the pernicious implications mean he's poked a hole in risk aversion - but instead he's just identified an incredibly terrible way to elicit aversion parameters. If any decision analyst's client told them that they'd turn down a -100/+105 bet with $345,000 in the bank, the analyst would start a long talk designed to make the client comfortable with the mathematics of decision-making under uncertainty, not take that as a reflectively endorsed preference.
Put another way, I don't think that Neville as stated actually exists (or is sane if he does exist). He might express those preferences under the framing of {.5 -50; .5 +55}, but I don't think he would reflectively endorse them under the framing of {.5 14950; .5 15055}. Real bets may also be difficult to separate from emotional or status effects, which invalidate the idea that preferences are a function only of wealth level rather than wealth history (a very different sort of aversion than utility functions that are concave in money).
Like you say in the conclusion, prospect theory is a better attempt to understand descriptive decision-making, but concave utility functions are a useful prescriptive tool.