Expected utility maximisation is an excellent prescriptive decision theory. It has all the nice properties that we want and need in a decision theory, and can be argued to be "the" ideal decision theory in some senses.

However, it is completely wrong as a descriptive theory of how humans behave. Readers here are presumably aware of oddities like the Allais paradox. But we may retain the notion that expected utility still has some descriptive uses, such as modelling risk aversion. The story here is simple: each subsequent dollar gives less utility (the utility-of-money curve is concave), so people would need a premium to accept deals where they have a 50-50 chance of gaining or losing $100.

As a story or mental image, it's useful to have. As a formal model of human behaviour on small bets, it's spectacularly wrong. Matthew Rabin showed why. If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic. Put simply, the small-bet behaviour forces their utility function to become far too concave.

For illustration, let's introduce Neville. Neville is risk averse. He will reject a single 50-50 deal where he gains $55 or loses $50, as long as his capital is below $20 000; only if he were that rich, and felt rich, would he accept the deal. I hope I'm not painting a completely unbelievable portrait of human behaviour here! And yet expected utility maximisation then predicts that if Neville had fifteen thousand dollars ($15 000) in capital, he would reject a 50-50 bet that either lost him fifteen hundred dollars ($1 500) or gained him a hundred and fifty thousand dollars ($150 000) - a ratio of a hundred to one between gains and losses!

To see this, first define the marginal utility at $X (MU($X)) as Neville's utility gain from one extra dollar (in other words, MU($X) = U($(X+1)) - U($X)). Since Neville is risk averse, MU($X) ≥ MU($Y) whenever Y>X. Then we get the following theorem:

  • If Neville has $X and rejects a 50-50 deal where he gains $55 or loses $50, then MU($(X+55)) ≤ (10/11)*MU($(X-50)).

This theorem is a simple result of the fact that U($(X+55))-U($X) must be greater than 55*MU($(X+55)) (each dollar from the Xth up to the (X+54)th must have marginal utility at least MU($(X+55))), while U($X)-U($(X-50)) must be less than 50*MU($(X-50)) (each dollar from the (X-50)th up to the (X-1)th must have marginal utility at most MU($(X-50))). Since Neville rejects the deal, U($X) ≥ 1/2(U($(X+55)) + U($(X-50))), hence U($(X+55))-U($X) ≤ U($X)-U($(X-50)), hence 55*MU($(X+55)) ≤ 50*MU($(X-50)) and the result follows.
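
(For readers who want to check this numerically, here is a minimal sketch in Python. The utility function is hypothetical, chosen only because it is concave and rejects the +$55/-$50 bet; it is not meant to be Neville's actual utility.)

```python
import math

# A hypothetical concave utility: constant absolute risk aversion with a = 0.002,
# chosen only because it rejects the +$55/-$50 bet at a wealth of $15,000.
def U(x):
    return -math.exp(-0.002 * x)

def MU(x):
    """Marginal utility at $x: the utility gain from one extra dollar."""
    return U(x + 1) - U(x)

X = 15000
print(U(X) >= 0.5 * (U(X + 55) + U(X - 50)))   # True: this agent rejects the bet
print(MU(X + 55) <= (10 / 11) * MU(X - 50))    # True: the theorem's bound holds
```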

Hence if we scale Neville's utility so that MU($15000)=1, then applying the theorem repeatedly (Neville rejects the deal at every capital level below $20 000), we know that MU($15105) ≤ (10/11), MU($15210) ≤ (10/11)^2, MU($15315) ≤ (10/11)^3, ... all the way up to MU($19935) = MU($(15000 + 47*105)) ≤ (10/11)^47. Summing the series of MU's from $15000 to $(15000+48*105) = $20040, we can see that

  • U($20040) - U($15000) ≤ 105*(1+(10/11)+(10/11)^2+...+(10/11)^47) = 105*(1-(10/11)^48)/(1-(10/11)) ≈ 1143.
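
(The bound is just a finite geometric series; a quick sketch of the arithmetic in Python:)

```python
# Upper bound on U($20040) - U($15000), with MU($15000) scaled to 1:
# 48 blocks of $105, where block k contributes at most 105 * (10/11)**k.
upper_bound = 105 * sum((10 / 11) ** k for k in range(48))
print(round(upper_bound))   # 1143
```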

One immediate result of that is that Neville, on $15000, will reject a 50-50 chance of losing $1144 versus gaining $5000. But it gets much worse! Let's assume that the bet is a 50-50 bet which involves losing $1500 - how far up do the benefits need to go before Neville will accept this bet? Now the marginal utilities below $15000 are bounded below, just as those above $15000 are bounded above. Since $(15000 - 14*105) = $13530 > $13500 = $(15000 - 1500), summing the series down over fourteen blocks of $105 gives:

  • U($15000) - U($13500) ≥ 105*(1+(11/10)+...+(11/10)^13) = 105*((11/10)^14-1)/((11/10)-1) ≈ 2937.
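
(And the corresponding geometric series for the lower bound, with the ratio reversed:)

```python
# Lower bound on U($15000) - U($13500): at least 14 blocks of $105 below $15000,
# where block k contributes at least 105 * (11/10)**k.
lower_bound = 105 * sum((11 / 10) ** k for k in range(14))
print(round(lower_bound))   # 2937
```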

So gaining $5040 from $15000 will net Neville (at most) 1143 utilons, while losing $1500 will lose him (at least) 2937. The marginal utility for dollars above the 20040th is at most (10/11)^47 < 0.012. So we need to add at least (2937-1143)/0.012 ≈ 149500 extra dollars on top of that $5040 - a total gain of more than $154 000 - before Neville would accept the bet. So, as was said,

  • If Neville had fifteen thousand dollars ($15 000), he would reject a 50-50 bet that either lost him fifteen hundred dollars ($1 500), or gained him a hundred and fifty thousand dollars ($150 000).
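
(A quick numerical check of that claim, combining the two bounds above - again just a sketch of the bounds, not an exact model of Neville:)

```python
# Can a 50-50 bet of -$1,500 / +$150,000 tempt Neville, given the bounds above?
# Winning: the first $5,040 is worth at most ~1143 utilons, and every dollar
# above $20,040 is worth at most (10/11)**47 < 0.012 utilons.
max_gain = 1143 + (150_000 - 5_040) * (10 / 11) ** 47
min_loss = 2937   # losing $1,500 costs at least this many utilons
print(max_gain)              # ≈ 2787
print(max_gain < min_loss)   # True: Neville rejects the bet
```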

These bounds are not sharp - the real situation is worse than that. So expected utility maximisation is not merely a flawed model of human risk aversion on small bets - it's a completely ridiculous one. Other models such as prospect theory do a better job at the descriptive task, though, as usual in the social sciences, they are flawed as well.

Comments

What always gets me in experiments offering gambles is the implicit, unquestioned assumption that it's rational for a subject to assume that the claimed odds of a bet are in fact the actual odds of the bet. That would certainly make the analysis much simpler, but a tempting simplification isn't necessarily an accurate one.

Just because we focus on the likely irrationality of Neville refusing to bet $50 at purportedly even odds against $55, and ignore the similar irrationality of an experimenter offering to bet $55 at even odds against $50, that doesn't mean Neville isn't updating his beliefs based on the experimenter's behavior. If Neville then stubbornly assigns expected probabilities other than .5 and .5 to the bet outcomes, must he be an irrational person who is doomed to forgo a bounty of cash from generous economics researchers, or might he be a rational person who is merely inducting properly from his prior observations of three-card monte tables and extended warranty offers?

Kindly:

When I read "Neville [...] will reject a single 50-50 deal where he gains $55 or loses $50" the first thing I do is ask myself: "Can I imagine myself rejecting a similar 50-50 deal?" Because if I can't imagine that, then the thought experiment can't possibly apply to the way I think about money.

In this case, though, I have no trouble imagining this. I have some reservations about $20000 being the cutoff, but I'm willing to accept that for now to see the math; also, I believe in geometric progressions, so I suspect the cutoff doesn't matter too much.

If hypothetical Neville refused the bet because he suspects it's rigged, that doesn't affect me. When I checked Neville's refusal against my own intuitions, I accepted the 50/50 odds as given. I suppose it's possible that I'm subconsciously being suspicious of the odds, and that is leading me to be risk averse. Is that what you're suggesting?

Subconscious suspicion is one possibility; evolution only cares about your behavior, not so much about how much introspection you did to get there.

It's certainly not the only possibility, though. Another example: Reduce the bet to 55 cents vs 50 cents and I'd imagine refusing it myself, for the obvious reason that the expected gain is grossly less than the transaction costs of stopping to think about the bet, look for possible "catches", flip the coin, and collect any winnings. There's probably other rational reasons to be "bet averse" that I haven't thought of, too.

I have some reservations about $20000 being the cutoff, but I'm willing to accept that for now to see the math; also, I believe in geometric progressions, so I suspect the cutoff doesn't matter too much.

If you remove the cutoff, then Neville will not accept 50-50 odds of losing $1500 or winning any amount of money.

What I meant is, I believe that there's a cutoff, but I'm not sure $20000 is the right value, for me. As I said, I don't think the value of the cutoff is terribly important.

What always gets me in experiments offering gambles is the implictly unquestioned assumption that it's rational for a subject to assume that the claimed odds of a bet are in fact the actual odds of the bet. That would certainly make the analysis much simpler, but a tempting simplification isn't necessarily an accurate one.

Yes. I think I already mentioned that the real reason why I wouldn't take a 50% chance of winning $110 and 50% chance of losing $100 is that if someone is willing to offer such a bet to me, then they most likely know something about the coin to be flipped that I don't. If I were offered such a bet in a way extremely hard to cheat at (say, using random.org), I would happily take it -- but I don't expect anyone to do that, anyway.

gjm:

The following position seems fairly plausible to me. (1) Diminishing marginal utility is the only good strong reason for risk aversion; that is, the only thing that can justify a large difference between the value of $X and half the value of $2X. (2) But there are some good but weak reasons, which can't produce a large difference -- but if X is small then they may justify a fairly substantial relative difference. (3) Something a bit like actual human risk-averse behaviour can be justified by the combination of (1) when large sums are at stake and (2) when small sums are at stake.

(This is very strongly reminiscent of what happens with payday loan companies, which make small short-term loans with charges that are not very large in absolute terms but translate to absolutely horrifying numbers if you convert them to APRs; this isn't only because payday lenders are evil sharks (though they might be) and payday loans are really high risk (though they might be) but also because some of the cost of lending money is more or less independent of the size and duration of the loan. If I lend you $100 for a day and charge $1 for the effort of keeping track of what I'm owed by whom and when, that's an APR of over 3000%, but it's not obviously unreasonable even so: most of that $1 isn't really interest as such.)
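
(To spell out that arithmetic - a rough sketch in Python, assuming the charge compounds daily over a year:)

```python
# $1 charged on a $100 one-day loan is a 1% daily rate.
daily_rate = 1 / 100
apr = (1 + daily_rate) ** 365 - 1   # compounded daily over a year
print(f"{apr:.0%}")                 # ≈ 3678%
```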

Irgy:

So in other words, people's actual behaviour does not fit a (particular) simple mathematical rational model? Why is this surprising to anyone? Non-linear utility is a rationalisation and broad justification of risk aversion, but are there really people who think it's an accurate descriptive model of actual human behaviour? The whole concept of trying to fit a rational model to human behaviour seems pretty optimistic to me.

I also take issue with this quote: "[expected utility maximisation is] a completely ridiculous model of human risk aversion on small bets" You've shown that only by extrapolating behaviour on small bets to behaviour on large bets. To me that's similar to saying "Newtonian mechanics is a completely ridiculous model of ordinary scale physics" by extrapolating its behaviour to relativistic scales. Whether it's a good model on small bets is a function of its behaviour on small bets, not its extrapolated behaviour on large bets. I know I at least would use a completely different mindset on large bets than I would on small anyway, and would make no claim of the two being consistent under any single model.

That said, I agree with the conclusion if not the method. I would if anything be more risk averse on large bets not less. Risk aversion on small bets seems irrational to me in the first place. Utility should be approximately linear at small scales. Then again, I would take the 50-50 chance at $55 over $50 in the first place so maybe I'm not the sort of person you're talking about.

So in other words, people's actual behaviour does not fit a (particular) simple mathematical rational model? Why is this surprising to anyone?

You haven't met many economists, have you? :-)

The key assumption that leads to problems in trying to descriptively model people's decisions is just that people have a single consistent utility function, which is defined in terms of the amount of money that they have.

If someone starts with $18,000 and then gets $40, the assumption is that the benefit can be expressed as U(18,040)-U(18,000). Or, in words, the person thinks: getting $40 brought me from a world where I have $18,000 to a world where I have $18,040. I value a world where I have $18,000 at this amount, and I value a world where I have $18,040 at that amount, and the benefit of getting the $40 is just the difference between those two.

In this model, there is a single curve that you can plot of how much you value a world where you had $X. Think about what that curve would look like, with X ranging from 10,000 to 100,000. In nearly every plausible case, that curve will be close to linear on a small scale (in the range of 10s or 100s) almost everywhere. There may be some curvature to it (perhaps your curve resembles f(x)=log(x)), but if you zoom in then that will mostly go away and a straight line will give you a good fit (e.g., if you are looking at changes in x that are 2 orders of magnitude smaller than x, then a log function will look pretty much linear). In a few special cases, there may be a large sudden jump in the curve, if there is some specific thing that you really want to buy and there is a large benefit to suddenly being able to afford it, but those cases are rare. For the most part, U(x) will be relatively smooth, and it will be growing perceptibly over the whole range (even if your utility function is bounded, it's not like it will be almost at that bound before you even have $100,000).

And if your curve is approximately linear over small scales, then expected utility theory basically reduces to expected value theory when the stakes are small (e.g., a 50% chance of gaining $40 has an EV of $20). If U(x) is close to linear from x=18,000 to x=18,040, then U(18,020) must be about halfway in between U(18,000) and U(18,040). If you have $18,000, and are basing your decisions on a consistent utility function U(x), then for pretty much any plausible U(x) you'll prefer a 51% chance of gaining $40 to a 100% chance of gaining $20 (unless you just happen to have one of those rare big jumps in U(x) between $18,000 and $18,020 - perhaps you really really want something that costs $18,010?). The expected value is 2% higher ($20.40 vs. $20), and it's not plausible that your U(x) would be so sharply curved that you'd be willing to give up 2% EV over such a narrow range of x (it's just a 0.2% increase in x from $18,000 to $18,040).
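
(A quick sketch of that comparison, assuming a logarithmic utility of total wealth purely for illustration:)

```python
import math

wealth = 18_000
U = math.log   # an illustrative smooth, concave utility of total wealth

sure_thing = U(wealth + 20)                          # take the sure $20
gamble = 0.51 * U(wealth + 40) + 0.49 * U(wealth)    # 51% chance of $40, else nothing
print(gamble > sure_thing)   # True: at this scale, even log utility says take the gamble
```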

Probably the most important feature of prospect theory is that it does away with this assumption of a single consistent utility function, and says that people value gambles based on the change from the status quo (or occasionally some other reference point, but we'll ignore that wrinkle here). So people think about the value of gaining $40 as U(+40) - it's whatever I have now plus forty dollars. The gamble in the previous paragraph now involves comparing U(+0), U(+20), and U(+40), rather than U(18,000), U(18,020), and U(18,040). It is no longer true that the scale of the change is small relative to the total amount, because the scale of the change sets the scale. So if there is any nonlinear curvature in your utility function, we can't get rid of it by zooming in to the point where we can use linear approximations, because no matter what we'll be looking at the function from U(+0) to U(+x). The utility function is at its curviest (least linear) near zero (think about log(x), or even sqrt(x)), and every change is defined relative to the status quo U(+0), so the curviest part of the curve is influencing every decision.
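
(And the same choice viewed from the status quo, as prospect theory suggests - a sketch using the conventional Tversky-Kahneman curvature and loss-aversion parameters, and ignoring probability weighting for simplicity:)

```python
def value(change, alpha=0.88, loss_aversion=2.25):
    """A prospect-theory-style value function over changes from the status quo."""
    if change >= 0:
        return change ** alpha
    return -loss_aversion * (-change) ** alpha

sure_thing = value(20)                          # ≈ 13.96
gamble = 0.51 * value(40) + 0.49 * value(0)     # ≈ 13.11
print(gamble > sure_thing)   # False: the curvature near zero now favours the sure $20
```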

An assumption here that needs to be abandoned in order to have an accurate descriptive model of human decision making is that people have a single consistent utility function, which is defined in terms of the amount of money that they have.

That wasn't an assumption to be abandoned, that was the beginning of a proof by contradiction.

No disagreement; that was just sloppy wording on my part. Edited. My comment is basically just repeating the argument in the original post, with less math and different emphasis.

That paper has shown up here before, and I still don't like it.

Basically, the way that he presents his 'aversion' criterion may sound innocuous but it has really pernicious implications. Rabin thinks the pernicious implications mean he's poked a hole in risk aversion - but instead he's just identified an incredibly terrible way to elicit aversion parameters. If any decision analyst was told by their client that they'd turn down a -100/+105 bet with $345,000 in the bank, they'd start a long talk designed to make the client comfortable with the mathematics of decision-making under uncertainty, not take that as a reflectively endorsed preference.

Put another way, I don't think that Neville as stated actually exists (or is sane if he does exist). He might express those preferences under the framing of {.5 -50; .5 +55}, but I don't think he would reflectively endorse them under the framing of {.5 14950, .5 15055}, and real bets may be difficult to separate from emotional or status effects that invalidate the idea of preferences only being a function of wealth level rather than wealth history (which is a very different sort of aversion than utility functions that are concave in money).

Like you say in the conclusion, prospect theory is a better attempt to understand descriptive decision-making, but concave utility functions are a useful prescriptive tool.

This theorem is a simple result of the fact that U($(X+55))-U($X) must be greater than 55*MU($(X+54)) (each dollar up from the Xth up to the (X+54)th must have marginal utility at least MU($(X+55))), while U($X)-U($(X-50)) must be less than 50*MU($(X-100)) (each dollar from the (X-50)th up to (X-1)th must have marginal utility at most MU($(X-50))).

Are you sure you didn't intend to have (X+55) in the formula just after "greater than" instead of (X+54) and (X-50) in the last formula before the final parenthetical instead of (X-100)?

Also I think the formulas would be more readable if you omitted the dollar sign.

Also I think the formulas would be more readable if you omitted the dollar sign.

The best idea IMO is having it only with numbers, e.g. U(X + $54).

Good catch and corrected!

If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic.

Clearly unrealistic? EY (IIRC) once mentioned a study where a largish fraction of respondents explicitly preferred 100% probability of getting $500 to 15% probability of getting $1,000,000.

For that strong claim, I think you'll need to give a reference.

Very interesting (I wish they'd gone into more detail about that particular choice!). Though it doesn't change the fact that diminishing marginal utility doesn't explain betting behaviour for most people; I know enough people who'd reject the 50-50 gamble on +55, -50, but accept the higher gamble.

Aren't humans not so much risk-averse as loss-averse?

Is there a difference, given that there are rather few win-averse people?

You can distinguish the two by offering people choices between a sure $50 and a 50-50 bet paying $0 or $100, and see if their behaviour differs from bets with losses.

Given a choice between losing $50 or a 50% chance of losing $100, a risk averse person loses the $50 and the loss-averse person takes the 50% chance of losing $100.

Given a choice between gaining $50 or a 50% chance of gaining $100, a risk averse person chooses to gain the $50 and the loss-averse person doesn't care which option he gets.

Thank you! I've heard this argument vaguely alluded to before, so I'm very happy to see a post about it. I'm still not sure what I think about it, though, because decreasing marginal utility felt like it was the only good reason to be risk averse. So how am I supposed to model myself now?

gjm:

If decreasing marginal utility is the only good reason to be risk averse but you're more risk averse than it can justify, then you should (1) model yourself in some empirical way that gives a reasonable description of your behaviours (you probably have such a model already, albeit implicit) and (2) try to be less risk averse.

Real people are risk averse not only because of decreasing marginal utility, but also because they see "I had the choice to refuse a bet but did not take that choice, and consequently I lost money through my own choice," as an additional bad thing distinct from losing money.