I think this kind of loss aversion is entirely sensible for IRL betting, but makes no sense for the platonic ideal betting.
Transaction costs exist. How big those costs are depends on circumstances. For example, if you need to exchange money into or out of USD, then the transaction costs are larger.
It's also not clear that this bet isn't a scam of some kind, especially as there is no clear reason for a non-scammer to offer this bet.
This is much the same as the "doctor cutting up a healthy patient for organs" moral dilemma, where the question asserts there is zero chance of getting caught, but human intuitions are calibrated to a non-zero chance of getting caught.
Human intuition automatically assumes things aren't frictionless spheres in a vacuum, even when the question asserts that they are.
You can't just tell people that this is definitely not a scam and expect them to say what they would do if they somehow gained 100% certainty it wasn't one. Human intuitions are adjusted for the presence of scams. The calculations our minds are running don't have a "pretend scams don't exist" option.
It's also not clear that this bet isn't a scam of some kind, especially as there is no clear reason for a non-scammer to offer this bet.
The standard scenario would be like this:
I send you a check for $110. Then we arrange a Zoom meeting, and you flip a fair coin in front of the camera. Heads, you keep the money. Tails, you send $210 to a specified Bitcoin address.
Either way, the check bounces.
And the coin flip is prerecorded, with the invisible cut hidden in a few moments of lag.
And this also adds the general hassle of arranging a Zoom meeting, being online at the right time, and cashing the check.
A college student might have a budget for the month. If they lose $100 out of a monthly budget of $300 that they usually need, the pain of getting by on $200 might be bigger than the gain from having a $410 budget instead of the usual $300.
If, however, they are offered the $1000 version of the bet, it makes sense to somehow get a loan for the $1000, given the large potential upside.
Just because the utility function that models this is more complex than the one you assume doesn't mean that it's not a utility function.
While this is all good speculation, you could also look at the experimental design in the linked paper :)
Your post makes the claim
If you rejected the first bet and accepted the second bet, just that[4] is enough to rule you out from having any[5] utility function consistent with your decisions.[6]
I argue that the claim is false. Even if there's a more complex claim in the linked paper that is true, that doesn't change the fact that the claim you make at the beginning is false.
These results hold only if you assume risk aversion is entirely explained by a concave utility function; if you don't assume that, then the surprising constraints on your preferences don't apply.
IIRC that's the whole point of the paper - not that utility functions are in fact constrained in this way (they're not), but that if you assume risk aversion can only come from diminishing marginal value of money (as many economists do), then you end up in weird places, so maybe you should rethink that.
The actual determinant here is whether or not you enjoy gambling.
Person A, who regularly goes to a casino and bets 100 bucks on roulette for the fun of it, will obviously go for bet 1. In addition to the expected 5 bucks of profit, they get the extra fun of gambling, making it a no-brainer. Similarly, bet 2 is a no-brainer.
Person B, who hates gambling and gets super upset when they lose, will probably reject bet 1. The expected profit of 5 bucks is outweighed by the emotional cost of gambling, a thing that upsets them.
When it comes to bet 2, person B still hates gambling, but the expected profit is so ridiculously high that it exceeds the emotional cost of gambling, so they take the bet.
Nobody is necessarily being actually irrational here, when you account for non-monetary costs.
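To put rough numbers on that (purely illustrative): model person B's distaste for gambling as a fixed emotional cost C, in dollar-equivalents, paid whenever they take a bet. Then:
EV(bet 1) − C = 0.5⋅($110) + 0.5⋅(−$100) − C = $5 − C, which is negative for any C > $5
EV(bet 2) − C = 0.5⋅($1,000,000,000) + 0.5⋅(−$1000) − C = $499,999,500 − C, which is positive for any C < $499,999,500
Any fixed distaste between $5 and roughly half a billion dollars reproduces exactly the reject-then-accept pattern.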
I think that what is going on is roughly something like this. People know that gambling is bad: it can lead to addiction, and people who engage heavily in lots of gambling can mess up their lives.
So, a useful heuristic is "don't gamble", which is probably what most young people are either explicitly or implicitly told as children. So the first experiment is really measuring whether $5 in expected winnings is enough to overcome people's trained aversion to casinos and gambling. You may as well be offering them $5 in exchange for trying a cigarette; I think that is the closest analogy. By accepting this bet they are adopting a policy which says that sometimes gambling is OK, and they recognise that this policy is potentially dangerous, and that changing a general life policy, with all the due diligence that should rightly be afforded to that, is worth less than an expected $5.
When they see the second bet, the potential winnings are so huge that they swamp that heuristic. Even if they value the general heuristic at $1000, they still take the bet. And besides, this is such an outlier of a bet that it doesn't have to imply a general change in policy. And if they win, they are not exactly going to need to ever gamble again anyway.
If I am right there are probably ways of re-structuring the game that would make it look less like gambling, and those tricks would significantly change people's behaviour.
I get a strong "our physical model says that spherical cows can move with way less energy by just rolling, thereby proving that real cows are stupid when deciding to walk" vibe here.
Loss aversion is real, and is not especially irrational. It’s simply that your model is way too simplistic to properly take it into account.
If I have $100 lying around, I am just not going to keep it around "just in case some psychology researcher offers me a bet". I am going to throw it into roughly 3 baskets of money: spending, savings, and emergency fund. The policy of the emergency fund is "as small as possible, but not smaller". In other words: adding to the balance of that emergency fund is low added util, but taking from it is high (negative) util.
The loss from an unexpected bet is going to mostly be taken from the emergency fund (because I can't take back previous spending, and I can't easily take from my savings). On the positive side (gain), any gain will be put into spending or savings.
So the "ratio" you’re measuring is not a sample from a smooth, static "utility of global wealth". I am constantly adjusting my wealth assignment such that, by design and constraint, yes, the disutility of loss is brutal. If I weren’t, I would just be leaving util lying on the ground, so to speak (I could spend or save).
You want to model this?
Ignore spending. Start with a utility function of the form U(W_savings, W_emergency_fund). Notice that dU/dW_emergency_fund is large on the left (dipping into the fund hurts a lot). Notice that your bet is 1/2 U(W_savings + 110, W_emergency_fund) + 1/2 U(W_savings, W_emergency_fund - 100).
I have not tested, but I'm ready to bet (heh!) that it is relatively trivial to construct a reasonable utility function that says no to the first bet and yes to the second if you follow this model and those assumptions about the utility function.
(There is a slight difficulty here: assuming that my current emergency fund is at its target level, revealed preference shows that obviously dU/dW_savings > dU/dW_emergency_fund. An economist would say that, obviously, U is maximized where dU/dW_savings = dU/dW_emergency_fund.)
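For instance, here's a minimal sketch with one arbitrary choice of functional form and constants (all my own assumptions, nothing forced by the model):

```python
# A minimal sketch: utility is roughly linear in savings, but dipping below
# the emergency fund's target level is penalized steeply. All constants here
# are illustrative assumptions.
EF_TARGET = 1000.0  # assumed target level of the emergency fund, in dollars

def U(w_savings, w_emergency):
    shortfall = max(0.0, EF_TARGET - w_emergency)
    return 0.001 * w_savings - 0.05 * shortfall  # slope is 50x steeper below target

def bet_eu(w_s, w_e, gain, loss):
    # Gains go into savings; losses come out of the emergency fund.
    return 0.5 * U(w_s + gain, w_e) + 0.5 * U(w_s, w_e - loss)

w_s, w_e = 5000.0, EF_TARGET  # fund sitting exactly at its target
print(bet_eu(w_s, w_e, 110, 100) > U(w_s, w_e))             # False: reject bet 1
print(bet_eu(w_s, w_e, 1_000_000_000, 1000) > U(w_s, w_e))  # True: accept bet 2
```

If I've set this up right, this U is even concave and continuous; it just isn't a function of total wealth alone, which is the form the paradox's calculation assumes.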
Yeah. See also Stuart's post:
Expected utility maximization is an excellent prescriptive decision theory... However, it is completely wrong as a descriptive theory of how humans behave... Matthew Rabin showed why. If people are consistently slightly risk averse on small bets and expected utility theory is approximately correct, then they have to be massively, stupidly risk averse on larger bets, in ways that are clearly unrealistic. Put simply, the small bets behavior forces their utility to become far too concave.
There are ontological assumptions built into the question. It is assumed that "utility" is f(money owned), for some increasing function f, and that an "action" is a single monetary transaction.
Someone trying to maximise their long-term growth of wealth has no such function f. The material circumstances that their utility (if they have one) is a function of are not their current bankroll. Neither are their actions single transactions; they are long-term policies. That they have no "utility function" in the terms originally posed is not evidence of irrationality, but evidence that if they do have any sort of utility function, its domain is not that envisaged in the statement of the paradox. So also their repertoire of "actions".
Maximising long-term growth leads in ideal circumstances to Kelly betting, and in non-ideal (i.e. real) circumstances to something more conservative. Zvi recommends 25 to 50% of the Kelly bet.
This, it seems to me, dissolves the paradox. We need not say that "loss aversion ... makes no sense for the platonic ideal betting.". Nor need we reject considering the ideal, scam-free situation (even while always considering one's counterparties' probity and reliability when acting in the real world). Nor need we feel compelled by a clever argument to turn down vast but only 50% certain possibilities. We need only bet no more than half of our bankroll on them.
plus rejecting the first bet even if your total wealth was somewhat different
This assumption directly contradicts Kelly betting. The Kelly bettor will accept the win $110/lose $100 bet if their wealth is at least $2200, and not otherwise. Someone more conservative than to bet the full Kelly will require a correspondingly larger bankroll before playing.
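Sketching that arithmetic (my code; I'm reading "not otherwise" as the strict rule of never staking more than the Kelly fraction):

```python
# Where a Kelly bettor starts accepting a fixed $100 stake to win $110 on a
# fair coin flip. For a bet paying b per unit staked at win probability p,
# the Kelly fraction is f* = p - (1 - p) / b.
from math import log

p, win, loss = 0.5, 110.0, 100.0
b = win / loss                # payout ratio: 1.1
f_star = p - (1 - p) / b      # = 1/22, about 4.5% of bankroll
print(loss / f_star)          # 2200.0: the bankroll at which a $100 stake
                              # is exactly the full Kelly bet

# For comparison, a bare log-utility agent already accepts above $1100:
for W in (1000, 1200, 2200):
    print(W, 0.5 * log(W + win) + 0.5 * log(W - loss) > log(W))
```

So the $2200 threshold is where the fixed stake stops exceeding the Kelly fraction; positive expected log-growth alone would already kick in at $1100.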
When this paradox gets talked about, people rarely bring up the caveat that to make the math nice you're supposed to keep rejecting this first bet over a potentially broad range of wealth.
This is exactly the first thing I bring up when people talk about this.
But counter-caveat: you don't actually need a range of $1,000,000,000. Betting $1000 against $5000, or $1000 against $10,000, still sounds appealing, but the benefit of the winnings is squished against the ceiling of seven hundred and sixty-nine utilons all the same. The logic doesn't require that the trend continues forever.
I don't think so? The 769 limit is coming from never accepting the 100/110 bet at ANY wealth, which is a silly assumption.
Suppose you'll only reject the bet when your net worth is under $20,000, and you'll accept it above. Can you see why, if you have a utility function, it's still implied that up to $20,000, the positive dollars are worth less than 10/11 the negative dollars?
And then once you have that, does it make sense that the marginal utility of money is going down (at least) exponentially up to $20,000?
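(To put a number on "at least exponentially": using the bound of e^(−0.0013w) derived in the post, rejecting the bet everywhere below $20,000 implies the marginal utility of a dollar falls by a factor of about e^(−0.0013⋅20,000) = e^(−26) ≈ 5⋅10^(−12) across that span.)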
The assumption that the marginal utility of wealth decreases exponentially doesn't seem justified to me. Why not some other positive-but-decreasing function, such as 1/W (which yields a logarithmic utility function)?
What properties does the utility function need to have for this result to generalize, and are those properties reasonable to assume?
1/W is totally fine! If that was your utility function you'd reject the bet at low wealth and accept it at high wealth.
The exponentially decreasing thing is just a bound - on the domain where you reject the bet, your marginal utility of money will be decreasing faster than e^(−0.0013w).
Sure, but that doesn't imply that your marginal utility of money decreases that fast outside that domain.
The claim is false.
Suppose we're in a universe where a fixed 99% of "odds in your favour" bets are scams where I always lose (even if we accept the proposal that the coin is actually fair). This isn't reflective of the world we're actually in, but it's certainly consistent with some utility function. We can even assume that money has linear utility if you like.
Then I should reject the first bet and accept the second.
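One way to run the numbers under that assumption (treating "scam" as simply losing your stake):
EV(bet 1) = 0.01⋅(0.5⋅$110 − 0.5⋅$100) + 0.99⋅(−$100) = $0.05 − $99 ≈ −$99: reject.
EV(bet 2) = 0.01⋅(0.5⋅$1,000,000,000 − 0.5⋅$1000) + 0.99⋅(−$1000) ≈ $5,000,000 − $990: accept.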
Yes to both, easy, but that's because I can afford to risk $100. A lot of people can't nowadays. "plus rejecting the first bet even if your total wealth was somewhat different" is doing a lot of heavy lifting here.
I think there are some interesting things in, for example, analysing how large of a pot you should enter if you're a professional poker player, based on your current spendable wealth. I think the general theory is to not go above 1/100th, and so it may actually be rational for the undergraduates not to want to take the first option.
Here's a Taleb (love him or hate him) video on how that comes about: https://youtu.be/91IOwS0gf3g?si=rmUoS55XvUqTzIM5
Kelly criterion arguments implicitly slip in just the sort of "population ethics over future selves" reasoning I mentioned - treating your future selves as a sort of population, you don't just want that population to have a high mean winnings driven by a few outliers, you want most of that population to be well off even if it means lower average earnings.
Also, I have accidentally tricked you, sorry - the $100 example is from the 2000 paper and seems more intuitive to me, so I used it, but the paper trying this on undergrads used $10 and $11. For students to be worried because of Kelly considerations, their bankroll would have to be on the order of $250.
you want most of that population to be well off even if it means lower average earnings.
This also comes up in the paradox pointed out by Ole Peters, that if offered a repeated 50% chance to either increase your bankroll by 50% or decrease it by 40%, if you chase the expected money then you almost certainly lose almost everything. Your possible enormous profit is concentrated into a smaller and smaller sliver of the space of possible outcomes.
How would you bet?
ETA: I see that Taleb mentions Peters (favourably) near the end of the video linked in Hallgren's comment above.
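A quick simulation of that dynamic (my own sketch, with the multipliers from the comment above):

```python
# Repeated 50/50 bets that multiply your bankroll by 1.5 (win) or 0.6 (lose).
# The expected value grows 1.05x per round, but the time-average growth rate
# is sqrt(1.5 * 0.6) ~= 0.95x per round, so the typical outcome shrinks.
import random

random.seed(0)
rounds, trials = 100, 100_000
finals = []
for _ in range(trials):
    bankroll = 1.0
    for _ in range(rounds):
        bankroll *= 1.5 if random.random() < 0.5 else 0.6
    finals.append(bankroll)

finals.sort()
print(sum(finals) / trials)   # mean: huge, driven by a sliver of outliers
print(finals[trials // 2])    # median: a tiny fraction of the starting $1
```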
I honestly don't understand what high multiples of my current utility would look like. So barring a better understanding of how my preferences and the world interact I'd have to pass (or claim that the game is confused) even if the game was advertised as playing for utilons and not just money.
In the very beginning of the post, I read: "Quick psychology experiment". Then, I read: "Right now, if I offered you a bet ...". Because of this, I thought about a potential real life situation, not a platonic ideal situation, that the author is offering me this bet. I declined both bets. Not because they are bad bets in an abstract world, but because I don't trust the author in the first bet and I trust them even less in the second bet.
If you rejected the first bet and accepted the second bet, just that is enough to rule you out from having any utility function consistent with your decisions.
Under this interpretation, no, it doesn't.
Could you, the author, please modify the thought experiment to indicate that it is assumed that I completely trust the one who is proposing the bet to me? And, maybe discuss other caveats too. Or just say that it's Omega who's offering me the bet.
Sure. In fact, it might be good if I included a footnote describing the experimental design of the experiment on undergrads anyway.
I like how Jason Collins frames it:
Consider the following claim:
We don’t need loss aversion to explain a person’s decision to reject a 50:50 bet to win $110 or lose $100. That is just risk aversion as in expected utility theory.
Rabin’s argument starts with a simple bet: suppose you are offered a 50:50 bet to win $110 or lose $100, and you turn it down. Suppose further that you would reject this bet no matter what your wealth (this is an assumption we will turn to in more detail later). What can you infer about your response to other bets?
I would have highlighted the detail about needing to reject the bet at any wealth level for the argument to apply. I believe that making it a footnote was a mistake which makes the rest of the post much harder to follow because it's very easy to miss a key underlying assumption.
You do not need to reject the bet at every wealth level. The point is that over the domain that you do reject the bet, your (hypothetical) marginal utility of money would be decreasing at least as fast as the derived exponential (on average).
Could that domain not just be really small, such that the ratio of outcomes you'd accept the bet at gets closer and closer to 1? It seems like the premise that the discounting rate stays constant over a large interval (so we get the extreme effects from exponential discounting) is doing the work in your argument, but I don't see how it's substantiated.
Yeah, this is a good point. In the mathematical argument it simply has to be assumed as an input that the response is the same over at least a several-thousand-dollar span. But does that seem to bear out in the data about real humans? I think so. If you have a bunch of people who exhibit similar apparent risk aversion, spread out over a variety of wealths and dispositions, it seems like it would be a miracle for them to all be just below the level of wealth where they'd change their minds.
Yeah, I guess you can get away with a weaker assumption. But it's an important enough assumption that it should be stated.
That $769 number might be more relevant than you expect for college undergrads participating in weird psychology research studies for $10 or $25, depending on the study.
There are so many side-effects this overlooks. Winning $110 complicates my taxes by more than $5. In fact, once gambling winnings taxes are considered, the first bet will likely have a negative EV!
Real life translations:
Expected value = That thing that never happens to me unless it is a bad outcome
Loss aversion = It's just the first week of this month and I have already lost 12 arguments, been ripped off twice and have gotten 0 likes on Tinder
A fair coin flip = Life is not fair
Utility function = Static noise
Quick psychology experiment
Right now, if I offered you a bet[1] that was a fair coin flip, on tails you give me $100, heads I give you $110, would you take it?
Got an answer? Good.
Hover over the spoiler to see what other people think:
About 90% of undergrads will reject this bet[2].
Second part now, if I offered you a bet that was a fair coin flip, on tails you give me $1000, on heads I give you $1,000,000,000, would you take it?
Got an answer?
Hover over the spoiler to reveal Rabin's paradox[3]:
If you rejected the first bet and accepted the second bet, just that[4] is enough to rule you out from having any[5] utility function consistent with your decisions.[6]
What? How?
The general sketch is to suppose there was some utility function that you could have (with the requisite nice properties), and show that if you reject the first bet (and would keep rejecting it within a couple-thousand-dollar domain), you must have an extreme discount rate when the betting amounts are extrapolated out.
If you reject the first bet, then the expected utility (hypothesizing some utility function U) of engaging in the bet is less than the status quo: 0.5⋅U(W+$110) + 0.5⋅U(W-$100) < U(W). In other words, the positive dollars are worth, on average, less than 10/11 as much as the negative dollars.
But if you keep rejecting this bet over a broad range of possible starting wealth W, then over every +$110/-$100 interval in that range the positive dollars are worth less than 10/11 the negative dollars. If every time you move up an interval you lose a constant fraction of value, that's exponential discounting.
How to turn this into a numerical answer? Well, just do calculus on the guess that each marginal dollar is worth exponentially less than the last on average.
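One way to run that calculus, consistent with the numbers below: suppose the marginal utility of the w-th dollar is e^(−0.0013w) utilons (the decay rate quoted later in the post), normalized so the first dollar is worth about 1. Integrating gives U(W) = (1 − e^(−0.0013W))/0.0013, which can never exceed 1/0.0013 ≈ 769 utilons, and which gives U(−$1000) = (1 − e^(1.3))/0.0013 ≈ −2050 utilons.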
Some numbers, given this model:
The benefit from gaining the first $1 is about 1 utilon.
The maximum benefit from gaining $1,000,000,000 is a mere seven hundred and sixty-nine utilons. This is also the modeled benefit from gaining infinity dollars, because exponential discounting.
The minimum detriment from losing $1000 is over two thousand utilons.
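(A quick numeric check of those three figures under the model above:)

```python
# Check the figures above with U(W) = (1 - exp(-k*W)) / k and the post's
# decay rate k = 0.0013, normalized so the first dollar is ~1 utilon.
from math import exp

k = 0.0013
U = lambda w: (1 - exp(-k * w)) / k

print(U(1))              # ~1.0: gaining the first dollar
print(1 / k)             # ~769.2: the ceiling, i.e. "infinity dollars"
print(U(1_000_000_000))  # ~769.2: $1e9 is pressed up against the ceiling
print(U(-1000))          # ~-2053: losing $1000 costs over 2000 utilons
```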
So a discount of 100/110 over a span of $210 seems to imply that there is no amount of positive money you should accept against a loss of $1000.
Caveat and counter-caveat
When this paradox gets talked about, people rarely bring up the caveat that to make the math nice you're supposed to keep rejecting this first bet over a potentially broad range of wealth. What if you change your ways and start accepting the bet after you make your first $100,000? Then the utility you assign to infinite money could be modeled as unbounded.
This suggests an important psychology experiment: test multi-millionaires for loss aversion in small bets.
But counter-caveat: you don't actually need to refuse the bet up to $1,000,000,000. The mathematical argument above says that on the domain where you reject the bet, your hypothetical marginal utility of money would be decreasing at least as fast on average as e^(−0.0013w). If you start accepting the bet once you have $100,000, then what we infer is that this hypothetical maximum average marginal utility decreases exponentially up to $100,000.
We even know a little bit more: above $100,000 the marginal utility of money is still bounded to be less than what it was at $100,000 (assuming there's no special amount of money where money suddenly becomes more valuable to you). If your starting wealth was, say, $50,000, then the exponential decay up to $100,000 has already shrunk the marginal utility of money past that point by a factor of 5⋅10^(−29)!
So the paradox still works perfectly well if you'll only reject the first bet until you have $5000 or $10,000 more. Betting $1000 against $5000, or $1000 against $10,000, still sounds appealing, but the benefit of the winnings is squished against the ceiling of seven hundred and sixty-nine utilons all the same. The logic doesn't require that the trend continues forever.
The fact of the matter is that not accepting the bet of $100 against $110 is the sort of thing homo economicus would do only if they were nigh-starving and losing another $769 or so would completely ruin them. When real non-starving undergrads refuse the bet, they're exhibiting loss aversion and it shouldn't be too surprising that you can find a contrasting bet that will show that they're not following a utility function.
Is loss aversion bad?
One can make a defense of loss aversion as a sort of "population ethics of your future selves." Just as you're allowed to want a future for humanity that doesn't strictly maximize the sum of each human's revealed preferences (you might value justice, or diversity, or beauty to an external observer), you're also "allowed" to want a future for your probabilistically-distributed self that doesn't strictly maximize expected value.
But that said... c'mon. Most loss aversion is not worth twisting yourself up in knots to protect. It's intuitive to refuse to risk $100 on a slightly-positive bet. But we're allowed to have intuitions that are wrong.
If you're curious about the experimental details, suppose that you've signed up for a psychology experiment, and at the start of it I've given you $100, had you play a game to distract you so you feel like the money is yours, and then asked you if you want to bet that money.
Bleichrodt et al. (2017) (albeit for smaller stakes and with a few extra wrinkles to the experimental design)
Which, as is par for the course with names, was probably first mentioned by Arrow, as Rabin notes.
plus rejecting the first bet even if your total wealth was somewhat different
(concave, continuous, state-based)
Rabin (2000)