You cannot feed a probability of 1 into a utility function. It makes no sense.
I think "U(Certainty)" was meant to be shorthand for U(feeling of certainty). Otherwise - well said.
Hmmm.... I thought the point of your article at http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/ was that the difference between 1 and .99 was indeed much larger than, say, .48 and .49.
Anyway, what if we try this one on for size: let's say you are going to play a hand of Texas Hold 'em and you can choose one of the following three hands (none of them are suited): AK, JT, or 22. If we say that hand X > Y if hand X will win against hand Y more than 50% of the time, then AK > JT > 22 > AK > JT... and so on. So in this case couldn't one choose rationally and yet still be a "money pump"?
No, because you don't want to switch to a hand that will beat the one you have, you want to switch to one that's more likely to beat your opponent's (unknown, fixed) hand. That's necessarily transitive.
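A quick way to see it (a sketch with invented equity numbers): once the opponent's hand is fixed, each candidate hand reduces to a single win probability, and an ordering by a single real number cannot cycle.

```python
# Sketch: against one fixed opponent hand, each candidate hand reduces to a
# single win probability, and an ordering of real numbers is transitive.
# The equity numbers below are invented for illustration.
p_win = {"AK": 0.46, "JT": 0.38, "22": 0.51}   # hypothetical equities vs. a fixed hand

best_first = sorted(p_win, key=p_win.get, reverse=True)
print(best_first)   # e.g. ['22', 'AK', 'JT'] -- a strict ranking, no cycle possible
```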
Okay, just one more question, Eliezer: when are you going to sit down and condense your work at Overcoming Bias into a reasonably compact New York Times bestseller?
"Hmmm.... I thought the point of your article at http://lesswrong.com/lw/mp/0_and_1_are_not_probabilities/ was that the difference between 1 and .99 was indeed much larger than, say, .48 and .49."
Heh, I wondered if someone would bring that up.
You have to use the right distance measure for the right purpose. The coherence proofs on Bayes's Theorem show that if you want the distance between probabilities to equal the amount of evidence required to shift between them, you have no choice but to use the log odds.
What the coherence proofs for the expected utility equation show is more subtle. Roughly, the "distance" between probabilities corresponds to the amount of one outcome-shift that you need to compensate for another outcome-shift. If one unit of probability goes from an outcome of "current wealth + $24,000" to an outcome of "current wealth", how many units of probability shifting from "current + $24K" to "current + $27K" do you need to make up for that? What the coherence proofs for expected utility show, and the point of the Allais paradox, is that the invariant measure of distance between probabilities for this purpose is the usual measure between 0 and 1. That is, the distances between ~0 and 0.01, between 0.33 and 0.34, and between 0.99 and ~1 are all the same.
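To illustrate the contrast (a sketch, with arbitrary example probabilities): the flat measure treats the three gaps above as equal, while the log-odds measure appropriate for weighing evidence explodes near the ends of the scale.

```python
import math

def log_odds(p):
    """Log odds: the right metric for 'amount of evidence needed to shift p to q'."""
    return math.log(p / (1 - p))

for p, q in [(0.001, 0.01), (0.33, 0.34), (0.99, 0.999)]:
    print(f"{p} -> {q}: flat distance {q - p:.3f}, "
          f"evidence distance {log_odds(q) - log_odds(p):.2f}")
# The flat distances are all ~0.01; the evidence distances are ~2.3, ~0.04, ~2.3.
```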
You've got to use the right probability metric to preserve the right invariance relative to the right transformation.
Otherwise, shifting you in time (by giving you more information, for example about the roll of a die) will shift your perceived distances, and your preferences will switch, turning you into a money pump.
Neat, huh?
"Okay, just one more question, Eliezer: when are you going to sit down and condense your work at Overcoming Bias into a reasonably compact New York Times bestseller?"
The key word is compact. It's a funny thing, but I have to write all these extra details on the blog before I can leave them out of the book. Otherwise, they'll burst out into the text and get in the way.
So the answer is "not yet" - there are still too many things I would be tempted to say in the book, if I didn't say them here.
"What the coherence proofs for expected utility show, and the point of the Allais paradox, is that the invariant measure of distance between probabilities for this purpose is the usual measure between 0 and 1. That is, the distance between ~0 and 0.01, or 0.33 and 0.34, or 0.99 and ~1, are all the same distance."
In this example. If it had been the difference between .99 and 1, rather than 33/34 and 1, then under normal utility of money functions, it would be reasonable to prefer A in the one case and B in the other. But that difference can't be duplicated by the money pump you choose. The ratios of probability are what matter for this. 33/34 to 1 is the same ratio as .33 to .34.
So it turns out that log odds is the right answer here also. If the difference in the log odds is the same, then the bet is essentially the same.
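A sketch of the ratio point, assuming a log utility and a hypothetical $50,000 of current wealth: the common 34% factor in situation 2 scales both expected utilities equally, so it can never flip the comparison.

```python
# Sketch: the common 34% "survive the die roll" factor scales both options
# equally, so situations 1 and 2 must yield the same preference under expected
# utility. U is a stand-in log utility; the $50,000 wealth is hypothetical.
import math

def U(x, wealth=50_000):
    return math.log(wealth + x)

eu_1A = U(24_000)
eu_1B = (33/34) * U(27_000) + (1/34) * U(0)
eu_2A = 0.34 * U(24_000) + 0.66 * U(0)
eu_2B = 0.33 * U(27_000) + 0.67 * U(0)

# Algebraically, eu_2A - eu_2B == 0.34 * (eu_1A - eu_1B),
# so the two comparisons agree for every choice of U.
print(eu_1A > eu_1B, eu_2A > eu_2B)   # same verdict in both situations
```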
James Bach, your point and EY's are not incompatible: it is a given that what you care about and give importance to is subjective and irrational; however, having chosen what outcomes you care about, your best road to achieving them must be Bayesian... perhaps. My problem with this whole Bayesian kick is that it reminds me of putting three masts and a full set of square-rigged sails on what is basically a canoe: the masts and sails are the Bayesian edifice, the canoe is our useful knowledge in any given real life situation.
tcpkac - that's what they said to Columbus.
The circular money pump kept bringing M C Escher illustrations to my mind - the never-ending staircase in particular. This post cleared up a lot of what I didn't take in yesterday - thanks for taking the time.
There is no special rational basis for claiming that when lives are at stake, it's especially important to be rational.
James - the reason 'lives at stake' comes up in examples is because the value we place on human life tends to dwarf everything else. Just because that value is enormous doesn't mean it's incalculable. Considering lives is the best way to force ourselves to think as economically as we can - more so than money (many people are rich already). It may give us a pang of cold, inhuman logicality to sit down with a piece of paper and a pen to work out the best way to save lives, but that's the game.
Just a quick correction-
~~Experimental subjects~~ Experimenters tend to ~~defend~~ attack incoherent preferences even when they're ~~really silly~~ strongly held.
;->
I guess what I'd like to know is whether you are a) trying to figure out what people do, or b) trying to predict outcomes and then tell people what to do? Despite my slightly snarky tone, as a curious outsider I really am curious as to which you take to be your goal. Coming from a science-y background, I can totally understand b), but life has shown me plenty of instances of people acting contrary to b)'s predictions.
Maybe the reason we tend to choose bet 2 over bet 1 (before computing the actual expected winnings) is not the higher probability of winning, but the smaller sum we can lose (either the expected loss or the worst-case loss; I'm not sure which). So the bias here could be more something along the lines of status quo bias or the endowment effect than a need for certainty.
I can only speak for myself, but I do not intuitively value certainty/high probability of winning, while I am biased towards avoiding losses.
"tcpkac - that's what they said to Columbus."
Columbus was an idiot who screwed up his reasoning, didn't find what he was looking for, was saved only by the existence of something entirely different, and died without ever figuring out that what he found wasn't what he sought.
Hear, hear!
Although I understand he had his coat of arms, which featured some islands, updated to feature a continent, which may suggest he figured it out at some point. Didn't do him much good though - you left out the bit where he tortured the natives on the island he was governor of when they couldn't give him gold, and got himself fired and shipped back to Spain.
I don't think the possibility of a money-pump is always a knock-down reductio. It really only makes my preferences seem foolish in the long run. But there isn't a long run here: it's a once-in-a-lifetime deal. If you told me that you would make the same offer to me thousands of times, I would of course do the clean math that you suggest.
Suppose you are deathly thirsty, have only $1 in your pocket, and find yourself facing two bottled-water machines: The first would dispense a bottle with certainty for the full dollar, and the second would do so with a probability and price such that "clean math" suggests it is the slightly more rational choice. Etc.
The rational choice would be the one that results in the highest expected utility. In this case, it wouldn't necessarily be the one with the highest expected amount of water. This is because the first bottle of water is worth far more than the second.
The amount of money you make over your lifetime dwarfs the amount you make in these examples. The expected utility of the money isn't going to change much.
It seems hard to believe that the option of going from B to C and then from C to A would change whether or not it's a good idea. After all, you can always go from A to B and then refuse to change. Then there'd be no long run. Of course, once you've done that, you might as well go from B to C and stop there, etc.
On second thought, strike my second paragraph (in the 10:08 am comment).
I shouldn't have tinkered with the (well-known?) example I first heard. It's a coke machine on a military base that introduced a price increase by dispensing cokes with a lower probability rather than with a nominal price increase. To the soldiers that live and work there, the machine is equivalent to one with a nominal price increase. The philosopher's question is whether this machine is fair to someone who is just passing through, someone for whom there is no long run.
What I mean to suggest by the example is that the chance deals you offer people do not have a long-run, and your scheme of rational choice is hard to justify without a long-run, no? Can you say something about this, Eliezer?
"To the soldiers that live and work there, the machine is equivalent to one with a nominal price increase."
No, it isn't. It's possible that random happenstance could deny some people more cans than others. The set of outcomes only becomes equivalent to raising the price as the number of uses increases to infinity - and that's never assumable in the real world.
It's not fair to the people that live and work there, either.
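A toy simulation of that convergence point (the dispense probability and trial counts are made up):

```python
# Sketch: a machine that dispenses with probability 0.8 (an effective 25% price
# hike) only behaves like a nominal price increase in the long run. The
# probability and trial counts here are hypothetical.
import random

random.seed(0)
p = 0.8
for n in (1, 10, 10_000):        # a passer-by, a visitor, a resident
    got = sum(random.random() < p for _ in range(n))
    print(f"paid for {n}, received {got} ({got / n:.0%})")
```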
"Not yet"
So it's going to happen eventually! Yay!
Back on topic, I second Lee's thoughts. My ability to do a simple expected utility calculation would prevent me from ever taking option (1) given a choice between the two, but if I badly needed the $4 for something I might take it. (Initially hard to think of how such a scenario could arise, but $4 might be enough to contact someone who would be willing to bail me out of whatever trouble I was in.)
Dagon made a point about the social importance of guarantees. If a promise is broken, you know you have been cheated. If you are persuaded that there is only a 10% chance of losing your investment and you are unlucky, what do you know?
I doubt that we can overcome our biases by focusing on what they are bad for and how they hurt us. We also need to think about what they are good for and why we have them.
Long run? What? Which exactly equivalent random events are you going to experience more than once? And if the events are only really close to equivalent, how do you justify saying that 30 one-time shots at completely different ways of gaining 1 utility unit is a fundamentally different thing than a nearly-exactly-repeated game where you have 30 chances to gain 1 utility unit each time?
There is nothing irrational about choosing 1A over 1B or choosing 2B over 2A. Combining the two into a single scheme, or iterating the choice, creates a totally different situation from the original proposition.
Too much research on cognition, especially biases, tends to infer too much from simplified experiments. True, in this case many people slip into a money pump situation easily, but the original proposition does not require that to occur.
Context is king.
This sort of dilemma depends on context. Some may have been cheated in the past, so certainty is valuable to them. Others may need exactly $24,000, and others may need exactly $27,000 for a larger (higher utility) purpose. Others may have different risk tolerance.
You may argue that, given only this decision and no outside influences, a person would be irrational to choose a particular way. Unfortunately, you will never find a reasoning being without context.
This is exactly the Bayesian way. Previous experience defines what is currently rational. Later experience may show the earlier actions to have been imperfect, or unwise to repeat. But to say that we are irrational because we are basing our decision on our own personal context is to deny everything that you have built up to this point.
Context is everything following the "given" in E(X given Context). Do not deny the value of it by asserting that, in one specific instance, it misled us. We may learn from additional data, but it did not mislead us.
I see the importance of context generally overlooked in this discussion, as in most discussion of rationality (encompassing discussion of rational methods). The elegance and applicability of Bayesian inference is to me undeniable, but I look forward to broader discussion of its application within systems of effective decision-making entailing prediction within a context which is not only uncertain but evolving. In other words, consideration of principles of effective agency where the game itself is inherently uncertain.
In doing so, I think we are driven to re-frame our thinking away from goals to be achieved and onto values to be promoted, away from rules to be followed and onto increasingly coherent principles to be applied, and away from maximizing expected utility and onto maximizing potential synergies within a game that is consistent but inherently incomplete. I see Bayes as necessary but not sufficient for this more encompassing view.
BusinessConsultant: "But to say that we are irrational because we are basing our decision on our own personal context is to deny everything that you have built up to this point." Really? If a decision is irrational, it's irrational. You can make allowances for circumstance and still attempt to find the most rational choice. Did you read the whole post? Eliezer is at pains to point out that even given different expected utilities for different amounts of money for different people in different circumstances, there is still a rational way to go about making a decision, and there is still a tendency for humans to make bad decisions because they are too lazy (my words, not his) to think it through, instead trusting their "intuition" because it "feels right."
The point about paying two human lives to flip the switch and then switch it back really drove home the point, Eliezer. Also, a good clarification on consistency. Reading the earlier post, I also thought of the objection that $24,000 could change a destitute person's life by orders of magnitude, whereas $3000 on top of that would not be equivalent to 1/8 more utility... the crucial difference for a starving, sick person is in, say, the first few grand.
But then, as you point out, your preference for the surer chance of less money should remain consistent however the game is stated. Thanks! Very clear...
Also, living in New York and longing for Seattle, I found myself visiting Seattle for Christmas and longing for New York... hmmm. Maybe I just need a taxi to Oakland. :P
"A number of commenters, yesterday, claimed that the preference pattern wasn't irrational because of "the utility of certainty", or something like that. One commenter even wrote U(Certainty) into an expected utility equation."
It was not my intent to claim "the preference pattern wasn't irrational," merely that your algebraic modeling failed to capture what many could initially claim was a salient detail of the original problem. I hope a reread of my original comment will find it pleading, apologetic, limited to the algebraic construction, and sincere.
I should have mentioned that I thought the algebraic modeling was a very elegant way to show that the diminishing marginal utility of money was not at play. If that was its only purpose, then the rest of this is unnecessary, but I think you can use that construction to do more, with a little work.
Here's one possible response to this apparent weakness in the algebraic modeling:
If you can simply assert that Allais's point holds experimentally for arbitrarily increasing values in place of $24k and $27k (which I'm sure you can), then we find this proposed "utility of certainty" (or whatever more appropriate formulation you prefer*) increasing with no upper bound. The notion that we value certainty seems to hold intuitive appeal, and I see nothing wrong with that on its face. But the notion that we value certainty above all else is more starkly implausible (and I would suspect demonstrably untrue: would you really give your life just to become certain of the outcome of a coinflip?).
I was trying to make the argument stronger, not weaker, but I get the impression I've somehow pissed all over it. My apologies.
*I've read your post on Terminal Values three times and haven't yet grokked why I can't feed things like knowledge or certainty into a Utility function. Certainty seems like a "fixed, particular state of the world," it seems like an "outcome," not a "action," and most definitely unlike "1." If the worry is that certainty is an instrumental value, not a terminal value, why couldn't one make the same objection of the $24,000? Money has no inherent value, it is valuable only because it can be spent on things like chocolate pizza. You've since replaced the money with lives, but was the original use of money an error? I suspect not... but then what is the precise problem with U(Certainty)?
I should clarify that, once again, I bring up these objections not to show where you've gone wrong, but to show where I'm having difficulties in understanding. I hope you'll consider these comments a useful guide as to where you might go more slowly in your arguments for the benefit of your readers (like myself) who are a bit dull, and I hope you do not read these comments as combative, or deserving of some kind of excoriating reply.
I'll keep going over the Terminal Values post to see if I can get it to click.
There is a certain U(certainty) in a game, although there might be better ways to express it mathematically. How do you know the person hosting the game isn't lying to you and really operating under the algorithm:
1A. Give him $24,000 because I have no choice.
1B. Tell him he had a chance to win but lost, and give nothing.
In the second situation (2A/2B) both options are probabilities, and so the player has no choice but to trust the game host.
Also, I am still fuzzy on the whole "money pump" concept. "The naive preference pattern on the Allais Paradox is 1A > 1B and 2B > 2A. Then you will pay me to throw a switch from A to B because you'd rather have a 33% chance of winning $27,000 than a 34% chance of winning $24,000."
Ok, I pay you one penny. You might be tricking me out of one penny (in case you already decided to give me nothing) but I'm willing to take that risk.
"Then a die roll eliminates a chunk of the probability mass. In both cases you had at least a 66% chance of winning nothing. This die roll eliminates that 66%. So now option B is a 33/34 chance of winning $27,000, but option A is a certainty of winning $24,000. Oh, glorious certainty! So you pay me to throw the switch back from B to A."
Yes yes yes, I pay you 1 penny. You now owe me $24,000. What? You want to somehow go back to a 2A 2B situation again? No thanx. I would like to get my money now. Once you promised me money with certainty you cannot inject uncertainty back into the game without breaking the rules.
I'm afraid there might still be some inferential distance to cover Eliezer.
How do the commenters who justify the usual decisions in the face of certainty and uncertainty with respect to gain and loss account for this part of the post?
"There are various other games you can also play with certainty effects. For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they'll usually take the $400. But if you ask people to imagine themselves $500 richer, and ask if they would prefer a certain loss of $100 or a 20% chance of losing $200, they'll usually take the chance of losing $200. Same probability distribution over outcomes, different descriptions, different choices."
Assuming that this experiment has actually been validated, there's hardly a clearer example of obvious bias than a person's decision on the exact same circumstance being determined by whether it's described as certain vs. uncertain gain or certain vs. uncertain loss.
And Eliezer, I have to compliment your writing skills: when faced with people positing a utility of certainty, the first thing that came to my mind was the irrational scale invariance such a concept must have if it fulfills the stated role. But if you'd just stated that, people would have argued to Judgment Day on nuances of the idea, trying to salvage it. Instead, you undercut the counterargument with a concrete reductio ad absurdum, replacing $24,000 with 24,000 lives- which you realized would make your interlocutors uncomfortable about making an incorrect decision for the sake of a state of mind. You seem to have applied a vital principle: we generally change our minds not when a good argument is presented to us, but when it makes us uncomfortable by showing how our existing intuitions conflict.
If and when you publish a book, if the writing is of this quality, I'll recommend it to the heavens.
"there's hardly a clearer example of obvious bias than a person's decision on the exact same circumstance being determined by whether it's described as certain vs. uncertain gain or certain vs. uncertain loss."
But it's not the exact same circumstance. You are ignoring the fundamental difference between the two conditions.
"But it's not the exact same circumstance. You are ignoring the fundamental difference between the two conditions."
Show us. Use maths.
Ben Jones, and Patrick (orthonormal), if you offer me $400 I'll say 'yes, thank you'. If you offer me $500 I'll say 'yes, thank you'. If, from whatever my current position is after you've been so generous, you ask me to choose between "a certain loss of $100 or a 20% chance of losing $200", I'll choose the 20% chance of losing $200. That's my math, and I accept money orders, wire transfers, or cash...
You come out with the same amount of money, but a different thing happens to get you there. This matters emotionally, even though it shouldn't (or seems like it shouldn't). A utility function can take things other than money into account, you know.
"Show us. Use maths."
The distinction already exists in the natural language used to describe the two scenarios.
In one scenario, we are told that a certain amount of money will become ours, but we do not yet possess it. In the other, we consider ourselves to already possess the money and are given opportunities to risk some of it.
Hypothetical money is not treated as equivalent to possessed money. (Well, hypothetical hypothetical vs. possessed hypothetical in the experiment discussed, but you know what I mean.)
"This matters emotionally, even though it shouldn't (or seems like it shouldn't)."
"Hypothetical money is not treated as equivalent to possessed money."
My point exactly. It's perfectly understandable that we've evolved a "bird in the hand/two in the bush" heuristic, because it makes for good decisions in many common contexts; but that doesn't prevent it from leading to bad decisions in other contexts. And we should try to overcome it in situations where the actual outcome is of great value to us.
"A utility function can take things other than money into account, you know."
As well it should. But how large should you set the utilities of psychology that make you treat two descriptions of the same set of outcomes differently? Large enough to account for a difference of $100 in expected value? $10,000? 10,000 lives?
At some point, you have to stop relying on that heuristic and do the math if you care about making the right decision.
"But how large should you set the utilities of psychology that make you treat two descriptions of the same set of outcomes differently?"
As far as I'm concerned, zero; we agree on this. My point was only that it's misleading to say "the same set of outcomes" or "the same circumstance" for the same amount of money; a different thing happens to get to the same monetary endpoint. It's not a difference that I (or my idealized self) care(s) about, though.
Similarly, I think it's misleading to say "choosing 1A and 2B is irrational" without adding the caveat "if utility is solely a function of money, not how you got that money".
"Doing the math" requires that we accept a particular model of utility and value, though. And this is why people are objecting to Eliezer's claims - he is implicitly applying one model, and then acting as though no assumption was made.
"There are various other games you can also play with certainty effects. For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they'll usually take the $400. But if you ask people to imagine themselves $500 richer, and ask if they would prefer a certain loss of $100 or a 20% chance of losing $200, they'll usually take the chance of losing $200. Same probability distribution over outcomes, different descriptions, different choices."
OK, let's represent this more clearly:

a1 - 100% chance to win $400
a2 - 80% chance to win $500 and 20% chance to win $300

b1 - 100% chance to win $500 and 100% chance to lose $100
b2 - 100% chance to win $500 and 20% chance to lose $200

Let's write it out using utility functions:

a1 - 100% × U[$400]
a2 - 80% × U[$500] + 20% × U[$300]

b1 - 100% × U[$500] + 100% × U[-$100]?
b2 - 100% × U[$500] + 20% × U[-$200]?

Wait a minute. The probabilities don't add up to one. Maybe I haven't phrased the description correctly. Let's try that again:

b1 - 100% chance to both win $500 and lose $100
b2 - 20% chance to both win $500 and lose $200, leaving an 80% chance to win $500 and lose $0

b1 - 100% × U[$500 - $100] = 100% × U[$400]
b2 - 20% × U[$500 - $200] + 80% × U[$500 - $0] = 80% × U[$500] + 20% × U[$300]

This is exactly the same thing as a1 and a2. More important, however, is that the $500 is just a value used to calculate what to plug into the utility function. The $500 by itself has no probability coefficient, and therefore its 'certainty' is irrelevant to the problem at hand. It's a trick using clever wordplay to make one believe there is a 'certainty' when none is there. It's not the same as the Allais paradox.
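A numerical check of the algebra above; U is left arbitrary precisely because the equivalence does not depend on it:

```python
# Sketch: the a-framing and the b-framing ("imagine yourself $500 richer")
# produce identical probability distributions over final outcomes, hence
# identical expected utilities for any utility function U whatsoever.

def expected_utility(lottery, U):
    return sum(p * U(x) for p, x in lottery)

a1 = [(1.00, 400)]
a2 = [(0.80, 500), (0.20, 300)]
b1 = [(1.00, 500 - 100)]                     # certain $500, then a certain $100 loss
b2 = [(0.80, 500 - 0), (0.20, 500 - 200)]    # certain $500, then a 20% chance of -$200

for U in (lambda x: x, lambda x: x ** 0.5):  # linear, then concave -- result holds
    assert expected_utility(a1, U) == expected_utility(b1, U)
    assert expected_utility(a2, U) == expected_utility(b2, U)
print("a1 == b1 and a2 == b2, under any U")
```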
As for the Allais paradox, I'll have to take another look at it later today.
Eliezer, You need to specify if it's a one-time choice or if it will be repeated. You need to specify if lives or dollars are at stake. These things matter.
I think there are some places where it is rational to take this kind of bet the less-expected-value way for a greater probability. Say you're walking along the street in tears because mobsters are going to burn down your house and kill your family if you don't pay back the $20,000 you owe them and you don't have the cash. Then some random billionaire comes along and offers you either A. $25,000 with probability 1 or B. $75,000 with probability 50%. By naive multiplication, you should take the second bet, but here there's a high additional cost of failure which you might well want to avoid with high probability. (It becomes a decision about the utilities of not paying the mob vs. having X additional money to send your kid to college afterwards. This has its own tipping point; but there's a rational case to be made for taking A over B.)
This is why you should use expected utility calculations. The utility of paying off the $20,000 also contains the utility of saving your family's lives (say $1,650,000) and retaining a house ($300,000), so you're choosing between a 100% chance of $1,975,000 and a 50% chance of $2,025,000, which is much easier.
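Written out as a calculation (a sketch using the comment's own figures and linear utility):

```python
# Sketch with the parent comment's figures: fold the family ($1,650,000) and
# the house ($300,000) into each outcome, then compare expected (linear) utilities.
family_and_house = 1_650_000 + 300_000

eu_A = 1.0 * (25_000 + family_and_house)              # sure thing: 1,975,000
eu_B = 0.5 * (75_000 + family_and_house) + 0.5 * 0    # coin flip:  1,012,500
print(eu_A, eu_B)   # 1975000.0 vs 1012500.0 -> option A wins by a wide margin
```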
Maybe I'm missing something obvious, but doesn't diminishing marginal utility play a big role here? After all, almost all of us would prefer $1,000,000 with certainty to $2,000,100 with 50% probability, and it would be perfectly rational to do so -- not because of the "utility of certainty," but because $2 million isn't quite twice as good as $1 million (for most people). But if you offered us this same choice a thousand times, we would probably then take the $2,000,100, because the many coin flips would reduce the variance enough to create a higher expected utility, even with diminishing marginal returns. (If the math doesn't quite seem to work out, you could probably work out numbers that would.)
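A sketch of the one-shot case, assuming utility logarithmic in total wealth and a hypothetical $100,000 starting wealth:

```python
# Sketch: with utility logarithmic in total wealth, the sure $1,000,000 beats
# a 50% shot at $2,000,100 in a single play. (Whether enough repetitions flip
# the preference depends on the exact numbers, as the comment itself notes.)
import math

w = 100_000                                  # hypothetical current wealth
sure   = math.log(w + 1_000_000)
gamble = 0.5 * math.log(w + 2_000_100) + 0.5 * math.log(w)
print(sure, gamble, sure > gamble)           # ~13.91 vs ~13.04 -> take the million
```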
So it seems at least plausible that you could construct versions of the money pump problem where you could rationally prefer bet A to bet B in a one-off shot, but where you would then change your preference to bet B if offered multiple times. Obviously I'm not saying that's what's really going on -- the Allais paradox surely does demonstrate a real and problematic inconsistency. But we shouldn't conclude from this that it's always rational to just "shut up and multiply," at least when we're talking about anything other than "raw" utility.
"I can cause you to invert your preferences over time and pump some money out of you."
I think the small qualifier you slipped in there, "over time", is more salient than it appears at first.
Like most casually intuitive humans, I'll prefer 1A over 1B, and (for the sake of this argument) 2B over 2A, and you can pump some money out of me for a bit.
But... as a somewhat rational thinker, you won't be able to pump an unbounded amount of money out of me. Eventually I catch on to what you're doing and your trickle of cents will disappear. I will go, "well, I don't know what's wrong with my feeble intuition, but I can tell that Eliezer is going to end up with all my money this way, so I'll stop even though it goes against my intuition." If you want to accelerate this, make the stuff worth more than a cent. Tell someone that the "mathematically wrong choice will cost you $1,000,000", and I bet they'll take some time to think and choose a set of beliefs that can't be money-pumped.
Or, change the time aspect. I suspect if I were immortal (or at least believed myself to be), I would happily choose 1B over 1A, and certainty be screwed. Maybe I don't get the money, so what, I have an infinite amount of time to earn it back. It's the fact that I don't get to play the game an unlimited amount of times that makes certainty a more valuable aspect.
This appears to be (to my limited knowledge of what science knows) a well-known bias. But like most biases, I think I can imagine occasions when it serves as a heuristic.
The thought occurred to me because I play miniature and card games--I see other commenters have also mentioned some games.
Let's say, for example, I have a pair of cards that both give me X of something--let's say it deals a certain amount of damage, for those familiar with these games. One card gives me 4 of that something. The other gives me 1-8 over a uniform random distribution--maybe a die roll.
Experienced players of these games will tell you that unless the random card gives you a higher expected value, you should play the certain card. And empirical evidence would seem to suggest that they know what they're talking about, because these are the players who win games. What do they say if you ask them why? They say you can plan around the certain gain.
I think that notion is important here. If I have a gain that is certain, at least in any of these games, I can exploit it to its fullest potential--for a high final utility. I can lure my opponent into a trap because I know I can beat them, I can make an aggressive move that only works if I deal at least four damage--heck, the mere ability to trim down my informal Minimax tree is no small gain in a situation like this.
Dealing 4 damage without exploiting it has a much smaller end payoff. And sure, I could try to exploit the random effect in just the same way--I'll get the same effect if I win my roll. But if I TRY to exploit that gain and FAIL, I'll be punished severely. If you add in these values it skews the decision matrix quite a bit.
And none of this is to say that the gambling outcomes being used as examples above aren't what they seem to be. But I'm wondering if humans are bad at these decisions partly because the ancestral environment contained many examples of situations like the one I've described. Trying to exploit a hunting technique that MIGHT work could get you eaten by a bear--a high negative utility hidden in that matrix. And this could lead, after natural selection, to humans who account for such 'hidden' downsides even when they don't exist.
I agree that in many examples, like the simple risk/reward decisions shown here, certainty does not give an option higher utility. However, there are situations in which it might be advantageous to make a decision that has a worse expected outcome, but is more certain. The example that comes to mind is complex plans that involve many decisions which affect each other. There is a computational cost associated with uncertainty, in that multiple possible outcomes must be considered in the plan; the plan "branches." Certainty simplifies things. For an agent with limited computing power in a situation where there is a cost associated with spending time on planning, this might be significant.
And the fact that situations like that occurred in humanity's evolution explains why humans have the preference for certainty that they do.
I should have read this post before replying on the last I suppose! Things are a little more clear.
Hmm... well I had more written but for brevity's sake: I suppose my preference system looks more like 1A>1B, 2A=2B. I don't really have a strong preference for an extra 1% vs an extra $3k either way.
The pump really only functions when it is repeated plays; however in that case I'd take 1B instead of 1A.
A pump which only pumps once isn't much of a pump. In order to run the Vegas money pump (three intransitive bets) you need only offer hypothetical alternatives, because the gambler's preferences are inconsistent. You can do this forever. But to run the Allais money pump, you're changing the preferred bet by making one outcome certain. So to do it again, you'd need to reset the scenario by removing the certainty somehow. The gambler would oppose this, so their preferences are consistent.

And I think it might be helpful to phrase it as avoidance of regret, rather than valuing certainty. People have a powerful aversion to the anticipated regret of coming away with nothing from a scenario where they could have gained substantially. There's also interactions here with loss aversion for the lives formulation: dollars gained vs lives lost.

Writing this comment has given me a new appreciation for the difficulties of clearly and concisely explaining things. I've rewritten it a few times and I'm still not happy, but I'm posting anyway because of thematically appropriate loss aversion.
Fun fact: "Zut Allais!" is a play on words with respect to the French zut alors, which roughly translates to "darn it".
Potentially a way in which this heuristic makes sense in the real world is if the utility of $0 were negative. If I were a banana seller, then if I sold nothing, when I got my next shipment of bananas I wouldn't have enough space in my warehouse and I'd have to throw out some bananas. In this case, I have to take out an insurance policy against $0 in all cases except certainty. This would hold even if insurance costs were proportional to the probability of $0, as long as there were fixed transaction costs to buying insurance.
While this is a mathematically correct argument, it implies a real-world relationship that does not hold in all cases. To illustrate: while a platonic agent would be correct to select 1B, let's instead imagine an impoverished Somalian living on a few dollars a day. $24,000 would be a life-changing amount of money. The increased utility of an additional $3,000 would in no way make up for a 1/34 chance of losing such an opportunity.
Conversely, imagine a billionaire in the same situation. For him the optimal choice is neither 1A nor 1B; he should turn around and leave rather than waste his time on such a small sum of money. For a middle-class American, 1B would indeed be the correct choice. So you see, naive rationality can be very irrational.
For another example, consider the lottery. Occasionally the pot increases such that your expected return is more than one dollar per dollar spent. In such a case you should still not buy lottery tickets, because your chance of winning back your money is still tiny. The increased size of the pot does not provide much more utility than its normal size for most lottery players. Of course, if you possess a lot of capital and there are no rules against it, you might make a return with a large investment, but such possibilities are normally blocked.
My examples are purposefully extreme, but they work to illustrate that you can't just translate mathematical probabilities directly into real life and call that rational. Real conditions have to be taken into account.
I would always choose the highest expected value if the bet can be repeated and I can withstand the losses. But I can rationalize the preference for certainty in a couple of ways, and these might be the same psychology. A: once you said 100%, I treat the $24,000 as mine, and loss aversion might prevent me from betting it to win $3,000 despite the favorable odds. B: I don't want to live with the regret of not winning $24,000 even if it only happens 3% of the time, whereas I can't really tell the difference 1% makes, thus I can't attribute any regret to my choice between 2A and 2B.
Huh! I was not expecting that response. Looks like I ran into an inferential distance.
It probably helps in interpreting the Allais Paradox to have absorbed more of the gestalt of the field of heuristics and biases, such as:
Let's start with the issue of incoherent preferences - preference reversals, dynamic inconsistency, money pumps, that sort of thing.
Anyone who knows a little prospect theory will have no trouble constructing cases where people say they would prefer to play gamble A rather than gamble B; but when you ask them to price the gambles they put a higher value on gamble B than gamble A. There are different perceptual features that become salient when you ask "Which do you prefer?" in a direct comparison, and "How much would you pay?" with a single item.
My books are packed up for the move, but from what I remember, this should typically generate a preference reversal:
Most people will (IIRC) rather play 2 than 1. But if you ask them to price the bets separately - ask for a price at which they would be indifferent between having that amount of money, and having a chance to play the gamble - people will (IIRC) put a higher price on 1 than on 2. If I'm wrong about this exact example, nonetheless, there are plenty of cases where such a pattern is exhibited experimentally.
So first you sell them a chance to play bet 1, at their stated price. Then you offer to trade bet 1 for bet 2. Then you buy bet 2 back from them, at their stated price. Then you do it again. Hence the phrase, "money pump".
Or to paraphrase Steve Omohundro: If you would rather be in Oakland than San Francisco, and you would rather be in San Jose than Oakland, and you would rather be in San Francisco than San Jose, you're going to spend an awful lot of money on taxi rides.
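A sketch of the pump's bookkeeping, with invented bets and stated prices (the post's example numbers are not reproduced here): the subject prices the long-shot bet higher, yet prefers to play the high-probability bet.

```python
# Sketch of the preference-reversal pump: hypothetical stated prices for a
# "$-bet" (long shot, priced high) and a "P-bet" (likely win, preferred in play).
dollar_bet_price = 4.50   # subject's stated value of the long-shot bet
p_bet_price      = 3.80   # subject's stated value of the high-probability bet

cash = 100.00
for _ in range(3):
    cash -= dollar_bet_price   # 1. sell them the $-bet at their own price
    # 2. trade it for the P-bet -- they accept, since they prefer playing that one
    cash += p_bet_price        # 3. buy the P-bet back at their own price
print(f"after 3 rounds the subject holds nothing and is down ${100 - cash:.2f}")
```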
Amazingly, people defend these preference patterns. Some subjects abandon them after the money-pump effect is pointed out - revise their price or revise their preference - but some subjects defend them.
On one occasion, gamblers in Las Vegas played these kinds of bets for real money, using a roulette wheel. And afterward, one of the researchers tried to explain the problem with the incoherence between their pricing and their choices. From the transcript:
You want to scream, "Just give up already! Intuition isn't always right!"
And then there's the business of the strange value that people attach to certainty. Again, I don't have my books, but I believe that one experiment showed that a shift from 100% probability to 99% probability weighed larger in people's minds than a shift from 80% probability to 20% probability.
The problem with attaching a huge extra value to certainty is that one time's certainty is another time's probability.
Yesterday I talked about the Allais Paradox:

1A. $24,000, with certainty.
1B. 33/34 chance of winning $27,000, and 1/34 chance of winning nothing.

2A. 34% chance of winning $24,000, and 66% chance of winning nothing.
2B. 33% chance of winning $27,000, and 67% chance of winning nothing.
The naive preference pattern on the Allais Paradox is 1A > 1B and 2B > 2A. Then you will pay me to throw a switch from A to B because you'd rather have a 33% chance of winning $27,000 than a 34% chance of winning $24,000. Then a die roll eliminates a chunk of the probability mass. In both cases you had at least a 66% chance of winning nothing. This die roll eliminates that 66%. So now option B is a 33/34 chance of winning $27,000, but option A is a certainty of winning $24,000. Oh, glorious certainty! So you pay me to throw the switch back from B to A.
Now, if I've told you in advance that I'm going to do all that, do you really want to pay me to throw the switch, and then pay me to throw it back? Or would you prefer to reconsider?
Whenever you try to price a probability shift from 24% to 23% as being less important than a shift from ~1 to 99% - every time you try to make an increment of probability have more value when it's near an end of the scale - you open yourself up to this kind of exploitation. I can always set up a chain of events that eliminates the probability mass, a bit at a time, until you're left with "certainty" that flips your preferences. One time's certainty is another time's uncertainty, and if you insist on treating the distance from ~1 to 0.99 as special, I can cause you to invert your preferences over time and pump some money out of you.
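The round trip, laid out step by step (a sketch; the one-cent switching fee is an assumed amount):

```python
# Sketch of the two-switch pump: the naive pattern 1A > 1B and 2B > 2A, plus a
# token fee per switch (one cent, an assumed amount), leaves the chooser back
# where they started, minus the fees.
fee, paid = 0.01, 0.0

holding = "A"                       # situation 2: A = 34% of $24,000, B = 33% of $27,000
holding, paid = "B", paid + fee     # 2B > 2A, so they pay to switch to B

# The die roll eliminates the common 66% chance of nothing; it is now situation 1:
# B = 33/34 chance of $27,000, A = a certainty of $24,000.
holding, paid = "A", paid + fee     # 1A > 1B, so they pay to switch back

print(f"holding {holding}, as at the start, having paid ${paid:.2f} for the round trip")
```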
Can I persuade you, perhaps, that this is an irrational pattern?
Surely, if you've been reading this blog for a while, you realize that you - the very system and process that reads these very words - are a flawed piece of machinery. Your intuitions are not giving you direct, veridical information about good choices. If you don't believe that, there are some gambling games I'd like to play with you.
There are various other games you can also play with certainty effects. For example, if you offer someone a certainty of $400, or an 80% probability of $500 and a 20% probability of $300, they'll usually take the $400. But if you ask people to imagine themselves $500 richer, and ask if they would prefer a certain loss of $100 or a 20% chance of losing $200, they'll usually take the chance of losing $200. Same probability distribution over outcomes, different descriptions, different choices.
Yes, Virginia, you really should try to multiply the utility of outcomes by their probability. You really should. Don't be embarrassed to use clean math.
In the Allais paradox, figure out whether 1 unit of the difference between getting $24,000 and getting nothing, outweighs 33 units of the difference between getting $24,000 and $27,000. If it does, prefer 1A to 1B and 2A to 2B. If the 33 units outweigh the 1 unit, prefer 1B to 1A and 2B to 2A. As for calculating the utility of money, I would suggest using an approximation that assumes money is logarithmic in utility. If you've got plenty of money already, pick B. If $24,000 would double your existing assets, pick A. Case 2 or case 1, makes no difference. Oh, and be sure to assess the utility of total asset values - the utility of final outcome states of the world - not changes in assets, or you'll end up inconsistent again.
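A sketch of that comparison under the suggested logarithmic approximation; the starting wealth levels are hypothetical, and the verdict is sensitive to the utility model:

```python
# Sketch of the rule above with U logarithmic in total assets: compare 1 unit
# of U($24,000) - U($0) against 33 units of U($27,000) - U($24,000).
# The starting wealth levels below are hypothetical.
import math

for wealth in (500, 50_000):
    def U(x, w=wealth):
        return math.log(w + x)
    sliver = 1 * (U(24_000) - U(0))            # the 1/34 chance you'd be giving up
    payout = 33 * (U(27_000) - U(24_000))      # the extra $3,000, weighted 33 times
    choice = "A" if sliver > payout else "B"
    print(f"wealth ${wealth:,}: pick {choice}, in case 1 and case 2 alike")
```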
A number of commenters, yesterday, claimed that the preference pattern wasn't irrational because of "the utility of certainty", or something like that. One commenter even wrote U(Certainty) into an expected utility equation.
Does anyone remember that whole business about expected utility and utility being of fundamentally different types? Utilities are over outcomes. They are values you attach to particular, solid states of the world. You cannot feed a probability of 1 into a utility function. It makes no sense.
And before you sniff, "Hmph... you just want the math to be neat and tidy," remember that, in this case, the price of departing the Bayesian Way was paying someone to throw a switch and then throw it back.
But what about that solid, warm feeling of reassurance? Isn't that a utility?
That's being human. Humans are not expected utility maximizers. Whether you want to relax and have fun, or pay some extra money for a feeling of certainty, depends on whether you care more about satisfying your intuitions or actually achieving the goal.
If you're gambling at Las Vegas for fun, then by all means, don't think about the expected utility - you're going to lose money anyway.
But what if it were 24,000 lives at stake, instead of $24,000? The certainty effect is even stronger over human lives. Will you pay one human life to throw the switch, and another to switch it back?
Tolerating preference reversals makes a mockery of claims to optimization. If you drive from San Jose to San Francisco to Oakland to San Jose, over and over again, then you may get a lot of warm fuzzy feelings out of it, but you can't be interpreted as having a destination - as trying to go somewhere.
When you have circular preferences, you're not steering the future - just running in circles. If you enjoy running for its own sake, then fine. But if you have a goal - something you're trying to actually accomplish - a preference reversal reveals a big problem. At least one of the choices you're making must not be working to actually optimize the future in any coherent sense.
If what you care about is the warm fuzzy feeling of certainty, then fine. If someone's life is at stake, then you had best realize that your intuitions are a greasy lens through which to see the world. Your feelings are not providing you with direct, veridical information about strategic consequences - it feels that way, but they're not. Warm fuzzies can lead you far astray.
There are mathematical laws governing efficient strategies for steering the future. When something truly important is at stake - something more important than your feelings of happiness about the decision - then you should care about the math, if you truly care at all.