Continuation of: The Allais Paradox, Zut Allais!

Judging by the comments on Zut Allais, I failed to emphasize the points that needed emphasis.

The problem with the Allais Paradox is the incoherent pattern 1A > 1B, 2B > 2A.  If you need $24,000 for a lifesaving operation and an extra $3,000 won't help that much, then you choose 1A > 1B and 2A > 2B.  If you have a million dollars in your bank account and your utility curve doesn't change much with an extra $25,000 or so, then you should choose 1B > 1A and 2B > 2A.  Neither the individual choice 1A > 1B nor the individual choice 2B > 2A is, of itself, irrational.  It's the combination that's the problem.

Expected utility is not expected dollars.  In the case above, the utility-distance from $24,000 to $27,000 is a tiny fraction of the distance from $21,000 to $24,000.  So, as stated, you should choose 1A > 1B and 2A > 2B, a quite coherent combination.  The Allais Paradox has nothing to do with believing that every added dollar is equally useful.  That idea has been rejected since the dawn of decision theory.

If satisfying your intuitions is more important to you than money, do whatever the heck you want.  Drop the money over Niagara Falls.  Blow it all on expensive champagne.  Set fire to your hair.  Whatever.  If the largest utility you care about is the utility of feeling good about your decision, then any decision that feels good is the right one.  If you say that different trajectories to the same outcome "matter emotionally", then you're attaching an inherent utility to conforming to the brain's native method of optimization, whether or not it actually optimizes.  Heck, running around in circles from preference reversals could feel really good too.  But if you care enough about the stakes that winning is more important than your brain's good feelings about an intuition-conforming strategy, then use decision theory.

If you suppose the problem is different from the one presented - that the gambles are untrustworthy and that, after this mistrust is taken into account, the payoff probabilities are not as described - then, obviously, you can make the answer anything you want.

Let's say you're dying of thirst, you only have $1.00, and you have to choose between a vending machine that dispenses a drink with certainty for $0.90, versus spending $0.75 on a vending machine that dispenses a drink with 99% probability.  Here, the 1% chance of dying is worth more to you than $0.15, so you would pay the extra fifteen cents.  You would also pay the extra fifteen cents if the two vending machines dispensed drinks with 75% probability and 74% probability respectively.  The 1% probability is worth the same amount whether or not it's the last increment towards certainty.  This pattern of decisions is perfectly coherent.  Don't confuse being rational with being shortsighted or greedy.

Added:  A 50% probability of $30K and a 50% probability of $20K is not the same as a 50% probability of $26K and a 50% probability of $24K.  If your utility is logarithmic in money (the standard assumption) then you will definitely prefer the latter to the former:  0.5 log(30) + 0.5 log(20)  <  0.5 log(26) + 0.5 log(24).  You take the expectation of the utility of the money, not the utility of the expectation of the money.
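A quick numeric check of that inequality, as a minimal sketch (natural log; the base doesn't matter, since log is monotone):

```python
import math

# The two 50/50 gambles from the addendum, in thousands of dollars.
wide   = [(0.5, 30), (0.5, 20)]   # 50% of $30K, 50% of $20K
narrow = [(0.5, 26), (0.5, 24)]   # 50% of $26K, 50% of $24K

def expected_log_utility(gamble):
    """Expectation of the utility of the money, with log utility."""
    return sum(p * math.log(x) for p, x in gamble)

print(expected_log_utility(wide))    # ~3.199
print(expected_log_utility(narrow))  # ~3.218, so the narrower gamble wins
```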

Comments (38)

Seemed reasonable to me.

Of course, maybe you oughtn't try to convince people too hard of this... instead take their pennies. ;)

Oh, a bit off topic, but on the subject of coherence/dutch book/vulnerability arguments, I like them because:

  1. Depending on formulation, they'll give you epistemic probability and decision theory all at once.

  2. Has a "mathematical karma" flavor, i.e. "no, you're not good or evil or anything for listening or ignoring this. Simply that there are natural mathematical consequences if you don't organize your decisions and beliefs in terms of these principles." Just a bit of a different flavor than other types of math I've seen. And I like saying "mathematical karma." :)

  3. The arguments of these sorts that I've seen don't seem to ever demand much more than linear algebra. Cox's theorem involves somewhat tougher math and the derivations are a bit longer. It's useful to know that it's there, but coherence arguments seem to be mathematically, well, "cleaner" and also more intuitive, at least to me.

(sighs)

If you actually had to explain all of this to Overcoming Bias readers, I shudder to think of how some publishing bureaucrat would react to a book on rationality. "What do you mean, humans aren't rational? Haven't you ever heard of Adam Smith?"

Do I detect a hint of irritation? ;-)

I have a question though. Are you able to use probability math in all your own decisions - even quick, casual ones? Are you able to "feel" the Bayesian answer?

I suppose what I'm groping towards here is: can native intuitions be replaced with equally fast, but accurate ones? It would seem a waste to have to run calculations in the slowest part of our brains.

Julian,

When hundreds or thousands of dollars are at stake, e.g. in Eliezer's example, or when setting a long-term policy (a key point) for yourself about whether to buy expensive store warranties for personal electronics, taking a couple of minutes to work out the math will have a fantastic cost:benefit ratio. If you're making decisions about investments or medical care the stakes will be much higher. People do in fact go to absurd lengths to avoid even simple mental arithmetic, but you can't justify the behavior based on the time costs of calculation.
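For instance, here's the couple-of-minutes warranty calculation in miniature (a sketch; the prices and failure probability are invented for illustration):

```python
# Invented numbers: a $40 two-year warranty on a $300 gadget,
# with a guessed 5% chance of a covered failure in that window.
warranty_price = 40.0
replacement_cost = 300.0
p_failure = 0.05

expected_loss_uninsured = p_failure * replacement_cost
print(expected_loss_uninsured)  # 15.0: expected loss is well under $40

# As a long-term policy, declining warranties like this one saves the
# difference on average; sums this small shouldn't bend your utility curve.
```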

I think psy-kosh's "karma" idea is worth considering, but your rhetoric is much better here than the previous two attempts, as far as I'm concerned. It's important -- especially for a lay audience like me that doesn't already know what kind of argument you're trying to make -- to distinguish between contingent advice and absolute imperatives. (It may be that the second category can properly never be demonstrated, but a lot of people make those kinds of claims anyway, so it's a poor interpretive strategy to assume that that's not what people are saying.)

"Let's say you're dying of thirst, you only have $1.00, and you have to choose between a vending machine that dispenses a drink with certainty for $0.90, versus spending $0.75 on a vending machine that dispenses a drink with 99% probability. Here, the 1% chance of dying is worth more to you than $0.15, so you would pay the extra fifteen cents. You would also pay the extra fifteen cents if the two vending machines dispensed drinks with 75% probability and 74% probability respectively. The 1% probability is worth the same amount whether or not it's the last increment towards certainty."

OK, the benefit of a 1% chance of surviving with $0.10 in my pocket is the same regardless of whether I move from 99% to 100% or from 74% to 75%. However, the costs differ: in the first case I lose (U($0.25) - U($0.10)) × 0.99, while for the second I lose (U($0.25) - U($0.10)) × 0.74.

I also noticed that and was wondering how many comments it would take before somebody nitpicked this fairly trivial point.

It was 6 ;)

@ Carl Shulman

I avoid mental arithmetic because I tend to drop decimals, misremember the rules of algebra, and other heinous sins. It's on my list to get better at, but right now I can't trust even simple sums I do in my head.

Julian,

What about cell phone and pocket calculators? Microsoft Excel can let you organize your data nicely for expected value and net present value calculations for internet purchases, decisions on insurance, etc. There's no shame in using arithmetic aids, just as we use supplementary memory to remember telephone numbers and the like.

"I avoid mental arithmetic because I tend to drop decimals, misremember the rules of algebra, and other heinous sins"

Fast and accurate arithmetic can be trained with any number of software packages. Why not give it a try? I did and it worked for me.

Eliezer_Yudkowsky: AFAICT, and correct me if I'm wrong, you didn't address Gray_Area's objection on the first post on this topic, which resolved the inconsistency to my satisfaction.

Specifically, the subjects are presented with a one shot choice between the bets. You are reading "I prefer A to B" to mean "I will write an infinite number of American options to convert B's into A's" even when that doesn't obviously follow from the choices presented. Once it becomes an arbitrarily repeatable or repeated game, Gray_Area's argument goes, they will revert to expected monetary value maximization.

And in fact, that's exactly what they did.

If someone has seen that point addressed, please link the comment or post and I will apologize. Here is the relevant part of Gray_Area's post, just to save you time:

"...the 'money pump' argument fails because you are changing the rules of the game. The original question was, I assume, asking whether you would play the game once, whereas you would presumably iterate the money pump until the pennies turn into millions. The problem, though, is if you asked people to make the original choices a million times, they would, correctly, maximize expectations. Because when you are talking about a million tries, expectations are the appropriate framework. When you are talking about 1 try, they are not."

I too initially shared Gray_Area's objection, but Eliezer did in fact address it:

If you need $24,000 for a lifesaving operation and an extra $3,000 won't help that much, then you choose 1A > 1B and 2A > 2B. If you have a million dollars in your bank account and your utility curve doesn't change much with an extra $25,000 or so, then you should choose 1B > 1A and 2B > 2A.

The comments introducing the idea of a lifesaving operation actually clarified why that objection isn't reasonable. If I need some money more than I need more money, then I should choose 1A > 1B and 2A > 2B.

brent:

Well, that sure was a lot of bold text.

Silas:

I actually don't mind the bold text, so much as the French wordplay :-/

(Yes, I know malaise is used in English as well. I'm making a general point.)

OK, Eliezer, let me try to turn your example around. (I think) I understand -- and agree with -- everything in this post (esp. the boldface). Nonetheless:

Assume your utility of money is linear. Imagine two financial investments (bets), 3A and 3B:

3A: 100% chance of $24,000
3B: 50% chance of $26,000, 50% chance of $24,000

Presumably (given a linear utility of money), you would say that you were indifferent between the two bets. Yet in actual financial investing, in the real world, you receive an (expected) return premium for accepting additional risk. Roughly speaking, the expected return of an investment goes up as the volatility of that return increases.

It seems that I could construct a money pump for YOU, out of real-world investments! You appear to be making the claim that all that matters for rational decision-making is expected value, and that volatility is not a factor at all. I think you're incorrect about that, and the actual behavior of real-world investments appears to support my position.

I'm not sure what the correct accounting for uncertainty should be in your original 1A/1B/2A/2B example. But it sure seems like you're suggesting that the ONLY thing that matters is expected value (and then some utility on the money outcomes) -- but nowhere in your calculations do I see a risk premium, some kind of straightforward penalty for volatility of outcome.

Again, if you think that rational decision-making shouldn't use such information, then I'm certain that I can find real-world investments, where you ought to accept lower returns for a volatile investment than is offered by the actual investment, and I can pocket the difference. A real-world money pump -- on you.

Or have I completely missed the point somehow?

Geddis,

I think you meant for 3B to offer a 50% chance of being lower than 3A and a 50% chance of being higher, rather than being either equal or higher?

Don Geddis, see addendum above. When you start out by saying, "Assume utility is linear in money," you beg the question.

There are three major reasons not to like volatility:

1) Not every added dollar is as useful as the last one. (When this rule is violated, you like volatility: If you need $15,000 for a lifesaving operation, you would want to double-or-nothing your $10,000 at 50/50 odds.)

2) Your investment activity has a boundary at zero, or at minus $10,000, or wherever - once you lose enough money you can no longer invest. If you random-walk a linear graph, you will eventually hit zero. Random-walking a logarithmic graph never hits zero. This means that the hit from $100 to $0 is much larger than the hit from $200 to $100 because you have nothing left to invest.

Both of these points imply that utility is not linear in money.

3) You can have opportunities to take advance preparations for known events, which changes the expected utility of those events. For example, if you know for certain that you'll get $24,000 five years later, then you can borrow $18,000 today at 6% interest and be confident of paying back the loan. Note that this action induces a sharp utility gradient in the vicinity of $24,000. It doesn't generate an Allais Paradox, unless the Allais payoff is far enough in the future that you have an opportunity to take an additional advance action in scenario 1 that is absent in scenario 2.
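A quick check of the loan arithmetic in point 3, as a sketch (assuming annual compounding at the stated 6%):

```python
# Borrow $18,000 today at 6%/year against a certain $24,000 in 5 years.
owed_at_maturity = 18_000 * 1.06 ** 5
print(round(owed_at_maturity, 2))  # 24088.06, slightly over $24,000

# The exact amount you could borrow and still repay for certain:
print(round(24_000 / 1.06 ** 5))   # 17934, i.e. just under $18,000
```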

(Incidentally, the opportunity to take additional advance actions if the Allais payoff is far enough in the future is by far the strongest argument in favor of trying to attach a normative interpretation to the Allais Paradox that I can think of. And come to think of it, I don't remember ever hearing it pointed out before.)

The utility here is not just the value of the money received. It's also the peace of mind knowing that money was not lost.

As other comments have pointed out, it's very important that the game is played in a once-off way, rather than repeatedly. If it's played repeatedly, then it does become a "money pump", but the game's dynamics are different for once-off, and in once-off games the "money pump" does not apply.

If someone needs to choose once-off between 1A and 1B, they'll usually choose the 100% certain option, not because they're being irrational or inconsistent with their choice between 2A and 2B, but because the emotional feeling of loss from having missed out on a sure-thing gain is very unpleasant. So people will rationally pay to avoid that emotional response.

This has to do with the makeup of humans. Humans aren't always rational - what's more, it's not rational for them not to be always rational. You should be well aware of this from evolutionary studies.

I said "it's not rational for them not to be always rational": I meant to say: "it's not rational for them to be always rational".

LG:

This is interesting. When I read the first post in this series about Allais, I thought it was a bit dense compared to other writing on OB. It occurred to me that you had violated your own rule of aiming very, very low in explaining things.

As it turns out, that post has generated two more posts of re-explanation, and a fair bit of controversy.

When you write that book of yours, you might want to treat these posts as a first draft, and go back to your normal policy of simple explanations 8)

"it's not rational for them to be always rational"

Everything in moderation, especially moderation.

Sorry for the typo in my example. Of course I meant to say that 3A was 100% at $24K, and 3B was 50%@$26K and 50%@$22K. The whole point was for the math to come out with the same expected value of $24K, just with 3B having more volatility. But I think everyone got my intent despite my typo.

Eliezer of course jumped right to the key, which is the (unrealistic) assumption of linear utility. I was going to log in this morning and suggest that the financial advice of "always get paid for accepting volatility" and/or "whenever you can reduce volatility while maintaining expected value, do so" was really a rule-of-thumb summary for common human utility functions. Which is basically what Eliezer suggested in the addendum, that log utility + Bayes results in the same financial advice.

The example I was going to try to suggest this morning, in investment theory, is diversification. If you invest in a single stock that historically returns 10% annually, but sometimes -20% and sometimes +40%, it is "better" to instead invest 1/10 of your assets in 10 such (uncorrelated) stocks. The expected return doesn't change: it's still 10% annually. But the volatility drops way down. You bunch up all the probability around the expected return (using a basket of stocks), whereas with a single stock the probabilities are far more spread out.
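Here's a small simulation of that diversification claim (a sketch; the 10%-mean, 20%-sd return distribution is made up, and the stocks are modeled as uncorrelated):

```python
import random

random.seed(0)
TRIALS = 100_000
MEAN, SD = 0.10, 0.20  # hypothetical stock: 10% mean annual return, 20% sd

def year_return(n_stocks):
    """Return of an equal-weight portfolio of n uncorrelated such stocks."""
    return sum(random.gauss(MEAN, SD) for _ in range(n_stocks)) / n_stocks

for n in (1, 10):
    returns = [year_return(n) for _ in range(TRIALS)]
    mean = sum(returns) / TRIALS
    sd = (sum((r - mean) ** 2 for r in returns) / TRIALS) ** 0.5
    print(f"{n:>2} stocks: mean {mean:.3f}, sd {sd:.3f}")

# Expected return stays near 0.10 either way; the sd falls by ~sqrt(10),
# from 0.20 to about 0.063: same expectation, much less volatility.
```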

But probably you can get to this same conclusion with log utilities and Bayes.

My final example this morning was going to be about how you can use confidence to make further decisions, in between the time you accept the bet and the time you get the payoff. This is true, for example, for tech managers trying to get a software project out. It's far more important how reliable the programmer's estimates are than what their average productivity is. The overall business can only plan (marketing, sales, retail, etc.) around the reliable parts, so the utility that the business sees from the volatile productivity is vastly lower.

But again, Eliezer anticipates my objection with his point #3 in the comments, about taking out a loan today and being confident that you can pay it back in five years.

My only final question, then, is: isn't "the opportunities to take advance preparations" sufficient to resolve the original Allais Paradox, even for the naive bettors who choose the "irrational" 1A/2B combination?

Well, clearly, money should go alongside Quantum Theory in the 'bad example' dustbin. Money has been the root of most of these confusions.

Try replacing dollars with chits. The only rules with chits are that the more you have, the better it is, and you can never have too many. Their marginal utility is constant, i.e. you 'get as much utility' from your millionth chit as you did from your first. These posts aren't a discussion on the value of money.

'People sometimes make the irrational decision for rational reasons' also misses the point. As this post says, if you want to use your own heuristic for deciding how to bet, go for it. If you want to maximise your expected monetary gain using decision theory, well, here's how you do it.

Steve:

The last line really helped me see where you are coming from: You take the expectation of the utility of the money, not the utility of the expectation of the money.

However, at http://lesswrong.com/lw/hl/lotteries_a_waste_of_hope/, you argue against playing the lotto because the utility of the expectation itself is very bad. Now granted, the expectation of the utility is also not great, but let's say the lotto offered enough of a jackpot (with the same long odds) to provide an appropriate expectation of utility. Wouldn't you still be arguing that it is a "hope sink", thus focusing on the utility of the expectation?

Most mathematically-competent commenters agreed that the expected utility of lotteries was bad. Some people disagreed that the utility of expectation was bad, though. Yudkowsky was arguing against these commenters, saying that both expected utility and utility of expectation are bad. The arguments in the post you linked are not the main reasons Yudkowsky does not play the lottery, but rather the arguments that convey the most new information about the lottery (and whatever the lottery is being used to illustrate).

Steve,

The lottery could be a good deal and still be bad. Successful thieves are also socially harmful.

Yes, but I think the point of his lottery article was that it was a bad deal for the individual player, and not just because it had a negative expected value; he was making the point that the actual existence of the (slim) possibility of riches was itself harmful. And he was not focusing on whether one actually won the lotto, he was focusing on the utility of actually having the chance of winning (as opposed to the utility of actually winning).

Lee:

Eliezer, I think your argument is flat-out invalid.

Here is the form of your argument: "You prefer X. This does not strike people as foolish. But if you always prefer X, it would be foolish. Therefore your preference really is foolish."

That conclusion does not follow without the premise "You always prefer X if you ever prefer X."

More plainly, you are supposing that there is some long run over which you could "pump money" from someone who expressed such-and-such a preference. BUT my preference over infinitely many repeated trials is not the same as my preference over one trial. AND you cannot demonstrate that that is absurd.

Lee:

To say it another way, Eliezer, I share your intuition that preferences that look silly over repeated trials are sometimes to be avoided. But I think they are not always to be avoided.

This sort of intuition disagreement exists in other areas. Consider the intuition that an act, X, is immoral if it cannot be universalized. This intuition is often articulated as the objection "But what if everyone did X?"

Some people think this objection has real punch. Other people do not feel the punch at all, and simply reply, "But not everyone does X."

Similarly, you think there is real punch in saying, "But what if you had that preference over repeated dealings?"

I do not feel the punch, and I can only reply, "But I do not have that preference over repeated dealings."

PK:

I have a few questions about utility (hopefully this will clear up my confusion). Someone please answer. Also, the following post contains math; viewer discretion is advised (the math is very simple, however).

Suppose you have a choice between two games...

A: 1 game of 100% chance to win $1'000'000
B: 2 games of 50% chance to win $1'000'000 and 50% chance to win nothing

Which is better A, B or are they equivalent? Which game would you pick? Please answer before reading the rest of my rambling.

Let's try to calculate utility.

For A:
A: Utotal = 100%·U[$1'000'000] + 0%·U[$0]

For B, I see two possible ways to calculate it.

1) Calculate the utility for one game and multiply it by two:
B-1: U1game = 50%·U[$1'000'000] + 50%·U[$0]
B-1: Utotal = U2games = 2·U1game = 2·{50%·U[$1'000'000] + 50%·U[$0]}

2) Calculate all possible outcomes of money possession after 2 games. The possibilities are:
($0, $0)
($0, $1'000'000)
($1'000'000, $0)
($1'000'000, $1'000'000)

B-2: Utotal = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000]

If we assume utility is linear:
U[$0] = 0, U[$1'000'000] = 1, U[$2'000'000] = 2

A: Utotal = 100%·U[$1'000'000] + 0%·U[$0] = 100%·1 + 0%·0 = 1
B-1: Utotal = 2·{50%·U[$1'000'000] + 50%·U[$0]} = 2·{50%·1 + 50%·0} = 1
B-2: Utotal = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000] = 25%·0 + 25%·1 + 25%·1 + 25%·2 = 1

The math is so neat!

The weirdness begins when the utility of money is nonlinear. $2'000'000 isn't twice as useful as $1'000'000 (unless we split that $2'000'000 between 2 people, but let's deal with one weirdness at a time). With the first million one can buy a house, a car, quit their crappy job and pursue their own interests. The second million won't change the person's life as much, and the 3rd even less.

Let's invent more realistic utilities (it has also been suggested that the utility of money is logarithmic, but I'm having some trouble taking the log of 0):
U[$0] = 0, U[$1'000'000] = 1, U[$2'000'000] = 1.1 (reduced from 2 to 1.1)

A: Utotal = 100%·U[$1'000'000] + 0%·U[$0] = 100%·1 + 0%·0 = 1
B-1: Utotal = 2·{50%·U[$1'000'000] + 50%·U[$0]} = 2·{50%·1 + 50%·0} = 1
B-2: Utotal = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000] = 25%·0 + 25%·1 + 25%·1 + 25%·1.1 = 0.775

Hmmmm... B-1 is not equal to B-2. Either I have to change around the utility function values, or discard one of them as the wrong calculation, or there's some other mistake I didn't think of. Maybe U[$0] != 0.

Starting with the assumption that B-1 = B-2 (with U[$1'000'000] = 1, U[$2'000'000] = 1.1):
2·{50%·U[$1'000'000] + 50%·U[$0]} = 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000]

Solving for U[$0]:
2·{50%·1 + 50%·U[$0]} = 25%·U[$0] + 25%·1 + 25%·1 + 25%·1.1
1 + U[$0] = 0.25·U[$0] + 0.775
0.75·U[$0] = -0.225
U[$0] = -0.3

B-1 = B-2 = 0.7. Intuitively this kind of makes sense. Comparing:
A: 100%·U[$1'000'000] = 50%·U[$1'000'000] + 50%·U[$1'000'000]
to
B: 25%·U[$0] + 25%·U[$1'000'000] + 25%·U[$1'000'000] + 25%·U[$2'000'000] = 50%·U[$1'000'000] + 25%·U[$0] + 25%·U[$2'000'000]

A (=/>/<)? B
50%·U[$1'000'000] + 50%·U[$1'000'000] (=/>/<)? 50%·U[$1'000'000] + 25%·U[$0] + 25%·U[$2'000'000]

The first 50%·U[$1'000'000] is the same on both sides, so it cancels out:
50%·U[$1'000'000] (=/>/<)? 25%·U[$0] + 25%·U[$2'000'000]
0.5 > 0.2

The chance to win 2 million doesn't outweigh how much it would suck to win nothing, so the certainty of 1 million is preferable. The negative utility of U[$0] is absorbed by its 0% probability coefficient in A.

Or maybe calculation B-1 is just plain wrong, but that would mean we cannot calculate the utility of discrete events and add the utilities up.

Is any of this correct? What kind of calculations would you do?

A bird in the hand is indeed worth 2 in the bush.

B-1 is wrong because you're not using marginal utility. On the second repetition, U(marginal)[$1,000,000] is either 1 or 0.1 depending on whether you lost or won on the first play. You can still add the utilities of events up, but the first and second plays are different events, utility-wise, so you can't multiply by 2. The correct expression is:

(50%·U($0) + 50%·U(the first million)) + (50%·U($0) + 50%·(50%·U(the first million) + 50%·U(the second million)))

which comes out to .775.
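That .775 can also be reached by enumerating the four equally likely two-game outcomes directly, a minimal sketch using PK's utility numbers:

```python
# PK's assumed utilities of total winnings after both games:
U = {0: 0.0, 1_000_000: 1.0, 2_000_000: 1.1}

# The four equally likely outcome pairs of two independent 50/50 games:
totals = [w1 + w2 for w1 in (0, 1_000_000) for w2 in (0, 1_000_000)]
print(sum(U[t] for t in totals) / len(totals))  # 0.775
```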

PK:

I see. Thank you, Nick. I was confused by the idea that utility could be proportional, i.e. 100%·U[$1'000'000] = {50%·U[$1'000'000]}·2, because when you put two 50%·U[$1'000'000] together, the utility was less than 100%·U[$1'000'000]. But that was because U[$1'000'000] = U[$1'000'000] is not always true; it depends on whether it's the 1st or 2nd million. U[$1'000'000] going from $0 to $1'000'000 is not the same as U[$1'000'000] going from $1'000'000 to $2'000'000.

Back to Allais:

  • 1A. 100%·U[$24,000] + 0%·U[$0]
  • 1B. 33/34·U[$27,000] + 1/34·U[$0]
  • 2A. 34%·U[$24,000] + 66%·U[$0]
  • 2B. 33%·U[$27,000] + 67%·U[$0]

In 1A, U[$24,000] is going from $0 to $24,000, and U[$0] is going from $0 to $0.
In 1B, U[$27,000] is going from $0 to $27,000, and U[$0] is going from $0 to $0.
In 2A, U[$24,000] is going from $0 to $24,000, and U[$0] is going from $0 to $0.
In 2B, U[$27,000] is going from $0 to $27,000, and U[$0] is going from $0 to $0.
Looks like the equality U[money] = U[money] holds for all the variables.

So if 1A > 1B, then 100%·U[$24,000] + 0%·U[$0] > 33/34·U[$27,000] + 1/34·U[$0]

  • multiply by 34%: 34%·(100%·U[$24,000] + 0%·U[$0]) > 34%·(33/34·U[$27,000] + 1/34·U[$0])
  • add 66%·U[$0] to both sides (which makes the total percentages add up to 100%): 34%·(100%·U[$24,000] + 0%·U[$0]) + 66%·U[$0] > 34%·(33/34·U[$27,000] + 1/34·U[$0]) + 66%·U[$0]
  • algebra: 34%·U[$24,000] + 0%·U[$0] + 66%·U[$0] > 33%·U[$27,000] + 1%·U[$0] + 66%·U[$0]
  • more algebra: 34%·U[$24,000] + 66%·U[$0] > 33%·U[$27,000] + 67%·U[$0]
  • meaning 2A > 2B if 1A > 1B (see the numeric check below)
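A numeric version of that derivation, as a sketch (the utility values are arbitrary placeholders; only the 34% mixture structure matters):

```python
def eu(bet, U):
    """Expected utility of a bet: a list of (probability, dollars) pairs."""
    return sum(p * U[x] for p, x in bet)

def dilute(bet, p):
    """Play `bet` with probability p; otherwise get $0."""
    return [(p * q, x) for q, x in bet] + [(1 - p, 0)]

bet_1a = [(1.0, 24_000)]
bet_1b = [(33 / 34, 27_000), (1 / 34, 0)]
bet_2a = dilute(bet_1a, 0.34)  # 34% chance of $24,000, 66% of $0
bet_2b = dilute(bet_1b, 0.34)  # 33% chance of $27,000, 67% of $0

# Arbitrary placeholder utilities under which 1A beats 1B:
U = {0: 0.0, 24_000: 1.0, 27_000: 1.01}
assert eu(bet_1a, U) > eu(bet_1b, U)
assert eu(bet_2a, U) > eu(bet_2b, U)  # follows by linearity of expectation
```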

/sigh, all this time to rediscover the obvious.

Perhaps a lot of confusion could have been avoided if the point had been stated thus:

One's decision should be the same regardless of the odds that the situation requiring the decision arises in the first place.

Footnote against nitpicking: this ignores the cost of making the decision itself. We may choose to gather less information and not think as hard for decisions about situations that are unlikely to arise. That factor isn't relevant in the example at hand.

Actually... I don't agree that this is a good example of intuition failing. The problem is that people think about this scenario as if it were real life. In real life there would be a delayed payout, and in the case of a delayed payout on their "ticket", the ticket with 100% certainty is more LIQUID than the ticket with the better expectation. Liquidity itself has utility. Maybe the liquidity of the certain payoff is only due to the rest of society being dumb; however, even if that is the case, if you know the rest of society is dumb you must take that into account when making your decision. In this case the brain does not seem to be wrong and seems to actually be choosing correctly. The brain is just taking your example and adding lots of extra details to make it feel more realistic (this is certainly an undesired effect for researchers trying to learn about people's thoughts or interests, but who cares about them). The brain often adds a bunch of assumed details to a confusing situation; this is basically how intuition works. Now, you have to consider the odds of this exact example coming up versus the odds of the imagined example coming up, and how well the brain will likely handle each situation, then use that information to determine whether the brain is actually mistaken or not.

In the case of electronics store warranties, they usually aren't worthwhile because they are designed not to be worthwhile, just like mail-in rebates are designed to often go unredeemed... However, in the case where your personal time is far more valuable than any of the costs, it starts to make sense.

On another note, how rich did Feynman or Kac get? (Either a ton, or not that much, depending on whether they wanted to help people or take their pennies!)