orthonormal comments on Winning the Unwinnable - Less Wrong
If the expected value for buying all of the tickets is positive, wouldn't the expected value of any particular ticket be positive? Does the math require you to buy all of the tickets?
A small example:
5 numbers that each cost $1 with payouts of $4 for 1st pick and $2 for 2nd pick. Any ticket has a 1/5 chance of paying $4, a 1/5 chance of paying $2, and a 3/5 chance of paying $0.
.2 * $4 + .2 * $2 + .6 * $0 = $1.20
Buying all of the tickets will give you $6 for spending $5, which is a return of $1.20 per dollar invested (a $0.20 profit per dollar). So... what am I missing? It seems like if it was good for you to spend $41 million it was good for you to spend $1. Is it a matter of risk management or something like that? This isn't really my area of expertise.
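The arithmetic above can be checked with a quick sketch (same $1 tickets and $4/$2 payouts as in the example):

```python
# One ticket from the five-ticket example: 1/5 chance of $4,
# 1/5 chance of $2, 3/5 chance of $0.
payouts = [4, 2, 0, 0, 0]
ev_per_ticket = sum(payouts) / len(payouts)
print(ev_per_ticket)  # 1.2 -- expected value of a single $1 ticket

# Buying all five tickets costs $5 and returns $6 with certainty.
total_return = sum(payouts)
profit = total_return - len(payouts)
print(total_return, profit)  # 6 1
```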
Since expected value ≠ expected utility, it's not the case that you should always buy a ticket when the expected value is positive. It's a standard result that people actually treat the utility of wealth roughly logarithmically: i.e., a net worth of $1,000,000,000 is better than $100,000,000, but that gap matters much less than the gap between $100,000,000 and $1,000.
To simplify the lottery situation in the case of extreme probabilities and payouts, say that Omega offers a lottery only to you (no worries about split jackpots), in which there are exactly 1,000,000 tickets, each costing $1, and among them there is one winning ticket that pays out $2,000,000.
Now if you can scrounge up a million dollars to buy every ticket, you make a tidy $1 million profit (less interest from your backers) with zero risk, so the expected utility is very positive for this strategy.
If, however, you can only get $100,000 together, you shouldn't buy any tickets (unless you're a millionaire to start), since the utility to you of a 90% chance of losing $100,000 (and having a pretty crappy life being so far in debt) outweighs the utility of a 10% chance of winning $2 million (and a nice standard of living).
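This trade-off can be made concrete with a quick log-utility sketch. The specific numbers here are my own assumptions, not from the comment: I give the gambler $110,000 of total wealth, so that the 90%-loser keeps $10,000 rather than hitting the undefined log of zero:

```python
import math

# Assumed numbers: $110,000 total wealth, of which $100,000 buys
# 100,000 of the 1,000,000 tickets (a 10% chance at the $2,000,000 jackpot).
wealth = 110_000.0
u_keep = math.log(wealth)  # log-utility of not playing

# Playing: 10% chance of keeping $10,000 plus the $2,000,000 jackpot,
# 90% chance of being left with only the $10,000 not spent on tickets.
eu_play = 0.1 * math.log(10_000 + 2_000_000) + 0.9 * math.log(10_000)
ev_play = 0.1 * (10_000 + 2_000_000) + 0.9 * 10_000  # = $210,000

print(ev_play > wealth)   # True: positive expected dollar value
print(eu_play < u_keep)   # True: negative expected log-utility
```

So a log-utility agent declines the bet even though it nearly doubles their wealth in expectation.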
Got it. This totally answered my question.
Is that an actual result, or is it just a standard assumption? I've never heard anything more precise than declining marginal utility.
Hmm, good question. Quick Google search doesn't turn up anything...
Logarithmic utility functions have an uncomfortable requirement: you must be indifferent between keeping your current wealth and taking a 50-50 shot at doubling or halving it (e.g. doubling or halving every paycheck/payment you get for the rest of your life). Most people I know don't like that deal.
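The indifference claim follows from log(2w) + log(w/2) = 2·log(w); a quick numerical check (the $50,000 wealth figure is arbitrary):

```python
import math

w = 50_000.0  # arbitrary current wealth
u_keep = math.log(w)
u_gamble = 0.5 * math.log(2 * w) + 0.5 * math.log(w / 2)

# log(2w) + log(w/2) = log(w * w) = 2 * log(w), so expected
# log-utility of the gamble equals the utility of standing pat.
print(abs(u_keep - u_gamble) < 1e-12)  # True
```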
I'm confused about what is uncomfortable about this, or what function of wealth you would measure utility by.
Naively, it seems that logarithmic functions would be more risk-averse than the nth-root functions I've seen Robin Hanson use. How would a utility function be more sensitive to current wealth?
I think the uncomfortable part is that bill's (and my) experience suggests that people are even more risk-averse than logarithmic functions would indicate.
I'd suggest that any consistent human utility function (prospect theory notwithstanding) lies somewhere between log(x) and log(log(x))... If I were given a 50-50 chance of squaring my wealth or taking its square root, I would opt for the gamble.
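Both halves of this claim check out numerically: plain log utility strictly prefers the square/square-root gamble, while log(log(x)) utility is exactly indifferent to it. The $1,000,000 wealth figure below is my own choice (it must be above $1 for the logs to behave):

```python
import math

w = 1_000_000.0  # assumed current wealth, must exceed $1

# Plain log utility: E[u] = 0.5*log(w^2) + 0.5*log(sqrt(w)) = 1.25*log(w),
# strictly better than log(w), so a log-utility agent takes the gamble.
log_keep = math.log(w)
log_gamble = 0.5 * math.log(w ** 2) + 0.5 * math.log(math.sqrt(w))
print(log_gamble > log_keep)  # True

# log(log) utility: log(2*log w) and log(0.5*log w) average out to
# log(log w) exactly, so a log(log)-utility agent is indifferent.
llog_keep = math.log(math.log(w))
llog_gamble = 0.5 * math.log(math.log(w ** 2)) + 0.5 * math.log(math.log(math.sqrt(w)))
print(abs(llog_gamble - llog_keep) < 1e-9)  # True
```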
That's only a requirement for risk-neutral people. Most people you know are not risk-neutral.
Logarithmic utility functions are already risk-averse by virtue of their concavity. The expected value of a 50% chance of doubling or halving is a 25% gain.
I would say that such a person doesn't have preferences representable by a utility function.
That's just plain false. Risk-aversion is a valid preference, and can be included as a term in a utility function (at slight risk of circularity, but that's not really a problem).
ETA: well, the stated values were in utils, which should already include risk-aversion, so I think you're correct.
I don't think opportunities to make choices are usually considered to be in the domain of a utility function. (If I'm wrong, educate me. I'd appreciate it.)
Nitpick: you put the values in utils, which should include risk-aversion. If you put the values in dollars or something, I would agree.
Pretty sure it's a standard result that people don't consistently assign utilities to levels of wealth.