
shminux comments on Zeckhauser's roulette - Less Wrong Discussion

11 Post author: cousin_it 19 January 2012 07:22PM


Comment author: shminux 19 January 2012 09:57:03PM * 2 points

A really bad example, since they didn't tell you how much your life is worth to you.

I value my life high enough to pay ALL I HAVE (and try to borrow some) to increase my survival odds from 66% to 100% or from 33% to 50%. (If you don't value your life high enough, substitute your child's life for yours in the question.) The only time actually estimating cost comes into play is when the risk change is small enough to be close to the noise level. For example, deciding whether to pay more for a safer car, because the improved collision survival odds increase your life expectancy by 1 day (I made up the number, not sure what the real value is).

Now frame the question in money instead: "Q1: you have a 66% chance of winning $1000, how much would you pay to increase it to 100%?" vs. "Q2: you have a 33% chance of winning $1000, how much would you pay to increase it to 50%?" In this framing your life is worth $1000, a number small enough to be affordable. The answer is clear: your expected win increases by 33 percentage points in Q1 and by half that in Q2, so you should pay $333 or less in Q1 and $166 or less in Q2.
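The arithmetic above can be sketched in a few lines (the $1000 prize and the probabilities come from the rephrased Q1/Q2; the function name is mine):

```python
# Maximum rational payment = increase in expected winnings.
PRIZE = 1000  # prize from the rephrased Q1/Q2

def max_payment(p_before, p_after, prize=PRIZE):
    """Most you should pay to move the win probability from p_before to p_after."""
    return (p_after - p_before) * prize

q1 = max_payment(2/3, 1.0)   # Q1: 66% -> 100%
q2 = max_payment(1/3, 1/2)   # Q2: 33% -> 50%
print(f"Q1: pay up to ${q1:.0f}")   # Q1: pay up to $333
print(f"Q2: pay up to ${q2:.0f}")   # Q2: pay up to $167
```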

So, where does the author go wrong?

> Question A: You’re playing with a six-shooter that contains two bullets. How much would you pay to remove them both? (This is the same as Question 1.)

> Question B: You’re playing with a three-shooter that contains one bullet. How much would you pay to remove that bullet?

> Question C: There’s a 50% chance you’ll be summarily executed and a 50% chance you’ll be forced to play Russian roulette with a three-shooter containing one bullet. How much would you pay to remove that bullet?

QA: 66%->100%. QB: 66%->100%. QC: 33%->50%. The author's logic, "In Question C, half the time you’re dead anyway. The other half the time you’re right back in Question B. So surely questions C and B should have the same answer.", breaks down because they mix finite costs ($1000) in this case with infinite ones ("dead anyway", i.e. infinite loss), leading to a contradiction.
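The probability bookkeeping in the paragraph above can be checked in a few lines (a sketch; the fractions come straight from the bullet counts in the three questions):

```python
# Survival-probability gain bought by the payment in each question.
def gain(p_before, p_after):
    return p_after - p_before

gain_A = gain(4/6, 6/6)        # Question A: remove 2 bullets from 6 chambers
gain_B = gain(2/3, 3/3)        # Question B: remove 1 bullet from 3 chambers
gain_C = 0.5 * gain(2/3, 3/3)  # Question C: coin flip first, then Question B

assert abs(gain_A - gain_B) < 1e-12      # A and B buy the same 1/3
assert abs(gain_C - gain_A / 2) < 1e-12  # C buys exactly half as much
```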

Comment author: DanielLC 19 January 2012 11:52:07PM 2 points

> A really bad example, since they didn't tell you how much your life is worth to you.

It only changes what you'd pay proportionately, so it wouldn't make a difference.

The real problem is that they didn't tell you how much you're capable of paying. Let's assume you can pay an infinite amount. Perhaps they torture you for a period of time.

> So, where does the author go wrong?

> provided that you don't have heirs and all your remaining money magically disappears when you die.

Your money is only valuable if you survive. Think of it as them reducing your winnings. It doesn't matter if you don't win. In that case, you should be willing to have them reduce it by $333 in either case.
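This framing can be sketched with the hypothetical $1000 prize from the money version of the question: instead of paying up front, suppose the prize is reduced by some amount r, so the cost is only felt in the worlds where you win (`indifference_reduction` is my name for the break-even calculation):

```python
PRIZE = 1000  # hypothetical prize carried over from the money framing

def indifference_reduction(p_before, p_after, prize=PRIZE):
    """Prize reduction r such that p_after * (prize - r) == p_before * prize."""
    return prize - p_before * prize / p_after

r_B = indifference_reduction(2/3, 1.0)   # Question B: 2/3 -> 1
r_C = indifference_reduction(1/3, 1/2)   # Question C: 1/3 -> 1/2
# Both come out to $333.33: B and C get the same answer, as argued above.
```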

> because they mix finite costs ($1000) in this case with infinite ones ("dead anyway", i.e. infinite loss)

If your utility function works like this, you can just abandon the finite part. It's effectively impossible for it to come up, and it's not really worth thinking about.

Also, you seemed to imply that it was a finite (though high) cost earlier.

> The only time actually estimating cost comes into play is when the risk change is small enough to be close to the noise level.

Why would noise level matter?

Comment author: shminux 20 January 2012 12:24:40AM 0 points

> A really bad example, since they didn't tell you how much your life is worth to you.

> It only changes what you'd pay proportionately, so it wouldn't make a difference.

No, because, as you say:

> The real problem is that they didn't tell you how much you're capable of paying.

I implied the same ("pay ALL I HAVE (and try to borrow some)"), if maybe not as succinctly.

> Your money is only valuable if you survive. Think of it as them reducing your winnings. It doesn't matter if you don't win. In that case, you should be willing to have them reduce it by $333 in either case.

> all your remaining money magically disappears when you die.

Right, I ignored this last condition, which breaks the assumption of "your life is worth $1000" if you have more than that in your bank account. However, in that case there is no way to limit your bet, and the problem becomes meaningless:

> If your utility function works like this, you can just abandon the finite part. It's effectively impossible for it to come up, and it's not really worth thinking about.

It's not mine, it's theirs (you lose everything you own, no matter how much), which supports my point that the problem is badly stated.

Comment author: orthonormal 19 January 2012 10:41:05PM -1 points

If dying were an infinite loss to you, you'd never drive an extra mile just to save money on buying something (let alone all the other small risks you take).

Comment author: jmmcd 19 January 2012 10:57:45PM 2 points

I think this answers your comment:

> The only time actually estimating cost comes into play is when the risk change is small enough to be close to the noise level. For example, deciding whether to pay more for a safer car, because the improved collision survival odds increase your life expectancy by 1 day (I made up the number, not sure what the real value is).

Comment author: shminux 19 January 2012 10:47:58PM * 0 points

Not to me, to the author of the question. This is the contradiction I am pointing out: a failure to put a value on the 50% chance of death in Question C.