mattnewport comments on Is Rationality Maximization of Expected Value? - Less Wrong

-23 Post author: AnlamK 22 September 2010 11:16PM




Comment author: mattnewport 28 September 2010 06:13:32AM  0 points

(1) You don't have to construe the gamble as some sort of coin flip. It could also be something like "the weather in Santa Clara, California on 20 September 2012 will be sunny" - i.e. a singular non-repeating event, in which case having 100 people (as confused as me) will not help you.

A coin flip is not fundamentally a less singular, non-repeating event than the weather at a specific location and time. There are no true repeating events on a macro scale if you specify location and time. The relevant difference is how confident you can be that past events are good predictors of the probability of future events - pretty confident for a coin toss, less so for weather. Note, however, that if your probability estimates are sufficiently accurate / well-calibrated you can make money by betting on lots of dissimilar events. See for example how insurance companies, hedge funds, professional sports bettors, bookies and banks make much of their income.
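A minimal simulation can illustrate the point (the events, probabilities, and odds here are invented for illustration, not taken from any real bookmaker): a bettor whose probability estimates are well-calibrated comes out ahead across many dissimilar one-off events, even though no individual event ever repeats.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

def simulate(n_events=100_000, edge=0.05):
    """Bet 1 unit on each of many dissimilar one-off events.

    Assumes the bettor's probability estimates are perfectly calibrated
    and the (hypothetical) offered odds pay 5% better than fair value.
    """
    bankroll = 0.0
    for _ in range(n_events):
        p = random.uniform(0.01, 0.99)   # true chance of this one-off event
        payout = (1.0 + edge) / p        # decimal odds with a 5% edge to the bettor
        # Per-bet expected value: p * payout - 1 = edge > 0
        if random.random() < p:
            bankroll += payout - 1.0     # win: collect payout minus the 1-unit stake
        else:
            bankroll -= 1.0              # lose: forfeit the stake
    return bankroll

profit = simulate()
print(profit)  # reliably positive over many dissimilar bets
```

Each bet is a different "singular" event, yet calibration plus a positive edge is enough for the aggregate to be profitable - which is the sense in which insurers and bookies profit from non-repeating events.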

(3) Besides, suppose you have a gamble Z with negative expectation whose probability of a positive outcome is 1-x, for a very small x. I claim that for small enough x, everyone should take Z - despite the negative expectation.

'Small enough' here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be 'so small that it won't happen before the heat death of the universe' scale. I'm still not sure the argument works even in that case.

I believe there is a sense in which small probabilities can be said to also have an associated uncertainty not directly captured by the simple real number representing your best guess probability. I was involved in a discussion on this point here recently.
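One way to make that "uncertainty about a probability" concrete (my own illustration, not taken from the linked discussion) is to represent a belief as a distribution over probabilities rather than a single real number - for instance a Beta distribution, whose spread reflects how well-grounded the estimate is:

```python
def beta_mean_sd(a, b):
    """Mean and standard deviation of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var ** 0.5

# Two beliefs with the same best-guess probability of 1%:
# one backed by lots of effective evidence (coin-toss-like),
# one backed by very little (a speculative one-off event).
well_grounded = beta_mean_sd(100, 9900)
speculative = beta_mean_sd(0.1, 9.9)

print(well_grounded)  # mean 0.01, small spread
print(speculative)    # mean 0.01, roughly 30x the spread
```

Both beliefs report "1%" as the point estimate, but the second carries far more second-order uncertainty - the distinction the simple real number fails to capture.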

Comment author: AnlamK 28 September 2010 09:48:06AM  0 points

'Small enough' here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be 'so small that it won't happen before the heat death of the universe' scale. I'm still not sure the argument works even in that case.

How small should x be? And if the argument does hold, are you going to have two different criteria for rational behavior - one for events where the probability of a positive outcome is 1-x, and one for events where it isn't?

And also, from Nick Bostrom's piece (formatting will be messed up):

Mugger: Good. Now we will do some maths. Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfil my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life. ... Pascal hands over his wallet [to the Mugger].

Of course, by your reasoning, you would hand over your wallet. Bravo.
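The mugger's arithmetic in the quoted passage is easy to check with exact rational numbers (utils and probabilities exactly as Bostrom states them):

```python
from fractions import Fraction

p = Fraction(1, 10**16)   # Pascal's stated probability: 1 in 10 quadrillion
stake = 1                 # the wallet (10 livres) is worth 1 Util

ev_fair = p * 10**16 - stake    # the "fair deal": 10 quadrillion Utils promised
ev_better = p * 10**18 - stake  # the "better deal": 1,000 quadrillion Utils promised

print(ev_fair)    # 0  -> exactly break-even, as the mugger claims
print(ev_better)  # 99 -> 99 Utils of positive expectation
```

So the better deal has strictly positive expected utility, which is exactly what makes it a test case for pure expected-value maximization.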