Two or three months ago, my trip to Las Vegas made me ponder the following: if all gambles in the casinos have negative expected values, why do people still gamble - especially my friends who are fairly well-versed in probability and statistics?

Suffice it to say, I still have not answered that question. 

On the other hand, this did lead me to ponder whether rational behavior always involves making choices with the highest (or at least a positive) expected value - call this the Rationality-Expectation (R-E) hypothesis.

Here I'd like to offer some counterexamples that show R-E is clearly false, to me at least. (In hindsight, these look fairly trivial, but some commenters on this site speak as if maximizing expectation is somehow constitutive of rational decision making - as I used to. So it may be interesting for those people at the very least.)

  • Suppose someone offers you a (single trial) gamble A in which you stand to gain 100k dollars with probability 0.99 and stand to lose 100M dollars with probability 0.01. Even though the expectation is -901,000 dollars, you should still take the gamble since the probability of winning on a single trial is very high - 0.99 to be exact.
  • Suppose someone offers you a (single trial) gamble B in which you stand to lose 100k dollars with probability 0.99 and stand to gain 100M dollars with probability 0.01. Even though the expectation is +901,000 dollars, you should not take the gamble since the probability of losing on a single trial is very high - 0.99 to be exact. (See the quick calculation after this list.)
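
For concreteness, here is the expected-value arithmetic behind the two gambles, as a minimal Python sketch (the variable names are mine):

```python
# Expected values of the two single-trial gambles, in dollars
ev_a = 0.99 * 100_000 + 0.01 * (-100_000_000)   # gamble A: win 100k or lose 100M
ev_b = 0.99 * (-100_000) + 0.01 * 100_000_000   # gamble B: lose 100k or win 100M

print(round(ev_a), round(ev_b))   # -901000 901000
```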

A is a gamble that shows that a choice with negative expectation can sometimes lead to a net payoff.

B is a gamble that shows that a choice with positive expectation can sometimes lead to a net loss.

As I'm sure you've all noticed, expectation is only meaningful in decision-making when the number of trials can be large - or, more precisely, large enough relative to the variance of the random variable in question. This, I think, is in essence another way of looking at the Weak Law of Large Numbers.
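
To illustrate the point about trial counts, here is a small simulation sketch (plain Python, purely illustrative): a single play of gamble A almost always nets +100k, while the average over many plays approaches the negative expectation.

```python
import random

def play_gamble_a():
    """One play of gamble A: +100k with probability 0.99, -100M with probability 0.01."""
    return 100_000 if random.random() < 0.99 else -100_000_000

# A single trial usually looks great...
print(play_gamble_a())   # most runs print 100000

# ...but the average over many trials approaches the expectation of about -901000.
n = 1_000_000
print(sum(play_gamble_a() for _ in range(n)) / n)   # roughly -901000
```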

In general, most (all? few?) statistical concepts make sense only when we have trials numerous enough relative to the variance of the quantities in question.

This makes me ponder a deeper question, nonetheless.

Does it make sense to speak of probabilities only when you have numerous enough trials? Can we speak of probabilities for singular, non-repeating events?


Suppose someone offers you a (single trial) gamble A in which you stand to gain 100k dollars with probability 0.99 and stand to lose 100M dollars with probability 0.01. Even though the expectation is -901,000 dollars, you should still take the gamble since the probability of winning on a single trial is very high - 0.99 to be exact.

If I can find another 99 people as confused as you I'll be a rich man.

If I can find another 99 people as confused as you I'll be a rich man.

You would also need them to have $100M available to lose.

You would also need them to have $100M available to lose.

That is a weakness with my plan.

Oh well. Fold the plan into your back pocket and wait for hyperinflation.

Or just drop a few zeroes off of the numbers and do it now, as if you'd come up with the idea a couple hundred years ago and the inflation up to this point counts as 'hyper'.

AnlamK

Wow... Thank you for this charitable interpretation. I'll try to respond.

(1) You don't have to construe the gamble as some sort of coin flip. It could also be something like "the weather in Santa Clara, California on 20 September 2012 will be sunny" - i.e. a singular non-repeating event, in which case having 100 people (as confused as me) will not help you.

(2) I've specifically said that if you have enough trials to converge to the expectation (i.e. the point about the Weak Law of Large Numbers), then the point I'm making doesn't hold.

(3) Besides, suppose you have a gamble Z with negative expectation whose probability of a positive outcome is 1-x, for a very small x. I claim that for small enough x, everyone should take Z - despite the negative expectation.

What's your x, sunshine? If 0.01 isn't small enough, pick a suitably small x. Nick Bostrom, in his Pascal's Mugging piece, picks 1 in 10 quadrillion to demonstrate a very similar point. I picked 0.01 since I thought concrete values would demonstrate the point more clearly - it seems they have only been more confusing.

In fact, people take such gambles (with negative expectation but with high probability of winning) every day.

They fly on airplanes and drive to work.

(4) Besides, even if we construe the gamble as being repeated like a coin toss, note that 0.99^99 ≈ 0.37: you avoid the loss across 99 trials only about 37% of the time, i.e. you stand to lose 100M with probability about 0.63. I don't know about you, but I wouldn't risk 100M with those kinds of odds. It helps to be precise when you can and not to go with a heuristic like "on average there should be 1 win in every 100 trials"...

(1) You don't have to construe the gamble as some sort of coin flip. It could also be something like "the weather in Santa Clara, California on 20 September 2012 will be sunny" - i.e. a singular non-repeating event, in which case having 100 people (as confused as me) will not help you.

A coin flip is not fundamentally a less singular non-repeating event than the weather at a specific location and specific time. There are no true repeating events on a macro scale if you specify location and time. The relevant difference is how confident you can be that past events are good predictors of the probability of future events. Pretty confident for a coin toss, less so for weather. Note however that if your probability estimates are sufficiently accurate / well-calibrated you can make money by betting on lots of dissimilar events. See for example how insurance companies, hedge funds, professional sports bettors, bookies and banks make much of their income.

(3) Besides, suppose you have a gamble Z with negative expectation whose probability of a positive outcome is 1-x, for a very small x. I claim that for small enough x, everyone should take Z - despite the negative expectation.

'Small enough' here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be 'so small that it won't happen before the heat death of the universe' scale. I'm still not sure the argument works even in that case.

I believe there is a sense in which small probabilities can be said to also have an associated uncertainty not directly captured by the simple real number representing your best guess probability. I was involved in a discussion on this point here recently.

'Small enough' here would have to be very much smaller than 1 in 100 for this argument to begin to apply. It would have to be 'so small that it won't happen before the heat death of the universe' scale. I'm still not sure the argument works even in that case.

How small should x be? And if the argument does hold, are you going to have two different criteria for rational behavior - one for events where the probability of a positive outcome is 1-x and one for everything else?

And also, from Nick Bostrom's piece (formatting will be messed up):

Mugger: Good. Now we will do some maths. Let us say that the 10 livres that you have in your wallet are worth to you the equivalent of one happy day. Let’s call this quantity of good 1 Util. So I ask you to give up 1 Util. In return, I could promise to perform the magic tomorrow that will give you an extra 10 quadrillion happy days, i.e. 10 quadrillion Utils. Since you say there is a 1 in 10 quadrillion probability that I will fulfil my promise, this would be a fair deal. The expected Utility for you would be zero. But I feel generous this evening, and I will make you a better deal: If you hand me your wallet, I will perform magic that will give you an extra 1,000 quadrillion happy days of life. ... Pascal hands over his wallet [to the Mugger].

Of course, by your reasoning, you would hand over your wallet. Bravo.

Maximize expected utility, not expected money.

Your intuition in these examples that maximizing expected money is wrong arises because you do not value money linearly on that scale.

What exactly does maximizing expected utility yield in these particular cases?

For one, I could be convinced not to take A (0.01 could be too risky) but I would never take B.

I feel that if maximizing expected utility involves averaging the utilities of outcomes weighted by their probabilities, then it's going to suffer from similar difficulties.

What exactly does maximizing expected utility yield in these particular cases?

For one, I could be convinced not to take A (0.01 could be too risky) but I would never take B.

Depends on how much money you currently have. According to the simple logarithmic model, you should take gamble B if your net worth is at least $2.8M.
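
As a rough check on that figure, here is my own back-of-the-envelope sketch in Python (assuming utility equals the log of total net worth):

```python
import math

def take_gamble_b(net_worth):
    """Under log utility of total wealth, is taking gamble B better than declining it?"""
    expected_utility = (0.99 * math.log(net_worth - 100_000)
                        + 0.01 * math.log(net_worth + 100_000_000))
    return expected_utility > math.log(net_worth)

print(take_gamble_b(2_000_000))   # False - the 99% chance of losing 100k hurts too much
print(take_gamble_b(3_000_000))   # True  - the break-even point is roughly $2.8M
```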

neq1

In the first example, you couldn't play unless you had at least 100M dollars of assets. Why would someone with that much money risk 100M to win a measly 100K, when the expected payoff is so bad?

Yeah, uhm, I figured I'd misunderstood that, because my second hypothesis was that someone was trolling us. Looking at the poster's previous comments I'm more inclined to think that he just missed the whole 'Bayes is god' meme.

Sorry that talking about money led to confusion. The point I was making was the following. See my response to mattnewport, i.e.:

Suppose you have a gamble Z with negative expectation whose probability of a positive outcome is 1-x, for a very small x. I claim that for small enough x, everyone should take Z - despite the negative expectation.

Your implied point about expected utility is way off but...

Suppose someone offers you a (single trial) gamble A in which you stand to gain 100k dollars with probability 0.99 and stand to lose 100M dollars with probability 0.01. Even though the expectation is -901,000 dollars, you should still take the gamble since the probability of winning on a single trial is very high - 0.99 to be exact.

I would take it. I will probably gain $100k and if I lose $100M then I will just declare bankruptcy. This is approximately the decision banks make when they take irresponsible risks and can be expected to be bailed out by similarly irresponsible government.

Does it make sense to speak of probabilities only when you have numerous enough trials?

No, probability theory also has non-frequency applications.

Can we speak of probabilities for singular, non-repeating events?

Yes. This is the core of a Bayesian approach to decision making. The usual interpretation is that the probabilities reflect your state of knowledge about events rather than frequencies of actual event outcomes. Try starting with the LW wiki article on Bayesian probability and the blog posts linked therefrom.

Assigning a non-repeating event the probability P means that, for a well-calibrated agent, if you assign 100 different things this probability then about 100 * P of them will actually occur. I believe this is a standard interpretation of Bayesian probability, and it puts things in terms of frequencies of actual event outcomes.

ETA: Alternatively, one may think of Bayesian probability as the answer to the question "if I believed this statement, in what fraction P of all plausible worlds in which I ended up with this information would I be correct?"
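
A toy sketch of that calibration reading (plain Python, entirely illustrative): if a well-calibrated agent assigns probability 0.2 to each of many unrelated one-off events, roughly 20% of them should come true.

```python
import random

random.seed(0)
p = 0.2        # probability assigned to each of many unrelated one-off events
n = 10_000     # toy stand-in for 'many such events'

# Model a well-calibrated agent: each event independently occurs with probability p.
occurred = sum(random.random() < p for _ in range(n))
print(occurred / n)   # close to 0.2
```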

I have to disagree with this interpretation. The whole point is that the frequency interpretation of probability can be a specific case of the Bayesian (probability = belief) interpretation, but not vice versa.

If I say I believe in the existence of aliens with 0.2 belief, I think it's non-intuitive and unrealistic that what I'm really saying is, "I think aliens exist in 20% of all plausible worlds". Apart from the difficulty of clearly defining 'plausible', the point of Bayesianism is that this simply represents my state of knowledge/belief.

I find Bayesian probability to be meaningless unless you connect it to a pseudo-frequentist interpretation somehow. Sure, you can say "Bayesian probability measures my subjective belief in something", but in that case, what does having a 20% subjective belief in something actually mean, and how's it different from having an 80% subjective belief? You need some scheme of translating it from a meaningless number to an expectation, and all such translations (e.g. in terms of betting behavior) always end up being pseudo-frequentist somehow.

Jack

The traditional way of defining the degree of a belief held by some agent is by finding what the agent thinks is a fair wager on the proposition. Is that pseudo-frequentist in a way I'm not seeing?

Obviously, this needs more discussion but the kind of thought I was trying to motivate was the following:

How is saying that a non-repeating singular event has a very small probability of occurring different from saying that it will not happen?

This was motivated by the lottery paradox - questions like: when you buy a lottery ticket, you don't believe you will win, so why are you buying it?

Examples like these sort of pull my intuitions towards thinking no, it doesn't make sense to speak of probabilities for certain events.

The whole nonlinear utility thing makes this specific point wrong, but:

It seems like the main counter-intuitive part of expected utility theory (or counter-expected utility theory part of intuition) is just this type of question. See: Pascal's Mugging.

Humans tend to be loath to trade off high probabilities of small benefits for low probabilities of big benefits, even in cases where linearity is very plausible, such as the number of people saved.

But people seem to just as often make the opposite mistake about various scary risks.

Are people just bad at dealing with small probabilities?

What does that mean for coming to a reflective equilibrium about ethics?

Are people just bad at dealing with small probabilities?

It seems like a reasonable heuristic that small probabilities are also likely to be uncertain probabilities (due to being associated with rare events and therefore limited numbers of observations). This may explain some of the apparent paradoxes around how people deal with low probability events but I'd have to think a bit more about what it implies.

uncertain probabilities

Although I probably agree with your point, the chosen formulation is weird. The uncertainty is hidden in the probability; "uncertain probabilities" is something of a pleonasm. I like this comment, especially:

The frequency with which a coin comes up heads isn't a probability, no matter how much it looks like one. This is what's going wrong in the heads of people who say things like "The probability is either 1 or 0, but I don't know which."

Although I probably agree with your point, the chosen formulation is weird. The uncertainty is hidden in the probability; "uncertain probabilities" is something of a pleonasm.

I did spend some time thinking about exactly what this means after writing it. It seems to me there is a meaningful sense in which probabilities can be more or less uncertain and I haven't seen it well dealt with by discussions of probability here. If I have a coin which I have run various tests on and convinced myself it is fair then I am fairly certain the probability of it coming up heads is 0.5. I think the probability of the Republicans gaining control of Congress in November is 0.7 but I am less certain about this probability. I think this uncertainty reflects some meaningful property of my state of knowledge.

I tentatively think that this sense of 'certainty' reflects something about the level of confidence I have in the models of the world from which these probabilities derive. It also possibly reflects something about my sense of what fraction of all the non-negligibly relevant information that exists I have actually used to reach my estimate. Another possible interpretation of this sense of certainty is a probability estimate for how likely I am to encounter information in the future which would significantly change my current probability estimate. A probability I am certain about is one I expect to be robust to the kinds of sensory input I think I might encounter in the future.

This sense of how certain or uncertain a probability is may have no place in a perfect Bayesian reasoner but I think it is meaningful information to consider as a human making decisions under uncertainty. In the context of the original comment, low probabilities are associated with rare events and as such are the kinds of thing we might expect to have a very incomplete model of or a very sparse sampling of relevant data for. They are probabilities which we might expect to easily double or halve in response to the acquisition of a relatively small amount of new sensory data.

Perhaps it's as simple as how much you update when someone offers to make a bet with you. If you suspect your model is incomplete or you lack much of the relevant data then someone offering to make a bet with you will make you suspect they know something you don't and so update your estimate significantly.

It seems to me there is a meaningful sense in which probabilities can be more or less uncertain

Here's another example. Suppose you're drawing balls from a large bin. You know the bin has red and white balls, but you don't know how many there are of each.

After drawing two balls, you have one white and one red ball.

After drawing 100,000 balls, you have 50,000 white and 50,000 red balls.

In both cases you might assign a probability of .5 for drawing a white ball next, but it seems like in the n = 100,000 case you should be more certain of this probability than in the n = 2 case.

One could try to account for this by adding an extra criterion that specifies whether or not you expect your probability estimate to change. E.g. in the n = 100,000 case you're .5 certain of drawing a white ball next, and .99 certain of this estimate not changing regardless of how many more trials you conduct. In the n = 2 case you would still be .5 certain of drawing a white ball next, but only .05 (or whatever your prior) certain of this being the probability you'll eventually end up converging on.
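
A minimal sketch of how much more 'certain' the 0.5 is in the large-n case (plain Python; the uniform Beta(1, 1) prior and the independence assumption are mine):

```python
import math

def posterior_sd(white, red):
    """Standard deviation of the Beta(white+1, red+1) posterior over the white-ball
    fraction, assuming a uniform prior and independent draws from a large bin."""
    a, b = white + 1, red + 1
    return math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

print(posterior_sd(1, 1))            # ~0.22   - the 0.5 estimate is very uncertain
print(posterior_sd(50_000, 50_000))  # ~0.0016 - the 0.5 estimate is pinned down
```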

This is the approach taken in Probabilistic Logic Networks, which uses 'indefinite probabilities' of the form <[L,U], b, k>. This stands roughly for "I assign a probability of b to the hypothesis that, after having observed k more pieces of evidence, the truth value I assign to S will lie in the interval [L, U]".

Yes. I think this sense of how 'certain' I am about a probability probably corresponds to some larger scale property of a Bayesian network (some measure of how robust a particular probability is to new input data) but for humans using math to help with reasoning it might well be useful to have a more direct way of working with this concept.

This is also a problem I have thought about a bit. I plan to think about it more, organize my thoughts, and hopefully make a post about it soon, but in the meantime I'll sketch my ideas. (It's unfortunate that this comment appeared in a post that was so severely downvoted, as fewer people are likely to think about it now.)

There is no sense in which an absolute probability can be uncertain. Given our priors, and the data we have, Bayes' rule can only give one answer.

However, there is a sense in which conditional probability can be uncertain. Since all probabilities in reality are conditional (at the very least, we have to condition on our thought process making any sense at all), it will be quite common in practice to feel uncertain about a probability, and to be well-justified in doing so.

Let me illustrate with the coin example. When I say that the next flip has a 50% chance of coming up heads, what I really mean is that the coin will come up heads in half of all universes that I can imagine (weighted by likelihood of occurrence) that are consistent with my observations so far.

However, we also have an estimate of another quantity, namely 'the probability that the coin comes up heads' (generically). I'm going to call this the weight of the coin since that is the colloquial term. When we say that we are 50% confident that the coin comes up heads (and that we have a high degree of confidence in our estimate), we really mean that we believe that the distribution over the weight of the coin is tightly concentrated about one-half. This will be the case after 10,000 flips, but not after 5 flips. (In fact after N heads and N tails, a weight of x has probability proportional to [x(1-x)]^N.)
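
(To put a rough number on that concentration - my own addition, assuming a uniform prior: the posterior above is a Beta(N+1, N+1) distribution, whose standard deviation is 1/(2*sqrt(2N+3)) - about 0.14 after 5 heads and 5 tails, but about 0.005 after 5,000 of each. That is the sense in which the 50% estimate gets 'tight' with more flips.)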

What is important to realize is that the statement 'the coin will come up heads with probability 50%' means 'I believe that in half of all conceivable universes the coin will come up heads', whereas 'I am 90% confident that the coin will come up heads with probability 50%' means something more along the lines of 'I believe that in 90% of all conceivable universes my models predict a 50% chance of heads'. But there is also the difference that in the second statement, the '90% of all conceivable universes' only actually specifies them up to the extent that our models need in order to take over.

I think that this is similar to what humans do when they express confidence in a probability. However, there is an important difference, as in the previous case my 'confidence in a probability' corresponded to some hidden parameter that dictated the results of the coin under repeated trials. The hidden parameter in most real-world situations is far less clear, and we also don't usually get to see repeated trials (I don't think this should matter, but unfortunately my intuition is frequentist).

This sense of how certain or uncertain a probability is may have no place in a perfect Bayesian reasoner but I think it is meaningful information to consider as a human making decisions under uncertainty.

I don't think the key issue is the imperfect Bayesianism of humans. I suppose that the certainty of a probability being discussed has a lot to do with its dependence on priors - the more sensitive the probability is to a change in priors we find arbitrary, the less certain it feels. Priors themselves feel the most uncertain, while probabilities obtained from evidence-based calculations, especially quasi-frequentist probabilities such as P(heads on next flip), depend on many priors, and a change in any single prior doesn't move them too far. Perfect Bayesians may not have the feeling, but they still have priors.

Sensitivity to priors is the same as sensitivity to new evidence. And when we're sensitive to new evidence, our estimates are likely to change, which is another reason they're uncertain.

The reason this phenomenon occurs is that we are uncertain about some fundamental frequency (or about a model more complex than a simple frequency model), and P(heads | frequency of heads is x) = x.

I think there's something to what you say, but for a perfect Bayesian (or an imperfect human, for that matter) it is conditional probabilities all the way down. When we talk about our priors regarding a particular question they are really just the output of another chain of reasoning. The boundaries we draw to make discussion feasible are somewhat arbitrary (though they would probably reflect specific mathematical properties of the underlying network for a perfect Bayesian reasoner).

Do you think the chain of reasoning is infinite? For actual humans there is certainly some boundary below which a prior no longer feels like the output of further computation, although such beliefs could have been influenced by earlier observations either subconsciously, or consciously with the fact forgotten later. Especially in the former case, I think the reasoning leading to such beliefs is very likely to be flawed, so it seems fair to treat them as genuine priors, even if, strictly speaking, they were physically influenced by evidence.

A perfect Bayesian, on the other hand, should be immune to flawed reasoning, but it still has to be finite, so I suppose it must have some genuine priors which are part of its immutable hardware. I imagine it by analogy with formal systems, which have a finite set of axioms (or an infinite set defined by a finite set of conditions), a finite set of derivation rules, and a set of theorems consisting of axioms and derived statements. For a Bayesian, the axioms are replaced by statements with associated priors, Bayes' theorem is among the derivation rules, and instead of a set of theorems it has a set of encountered statements with attached probabilities. Possible issues are:

  • If such formal construction is possible, there should be a lot of literature about it, and I am unaware of any (but I didn't try to find too hard), and
  • I am not sure whether such an approach isn't obsolete in the light of discussions about updateless decision theories and similar stuff.

Do you think the chain of reasoning is infinite?

Not infinite but for humans all priors (or their non-strict-Bayesian equivalent at least) ultimately derive either from sensory input over the individual's lifetime or from millions of years of evolution baking in some 'hard-coded' priors to the human brain.

When dealing with any particular question you essentially draw a somewhat arbitrary line, lump millions of years of accumulated sensory input and evolutionary 'learning' together with a lifetime of actual learning, assign a single real number to it and call it a 'prior' - but this is just a way of making calculation tractable.

It seems like a reasonable heuristic that small probabilities are also likely to be uncertain probabilities (due to being associated with rare events and therefore limited numbers of observations).

The occurrence of very low probability events is also indicative of unaccounted for structural uncertainty. Taking into account both where I find myself in the multiverse as well as thinking seriously about anthropic reasoning led to me being really confused (and I still am, but less so). I think it was good that I became confused and didn't just think "Oh, according to my model, a really low probability event just happened to me, how cool is that?" It wouldn't surprise me all that much if there was a basic evolutionary adaptation not to trust one's models after heavily unanticipated events, and this may generalize to being distrustful of small probabilities in general. (But I'm postulating an evolutionary adaptation for rationality based on almost no evidence, which is most often a byproduct of thinking "What would I do if I was evolution?", which is quite the fallacy.)

What does that mean for coming to a reflective equilibrium about ethics?

Are you talking about CEV? Civilization as we know it will end long before people agree about metaethics.

Before CEV, we have to do a rough estimate of our personal extrapolated volition so we know what to do. One way to do this is to extrapolate our volition as far as we can see by, e.g., thinking about ethics.

I intuitively feel that X is good and Y is bad. I believe morality will mostly fit my intuitions. I believe morality will be simple. I know my intuitions, in this case, are pretty stupid. I can't find a simple system that fits my intuitions here. What should I do? How much should I suck up and take the counterintuitiveness? How much should I suck up and take complex morality?

These are difficult questions.

The nonlinear utility of money?

Well, the point I was trying to make was supposed to be abstract and general. Nick Bostrom's Pascal's Mugging piece argues for a very similar (if not identical) point. Thanks for letting me know about this.

And yes, I'm bad at dealing with small probabilities. I feel that these evoke some philosophical questions about the nature of probability in general - or whatever we talk about when we talk about probabilities.

ata

Does it make sense to speak of probabilities only when you have numerous enough trials?

No, the math of probability theory still works if you take probabilities as subjective degrees of belief. That is the foundation of Bayesianity, but even the frequency interpretation depends on subjective ignorance — if you had full knowledge of all information influencing the outcome of a given trial, you wouldn't be doing the trial, because you could predict the result. It depends on isolating certain causal factors and mind-projecting them as "random variables". In reality, they're not random — you just don't know what they are — and you can talk about your degree of knowledge about the result of 1 trial just as well as 1,000,000 trials.

Gamblers are maximizing expected utility, not expected cash. That is all.

It's not all. Pramipexole and other dopamine agonist medications can cause compulsive gambling in previous non-gamblers as a side effect. That makes me think that the thrill of gambling has something to do with the dopamine system and the design of the human risk/reward system, and that compulsive gambling probably has some kind of organic cause that you couldn't find in the pure mathematics of expected utility.

Or they're just irrational.

I find it useful when trying to understand the behaviour of other human beings to start out by assuming that they are basically (imperfectly) rational but may have different values from me. It invokes less of a warm glow of smug superiority but generally leads to more accurate predictions.

I find it useful when trying to understand the behaviour of other human beings to start out by assuming that they are basically (imperfectly) rational but may have different values from me.

So do I. I then look at the evidence and discover they're just irrational.

Seriously, most people don't lose hundreds or thousands of dollars in a few hours at a casino just for the enjoyment. They want money and they expect to win some.

Seriously, most people don't lose hundreds or thousands of dollars in a few hours at a casino just for the enjoyment. They want money and they expect to win some.

mattnewport was talking about gamblers, you're talking about the (small?) subset of irrational gamblers.

The real question can be solved by empiricism; anyone heading to Vegas soon and willing to do a survey? Ask: A) Do you believe that you will leave the casino with more money than you started? B) If you don't leave the casino richer, do you expect the experience to be satisfying anyway? (Except do a better job of optimizing the questions for clarity.) Ask a few hundred people, get some free drinks from the casinos, publish your results in an economics journal or a cognitive biases journal, present your findings to Less Wrong, get karma, die happy.

Hey, I'll do the survey on me:

A: Yes. Of course, if I do go to Vegas soon, that's a fait accompli (I bet on the Padres to win the NL and the Reds to win the World Series, among other bets.)

But in general, yes. I expect to win on the bets I place. I go to Las Vegas with my wife to play in the sun and see shows and enjoy the vibe, but I go one week a year by myself to win cash money.

B. If I come back a loser, the experience can still be OK. But I'm betting sports and playing poker, and I expect to win, so it's not quite so fun to lose. That said, even a light gambling win - not enough to pay for the hotel, say, leaving me down once expenses are considered - gives me enough hedons to incentivize coming back.

--JRM

ata

If you don't leave the casino richer, do you expect the experience to be satisfying anyway?

Even if you're optimizing for enjoyment and satisfaction and fun, gambling isn't necessarily a great way to do that. Another good question to ask subjects who answer "yes" to questions A and B would be "How much money would you be willing to lose at the casino before that starts to outweigh your enjoyment of the experience?" or "How much money would you be willing to lose at the casino before you'd regret not choosing something that is (in your estimation) a more cost-effective route to the same amount of enjoyment?"

Those are good questions, and on Less Wrong I wouldn't be hesitant to ask them, but I figured they'd be beyond the ability of the average person to really think about. In my experience getting people to fill out surveys, they easily get indignant and frustrated when they can't understand a question or, perhaps more importantly, the possible motives behind the question. ("Is he trying to make me look like a fool? What an ass, trying to get status over me with his nerdy smarts!") Even if they did understand the question, I'd doubt their answer would be at all reflectively consistent; significantly less so than the answers to the other two questions.

Taking into account my other comment, I think that perhaps it'd be best to ask the less informative but much simpler question "How much money have you set aside for gambling today?" before the other two questions.

Most people at casinos are not problem gamblers, just as most people who drink are not problem drinkers. I know plenty of people (myself included) who gamble on occasion for fun but understand the odds.

ata

More importantly, "x is being irrational" can be a fake explanation if it's given without further detail. Much better to point to a specific fallacy or bias that would explain their behaviour.

In this particular case, though, how is it a matter of "different values"? Would anybody participate in casino-style gambling if they were better at thinking about probabilities and utilities?

I have gambled in a casino or the like exactly once in my adult life, when on a cruise I had a quarter, 25 cents, which I did not wish to carry around with me for the rest of the week. So I decided to "try my luck" at the quarter-push machine in the casino. I did not win anything, but being able to tell that story was worth every penny.

I think it's hard to enjoy gambling if you are sure you'll lose money, which is how I feel. I may be over-pessimistic.

In American (double-zero) roulette, a bet on Red or Black pays even money, but the odds against you are about 1.111 to 1 (20/38 vs. 18/38), for an expectation of about -0.053 on the dollar. So I may be over-pessimistic. See the wiki entry.

I think it's hard to enjoy gambling if you are sure you'll lose money, which is how I feel. I may be over-pessimistic.

Typical Mind Fallacy.

Don't get over-excited. You are still losing money in a less than fair-odds situation.

And since most people don't stop gambling until they have some deficit from gambling, casinos usually make more than the odds give them.

Thanks, I already knew about this.

Related is also Martingale gambling.

Neato! Worth reading!

Suppose someone offers you a (single trial) gamble C in which you stand to gain a nickel with probability 0.95 and stand to lose an arm and a leg with probability 0.05. Even though the expectation is (-0.05 arm - 0.05 leg + 0.95 nickel), you should still take the gamble since the probability of winning on a single trial is very high - 0.95 to be exact.

Non-sarcastic version: Losing $100M is much worse than gaining $100K is good, regardless of utility of money being nonlinear. This is something you must consider, rather than looking at just the probabilities - so you shouldn't take gamble A. This is easier to see if you formulate the problems with gains and losses you can actually visualize.

Is the problem that 0.01 or 0.05 is too high?

Take a smaller value then.

In fact, people take such gambles (with negative expectation but with high probability of winning) every day.

They fly on airplanes and drive to work.

In fact, people take such gambles (with negative expectation but with high probability of winning) every day.

They fly on airplanes and drive to work.

In our world people do not place infinite value on their own lives.

There is nothing in what I wrote that implies people value their lives infinitely. People just need to value their lives highly enough such that flying on an airplane (with its probability of crashing) has a negative expected value.

Again, from Nick Bostrom's article:

"Pascal: I must confess: I’ve been having doubts about the mathematics of infinity. Infinite values lead to many strange conclusions and paradoxes. You know the reasoning that has come to be known as ‘Pascal’s Wager’? Between you and me, some of the critiques I’ve seen have made me wonder whether I might not be somehow confused about infinities or about the existence of infinite values . . .

Mugger: I assure you, my powers are strictly finite. The offer before you does not involve infinite values in any way. But now I really must be off; I have an assignation in the Seventh Dimension that I’d rather not miss. Your wallet, please!"

There is nothing in what I wrote that implies people value their lives infinitely. People just need to value their lives highly enough such that flying on an airplane (with its probability of crashing) has a negative expected value.

Yes, that is the point.

Your claim that people flying on planes are engaging in an activity that has negative expected value flatly contradicts standard economic analysis, yet you provide no supporting evidence to justify such a wildly controversial position. The only way your claim could be true in general would be if humans placed infinite value on their own lives. Otherwise it depends on the details of why they are flying, what value they expect to gain if they arrive safely, and the actual probability of a fatal incident.

Since you didn't mention in your original post under what circumstances your claim holds true, you did imply that you were making a general claim, and thus further implied that people value their lives infinitely.

You can't have your cake and eat it too. If the probability is low enough, or the penalty mild enough, that the rational action is to take the gamble, then necessarily the expected utility will be positive.

Taking your driving example, if I evaluate a day of work as 100 utilons, my life as 10MU, and estimate the probability of dying while driving to work as 1/M, then driving to work has an expected gain of 90U.
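
(Spelling out that arithmetic, restating the comment's numbers: expected gain = (1 - 10^-6) * 100U + 10^-6 * (-10MU) ≈ 100U - 10U = 90U.)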

As others have said, maximize expected utility, not expected dollars. Money being roughly logarithmic in value works pretty well, and the common advice is to pick gambles that maximize your expected log-net-worth.

For a more specific recommendation, see http://en.wikipedia.org/wiki/Kelly_criterion
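
For a simple binary bet, the Kelly fraction behind that 'maximize expected log-net-worth' advice works out as follows (a minimal Python sketch, not from the thread):

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to wager on a bet won with probability p at net odds b
    (gain b per unit staked on a win, lose the stake otherwise).
    A negative result means the bet has a negative edge: wager nothing."""
    return (b * p - (1 - p)) / b

print(round(kelly_fraction(0.6, 1.0), 3))      # 0.2 - bet 20% of bankroll on a 60% even-money bet
print(round(kelly_fraction(18 / 38, 1.0), 3))  # -0.053 - roulette red/black has negative edge: don't bet
```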

As to your final question, the answer is "yes". Probability can be applied to any unknown. A good description is in the middle of the quantum mechanics sequence: http://lesswrong.com/lw/oj/probability_is_in_the_mind/