As Wei Dai has said, arguing about which probability is "right" is futile until you have fixed the decision theory and goals that will actually use those probabilities to act. In most use-cases of probability theory, such issues don't come up.
In Sleeping Beauty, you are in a situation where such considerations do matter.
If we further specify that Sleeping Beauty can make a bet and (if she wins) will get the money straight away on Monday, be allowed to spend it immediately on a chocolate bar, and then (if the coin came up tails) be put to sleep again, woken up on Tuesday, given the same money again, and allowed to eat another chocolate bar, then she will do best by saying that the probability of tails is 2/3.
But if we specify that the money will be put into an account (and she will only be paid for one win) that she can spend after the experiment is over, which is next week, then she will find that 1/2 is the "right" answer.
In the Sleeping Beauty problem, whether 2/3 or 1/2 is "right" is just a debate about words. The real issue is what kind of many-instance decision algorithm you are running.
EDIT: Another way of putting this would be to simply abandon the concept of probability altogether and use something like UDT. Probability theory doesn't work in cases where you have multiple instances of your decision algorithm running.
Add a payoff and the answer becomes clear, and it also becomes clear that the answer depends entirely on how the payoff works.
Without a payoff, this is a semantics problem revolving around the ill-defined concept of expectation, and the debate will continue to circle it endlessly.
There is no payoff involved. Introducing a payoff only confuses matters.
I define subjective probability in terms of what wagers I would be willing to make. I think a good rule of thumb is that if you can't figure out how to turn the problem into a wager you don't know what you're asking. And, in fact, when we introduce payoffs to this problem it becomes extremely clear why we get two answers. The debate then becomes a definition debate over what wager we mean by the sentence "what credence should the patient assign..."
This is one of those cases where we need to disentangle the dispute over definitions, forget about the notion of subjective anticipation, list the well-defined questions, and ask which one we mean.
If by the probability we mean the fraction of waking moments, the answer is 1/3.
If by the probability we mean the fraction of branches, the answer is 1/2.
As Wei Dai and Roko have observed, that depends on why you're asking in the first place. Probability estimates should pay rent in correct decisions. If you're making a bet that will pay off once at the end of the experiment, you should count the fraction of branches. If you're making a bet that will pay off once per wake-up call, you should count the fraction of wake-up calls.
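As a quick illustration of those two counting rules, here is a small simulation (my own sketch; the function name and structure are mine, not anything from the thread). It tallies the tails fraction per experiment ("branches") and per awakening ("wake-up calls"):

```python
import random

def fractions(trials=100_000):
    """Compare tails frequency per experiment vs. per awakening."""
    tails_experiments = 0
    tails_wakeups = 0
    total_wakeups = 0
    for _ in range(trials):
        tails = random.random() < 0.5
        wakeups = 2 if tails else 1  # tails: woken Monday and Tuesday
        total_wakeups += wakeups
        tails_experiments += tails
        if tails:
            tails_wakeups += wakeups
    print(f"tails fraction per experiment (branches):  {tails_experiments / trials:.3f}")    # ~1/2
    print(f"tails fraction per awakening (wake-ups):   {tails_wakeups / total_wakeups:.3f}")  # ~2/3

fractions()
```

Both numbers are correct answers to different questions, which is the point: the bet's payoff structure determines which one pays rent.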
The coverage on http://en.wikipedia.org/wiki/Sleeping_Beauty_problem seems much less confused than this post.
Beauty just knows that she'll win the bet twice if tails landed. We double count for tails...
That doesn't mean your credence for heads is 1 -- it just means I added a greater penalty to the other option.
You don't need a monetary reward for this reasoning to work. It's a funny ambiguity, I think, in what 'credence' means. Intuitively, a well-calibrated person A should assign a probability of P% to X iff X happens on P% of the occasions where A assigned a P% probability to X.
If we accept this, then clearly 1/3 is correct. If we run this experiment mul...
A reasonable idea for this and other problems that don't seem to suffer from ugly asymptotics would simply be to mechanically test them.
That is to say, it may be more efficient, requiring less brain power, to believe the results of repeated simulations. After going through the Monty Hall tree and statistics with people who can't really understand either, they end up believing the results of a simulation whose code is straightforward to read. I advocate this method: empirical verification over intuition or mathematics that are fallible (because you yourself are fallible in your understanding, not because they contain a contradiction).
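In that spirit, here is a minimal Monty Hall simulation (my own sketch; the comment above mentions such simulations but doesn't include code) whose logic can be checked just by reading it:

```python
import random

def monty_hall(trials=100_000):
    """Estimate win rates for the 'stay' and 'switch' strategies."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's initial pick
        # Host opens a door that is neither the pick nor the car;
        # when two doors qualify, which one he opens doesn't affect
        # the stay/switch win rates, so we just take the first.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the remaining unopened door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    print(f"stay wins:   {stay_wins / trials:.3f}")    # ~1/3
    print(f"switch wins: {switch_wins / trials:.3f}")  # ~2/3

monty_hall()
```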
For my own benefit, I'll try to explain my thinking on this problem in my own words, because the discussions here are making my head spin. Then the rest of you can tell me whether I understand. The following is what I reasoned out before looking at neq1's explanations.
Firstly, before the experiment begins, I'd expect a 50% chance of heads and a 50% chance of tails. Simple enough.
If it lands on heads, then I wake up only once, on Monday. If it lands on tails, then I wake up once on Monday, and a second time on Tuesday.
So, upon waking with amnesia, I'd ...
Please insert a section break near the start of this post, so the whole thing doesn't show up on "NEW".
The 1/3 argument says with heads there is 1 interview, with tails there are 2 interviews, and therefore the probability of heads is 1/3. However, the argument would only hold if all 3 interview days were equally likely. That's not the case here. (on a wake up day, heads&Monday is more likely than tails&Monday, for example).
Um... why? There are the same number of heads&Monday as tails&Monday; why would heads&Monday be more likely?
So, I'm still working on this in my plodding, newbie-at-probability-math fashion.
What I took away from my exchanges with AlephNeil is that I get the clearest picture if I think in terms of a joint probability distribution, and attempt to justify mathematically each step of my building the table, as well as the operations of conditioning and marginalizing.
In the original Sleeping Beauty problem, we have three variables: x is how the coin came up {heads, tails}, y is the day of the week {monday, tuesday}, and z is whether I am asked for my credence (i.e. wok...
Variation Alpha:
10 people. If heads, one of the ten is randomly selected to be revived. If tails, all ten are revived. (If you like, suppose that the ten are revived one at a time on consecutive days - but it doesn't make any difference.)
Variation Beta:
Same as Alpha except the 10 people are clones of you, with mental state identical to your own.
Variation Gamma:
Same as Beta except the cloning is done after you fall asleep.
Variation Delta:
Same as Gamma except that the clones are not created all at once. Rather, successive clones are created on subs...
I have a question for those more familiar with the discussions surrounding this problem: is there anything really relevant about the sleeping/waking/amnesia story here? What if instead the experimenter just went out and asked the next random passerby on the street each time?
It seems to me that the problem could be formulated less confusingly that way. Am I missing something?
I agree with the others about worrying about the decision theory before talking about probability theory that includes indexical uncertainty, but separately I think there's an issue with your calculation.
"P(Beauty woken up at least once| heads)=P(Beauty woken up at least once | tails)=1"
Consider the case where a biased quantum coin is flipped and the people in 'heads' branches are awoken in green rooms while the 'tails' branches are awoken in red rooms.
Upon awakening, you should figure that the coin was probably biased to put you there. However...
Just as a better intuition pump, we can imagine the "really extreme" sleeping beauty problem.
Omega rolls a d20
If it comes up a "1", then Beauty is woken 400 times; otherwise she is woken once only. Questions:
but what if:
The OP is correct. Actually, all the same issues arise here as with the Self-Indication Assumption; it is wrong for the same reasons as the 1/3 probability. I predict that a great majority of those who accept SIA will also favor the probability of 1/3.
Sleeping Beauty does not sleep well. She has three dreams before awakening. The Ghost of Mathematicians Past warns her that there are two models of probability, and that adherents to each have little that is good to say about adherents to the other. The Ghost of Mathematicians Present shows her volumes of papers and articles where both 1/2 and 1/3 are "proven" to be the correct answer based on intuitive arguments. The Ghost of Mathematicians Future doesn't speak, but shows her how reliance on intuition alone leads to misery. Only strict adherence...
Proof that neq1 is wrong:
Let H be the event that heads was flipped in this experiment instance. We're going to let Beauty experience a waking now. Let M be the event that the waking is on Monday. Let B be the information that Beauty (knowing the experiment design) has upon waking. Let h=P(H|B), and let m=P(M|B).
We wish to discover the true values of h and m. Clearly in the context of someone being asked about the expected outcome of the experiment, P(H)=1/2, but h may (or may not) differ from 1/2.
Fact 1: P(H|M,B)=P(H)=1/2
Fact 2: P(H|~M,B)=0 (by the experiment design: if it is not Monday, Beauty can be awake only if the coin came up tails)
Just an observation: I've mostly ignored this discussion, but it appears to have generated a lot of meaningful debate about the very fundamental epistemic issues at play (though a lot of unproductive debate as well). No consensus on which position is idiotic has apparently arisen.
With that in mind, surely this article should be rated above 1? Are the upvotes being canceled by downvotes, or are people just not voting it either way? Why isn't this rated higher?
After tinkering with a solution, and debating with myself how or whether to try it again here, I decided to post a definitive counter-argument to neq1's article as a comment. It starts with the correct probability tree, which has (at least) five outcomes, not three. But I'll use the unknown Q for one probability in it:
Heads (1/2)
    Monday (Q)
        Waken (1):  Pr(observe Heads and Monday) = Q/2
    Tuesday (1-Q)
        Sleep (1):  Pr(sleep thru Heads and Tuesday) = (1-Q)/2
        Waken (0); ...
The whole anthropics debate is over things that you have taken as assumptions, e.g. whether waking up is identical evidence to merely knowing that you wake at least once, and whether the three days are equally likely.
Your update doesn't solve the problem. It's a semantic issue about what credence we are being asked. If we are being asked about the probability of our coin flip associated with this iteration of the experiment, then the answer is 1/2. If we are being asked about the probability of the coin flip associated with this particular awakening, then it must be 1/3.
You say that you must use cell counts of 500,250,250, but the fact is that if you repeat the experiment 1000 times, sleeping beauty will be awoken 1500 times, not 1000. So what are you doing with th...
It doesn't make sense to assert that probability of Tuesday is 1/4 (in the sense that it'd take a really bad model to give this answer). Monday and Tuesday of the "tails" case shouldn't be distinct elements of the sample space. What happens when you've observed that "it's not Tuesday", and the next day it's Tuesday? Have you encountered an event of zero probability? This is exactly the same reason why the solution of 1/3 can't be backed up by a reasonable model.
In the classical possible worlds model, you've got two worlds for each outc...
By the way, you may have noticed that the wiki has an article on the Sleeping Beauty problem. Also, it's been referenced before in the top-level post Sleeping beauty gets counterfactually mugged, and it was mentioned in the context of a general solution in The I-Less Eye. And the comments on How many LHC failures is too many are relevant to the problem too.
Did you even search to see if someone had done a post on this topic before?
There is a difference between P("Heads came up") and P("Heads came up" given that "I was just woken up"). Since you will be woken up (memory-less) multiple times if tails came up, the fact that you are just getting woken up gives you information and increases the probability that tails came up.
Let's consider P(H | JustWoken) = P(H and Monday | JustWoken) + P(H and Tuesday | JustWoken) Because I have no information about the scientist's behavior (when he chooses to ask the question), I have to assign equal probabilities (one th...
Robert Wiblin - Thoughts on the sleeping beauty problem
http://robertwiblin.wordpress.com/2010/03/26/news-flash-multiverse-theory-proven-right/
this is a probability tree corresponding to an arbitrary wake up day
Huh? If tails, then Beauty is (always) woken on Monday. Why do you have probability=1/2 there?
(likewise for Tuesday)
If we want to replicate the situation 1000 times, we shouldn't end up with 1500 observations. The correct way to replicate the awakening decision is to use the probability tree I included above. You'd end up with expected cell counts of 500, 250, 250, instead of 500, 500, 500.
Beauty ends up with 1500 observations on average (maybe as few as 1000 or as many as 2000). Imagine a sequence of Beauty-observations in (H|TT)^1000, where by r^1000 I mean 1000 repetitions of r. This string is from 1000 to 2000 letters long.
If you consider the scenario from a non...
PhilGoetz writes:
I had expected that people would read posts and comments by other people, and take special note of comments by people who had a prior history of being right, and thereby improve their own accuracy.
I would like to do this. However, it's time consuming to sort through people's posts and see what they think. (You have to read carefully, because they may be critiquing a particular argument rather than the value 1/2 or 1/3 presented in the parent.) Would people mind stating their position on the Sleeping Beauty problem with a single sentence explaining the core detail of the argument that persuades them?
I don't follow your latest argument against thirders. You claim that the denominator
#(heads & monday) + #(tails & monday) + #(tails & tuesday)
counts events that are not mutually exclusive. I don't see this. They look mutually exclusive to me: heads is exclusive of tails, and monday is exclusive of tuesday. Could you elaborate this argument? Where does exclusivity fail? Are you saying tails&monday is not distinct from tails&tuesday, or that all three overlap, or something else?
You also assert that the denominator is not determined by...
Disclosure process 1: regardless of the result of the coin toss she will be informed it's Monday on Monday with probability 1
Under disclosure process 1, her credence of heads on Monday is still 1/2.
SB would start out with P(tails) = 1,000,001/1,000,002 and on being informed that it is monday would update:
P(tails | told monday) / P(heads | told monday)
= P(tails) / P(heads) * P(told monday | tails) / P(told monday | heads)
= (1,000,001/1,000,002)/(1/1,000,002) * (1 / 1,000,001) /(1)
= 1
The initial strong belief in tails is cancelled by the strong evi...
The following program works well:

    R = Random(0, 1)
    If R = 0 Then
        Say "P(R=0)=1/2"
    Else
        Say "P(R=0)=1/2"
        Say "P(R=0)=1/2"
    End If

The next one doesn't:

    R = Random(0, 1)
    If R = 0 Then
        Say "P(R=0)=1/3"
    Else
        Say "P(R=0)=1/3"
        Say "P(R=0)=1/3"
    End If
Run them many times and you will clearly see that the first program is right, since there will be about the same number of cases where R is 0 as cases where R is 1.
Just what the first program keeps saying.
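For anyone who wants to actually run this, here is a Python rendering (my translation of the pseudocode above; the tallying of both counting conventions is my addition). It shows where each answer comes from:

```python
import random

def run(claim, trials=10_000):
    """Speak `claim` once when R=0, twice when R=1; tally both counting rules."""
    sayings = 0       # total number of times the program "speaks"
    zero_sayings = 0  # sayings made on runs where R was 0
    zero_runs = 0     # runs where R was 0
    for _ in range(trials):
        r = random.randint(0, 1)
        if r == 0:
            zero_runs += 1
            sayings += 1       # spoken once when R = 0
            zero_sayings += 1
        else:
            sayings += 2       # spoken twice when R = 1
    print(f"claim: {claim}")
    print(f"fraction of runs with R=0:    {zero_runs / trials:.3f}")     # ~1/2
    print(f"fraction of sayings with R=0: {zero_sayings / sayings:.3f}") # ~1/3

run("P(R=0)=1/2")
```

Per run, R=0 happens half the time; per saying, it is behind only a third of the utterances. The two programs are each "right" about one of those ratios.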
I'm not convinced that 1/2 is the right answer. I actually started out thinking it was obviously 1/2, and then switched to 1/3 after thinking about it for a while (I had thought of Bostrom's variant (without the disclosure bit) before I got to that part).
Let's say we're doing the Extreme version, no disclosure. You're Sleeping Beauty, you just woke up, that's all the new information you have. You know that there are 1,000,001 different ways this could have happened. It seems clear that you should assign tails a probability of 1,000,000/1,000,001.
Now I'll go think about this some more and probably change my mind a few more times.
I agree with the author of this article. After having done a lot of research on the Sleeping Beauty Problem as it was the topic of my bachelor's thesis (philosophy), I came to the conclusion that anthropic reasoning is wrong in the Sleeping Beauty Problem. I will explain my argument (shortly) below:
The principle that Elga uses in his first paper to validate his argument for 1/3 is an anthropic principle he calls the Principle of Indifference:
"Equal probabilities should be assigned to any collection of indistinguishable, mutually exclusive and exhausti...
If Sleeping Beauty doesn't know what day it is, what could possibly motivate her to say that the probability of heads is something other than 50%? I mean, she knows nothing about the coin except that it's round and shiny, and the metal costs more than the coin does.
Unless I misunderstood, this problem is smoke and mirrors.
I updated the post one more time. I think this time I more effectively explain where the thirder logic fails. Correct me if I'm wrong...
I updated the post. Thanks to the many interesting comments, I think I am now better able to describe why the 1/3 solution is wrong.
And to be clear, the main point of the post isn't to show that 1/2 is right, but to make the observation about how easy it is to be confident in the wrong answer when it comes to probability problems.
When it comes to probability, you should trust probability laws over your intuition. Many people got the Monty Hall problem wrong because their intuition was bad. You can get the solution to that problem using probability laws that you learned in Stats 101 -- it's not a hard problem. Similarly, there has been a lot of debate about the Sleeping Beauty problem. Again, though, that's because people are starting with their intuition instead of letting probability laws lead them to understanding.
The Sleeping Beauty Problem
On Sunday she is given a drug that sends her to sleep. A fair coin is then tossed just once in the course of the experiment to determine which experimental procedure is undertaken. If the coin comes up heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. The sleeping drug induces a mild amnesia, so that she cannot remember any previous awakenings during the course of the experiment (if any). During the experiment, she has no access to anything that would give a clue as to the day of the week. However, she knows all the details of the experiment.
Each interview consists of one question, "What is your credence now for the proposition that our coin landed heads?"
Two popular solutions have been proposed: 1/3 and 1/2
The 1/3 solution
From wikipedia:
Suppose this experiment were repeated 1,000 times. We would expect to get 500 heads and 500 tails. So Beauty would be awoken 500 times after heads on Monday, 500 times after tails on Monday, and 500 times after tails on Tuesday. In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1/3.
Yes, it's true that only in a third of cases would heads precede her awakening.
Radford Neal (a statistician!) argues that 1/3 is the correct solution.
This [the 1/3] view can be reinforced by supposing that on each awakening Beauty is offered a bet in which she wins 2 dollars if the coin lands Tails and loses 3 dollars if it lands Heads. (We suppose that Beauty knows such a bet will always be offered.) Beauty would not accept this bet if she assigns probability 1/2 to Heads. If she assigns a probability of 1/3 to Heads, however, her expected gain is 2 × (2/3) − 3 × (1/3) = 1/3, so she will accept, and if the experiment is repeated many times, she will come out ahead.
Neal is correct (about the gambling problem).
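One can check Neal's arithmetic with a short simulation (my sketch; Neal's argument is verbal and includes no code). Beauty accepts the bet at every awakening:

```python
import random

def neal_bet(trials=100_000):
    """Win $2 per tails awakening, lose $3 per heads awakening."""
    total = 0.0
    awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if heads:        # one awakening, one bet: lose $3
            total -= 3
            awakenings += 1
        else:            # two awakenings, two bets: win $2 each
            total += 4
            awakenings += 2
    print(f"mean gain per experiment: {total / trials:.3f}")     # ~ +0.50
    print(f"mean gain per awakening:  {total / awakenings:.3f}") # ~ +0.33

neal_bet()
```

The gain per awakening comes out to about 1/3 of a dollar, matching Neal's 2 × (2/3) − 3 × (1/3) = 1/3.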
These two arguments for the 1/3 solution appeal to intuition and make no obvious mathematical errors. So why are they wrong?
Let's first start with probability laws and show why the 1/2 solution is correct. Just like with the Monty Hall problem, once you understand the solution, the wrong answer will no longer appeal to your intuition.
The 1/2 solution
P(Beauty woken up at least once| heads)=P(Beauty woken up at least once | tails)=1. Because of the amnesia, all Beauty knows when she is woken up is that she has woken up at least once. That event had the same probability of occurring under either coin outcome. Thus, P(heads | Beauty woken up at least once)=1/2. You can use Bayes' rule to see this if it's unclear.
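Spelling out the Bayes' rule step, with W denoting "Beauty woken up at least once":

\[
P(\text{heads} \mid W)
= \frac{P(W \mid \text{heads})\,P(\text{heads})}{P(W \mid \text{heads})\,P(\text{heads}) + P(W \mid \text{tails})\,P(\text{tails})}
= \frac{1 \cdot \frac{1}{2}}{1 \cdot \frac{1}{2} + 1 \cdot \frac{1}{2}} = \frac{1}{2}.
\]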
Here's another way to look at it:
If it landed heads then Beauty is woken up on Monday with probability 1.
If it landed tails then Beauty is woken up on Monday and Tuesday. From her perspective, these days are indistinguishable. She doesn't know if she was woken up the day before, and she doesn't know if she'll be woken up the next day. Thus, we can view Monday and Tuesday as exchangeable here.
A probability tree can help with the intuition (this is a probability tree corresponding to an arbitrary wake up day):
If Beauty was told the coin came up heads, then she'd know it was Monday. If she was told the coin came up tails, then she'd think there is a 50% chance it's Monday and a 50% chance it's Tuesday. Of course, when Beauty is woken up she is not told the result of the flip, but she can calculate the probability of each.
When she is woken up, she's somewhere on the second set of branches. We have the following joint probabilities: P(heads, Monday)=1/2; P(heads, not Monday)=0; P(tails, Monday)=1/4; P(tails, Tuesday)=1/4; P(tails, not Monday or Tuesday)=0. Thus, P(heads)=1/2.
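For concreteness, here is that joint table in code (my sketch, mirroring the probabilities just listed; the variable names are mine):

```python
from fractions import Fraction as F

# Joint probabilities from the halfer tree above.
joint = {
    ("heads", "Monday"):  F(1, 2),  # heads: it is certainly Monday
    ("tails", "Monday"):  F(1, 4),  # tails: Monday/Tuesday exchangeable
    ("tails", "Tuesday"): F(1, 4),
}

p_heads = sum(p for (coin, day), p in joint.items() if coin == "heads")
p_tails = 1 - p_heads
p_monday_given_tails = joint[("tails", "Monday")] / p_tails

print(p_heads)               # 1/2
print(p_monday_given_tails)  # 1/2
```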
Where the 1/3 arguments fail
The 1/3 argument says with heads there is 1 interview, with tails there are 2 interviews, and therefore the probability of heads is 1/3. However, the argument would only hold if all 3 interview days were equally likely. That's not the case here. (on a wake up day, heads&Monday is more likely than tails&Monday, for example).
Neal's argument fails because he changed the problem. "on each awakening Beauty is offered a bet in which she wins 2 dollars if the coin lands Tails and loses 3 dollars if it lands Heads." In this scenario, she would make the bet twice if tails came up and once if heads came up. That has nothing to do with the probability of the event at a particular awakening. The fact that she should take the bet doesn't imply that heads is less likely. Beauty just knows that she'll win the bet twice if tails landed. We double count for tails.
Imagine I said "if you guess heads and you're wrong nothing will happen, but if you guess tails and you're wrong I'll punch you in the stomach." In that case, you will probably guess heads. That doesn't mean your credence for heads is 1 -- it just means I added a greater penalty to the other option.
Consider changing the problem to something more extreme. Here, we start with heads having probability 0.99 and tails having probability 0.01. If heads comes up we wake Beauty up once. If tails, we wake her up 100 times. Thirder logic would go like this: if we repeated the experiment 1000 times, we'd expect her woken up 990 times after heads on Monday, 10 times after tails on Monday (day 1), 10 times after tails on Tuesday (day 2), ..., 10 times after tails on day 100. In other words, in only ~50% of the cases would heads precede her awakening. So the right answer for her to give is 1/2.
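Making the thirder count explicit:

\[
\frac{990}{990 + 10 \times 100} = \frac{990}{1990} \approx 0.497.
\]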
Of course, this would be absurd reasoning. Beauty knows heads has a 99% chance initially. But when she wakes up (which she was guaranteed to do regardless of whether heads or tails came up), she suddenly thinks they're equally likely? What if we made it even more extreme and woke her up even more times on tails?
Implausible consequence of 1/2 solution?
Nick Bostrom presents the Extreme Sleeping Beauty problem:
This is like the original problem, except that here, if the coin falls tails, Beauty will be awakened on a million subsequent days. As before, she will be given an amnesia drug each time she is put to sleep that makes her forget any previous awakenings. When she awakes on Monday, what should be her credence in HEADS?
He argues:
The adherent of the 1/2 view will maintain that Beauty, upon awakening, should retain her credence of 1/2 in HEADS, but also that, upon being informed that it is Monday, she should become extremely confident in HEADS:
P+(HEADS) = 1,000,001/1,000,002
This consequence is itself quite implausible. It is, after all, rather gutsy to have credence 0.999999% in the proposition that an unobserved fair coin will fall heads.
It's correct that, upon awakening on Monday (and not knowing it's Monday), she should retain her credence of 1/2 in heads.
However, if she is informed it's Monday, it's unclear what she should conclude. Why was she informed it was Monday? Consider two alternatives.
Disclosure process 1: regardless of the result of the coin toss she will be informed it's Monday on Monday with probability 1
Under disclosure process 1, her credence of heads on Monday is still 1/2.
Disclosure process 2: if heads she'll be woken up and informed that it's Monday. If tails, she'll be woken up on Monday and one million subsequent days, and only be told the specific day on one randomly selected day.
Under disclosure process 2, if she's informed it's Monday, her credence of heads is 1,000,001/1,000,002. However, this is not implausible at all. It's correct. This statement is misleading: "It is, after all, rather gutsy to have credence 0.999999% in the proposition that an unobserved fair coin will fall heads." Beauty isn't predicting what will happen on the flip of a coin, she's predicting what did happen after receiving strong evidence that it's heads.
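To spell out the update under disclosure process 2 (a sketch of the arithmetic; the post states only the result): starting from credence 1/2 upon waking, with P(told Monday | heads) = 1 and P(told Monday | tails) = 1/1,000,001,

\[
P(\text{heads} \mid \text{told Monday})
= \frac{1 \cdot \frac{1}{2}}{1 \cdot \frac{1}{2} + \frac{1}{1{,}000{,}001} \cdot \frac{1}{2}}
= \frac{1{,}000{,}001}{1{,}000{,}002}.
\]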
ETA (5/9/2010 5:38AM)
If we want to replicate the situation 1000 times, we shouldn't end up with 1500 observations. The correct way to replicate the awakening decision is to use the probability tree I included above. You'd end up with expected cell counts of 500, 250, 250, instead of 500, 500, 500.
Suppose at each awakening, we offer Beauty the following wager: she'd lose $1.50 if heads but win $1 if tails. She is asked for a decision on that wager at every awakening, but we only accept her last decision. Thus, if tails we'll accept her Tuesday decision (but won't tell her it's Tuesday). If her credence of heads is 1/3 at each awakening, then she should take the bet. If her credence of heads is 1/2 at each awakening, she shouldn't take the bet. If we repeat the experiment many times, she'd be expected to lose money if she accepts the bet every time.
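To make the arithmetic explicit (my addition, not part of the original wager description): only one decision per experiment is settled, so accepting every time yields an expected value of

\[
\frac{1}{2} \times (-\$1.50) + \frac{1}{2} \times (+\$1.00) = -\$0.25
\]

per experiment. A credence of 1/2 correctly tells her to decline.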
The problem with the logic that leads to the 1/3 solution is that it counts tails twice, but the question was about her credence at a single awakening (interview).
ETA (5/10/2010 10:18PM ET)
Recall the thirder argument:

Suppose this experiment were repeated 1,000 times. We would expect to get 500 heads and 500 tails. So Beauty would be awoken 500 times after heads on Monday, 500 times after tails on Monday, and 500 times after tails on Tuesday. In other words, only in a third of the cases would heads precede her awakening. So the right answer for her to give is 1/3.
Another way to look at it: the denominator is not a sum of mutually exclusive events. Typically we use counts to estimate probabilities as follows: the numerator is the number of times the event of interest occurred, and the denominator is the number of times that event could have occurred.
For example, suppose Y can take values 1, 2 or 3 and follows a multinomial distribution with probabilities p1, p2 and p3=1-p1-p2, respectively. If we generate n values of Y, we could estimate p1 by taking the ratio of #{Y=1}/(#{Y=1}+#{Y=2}+#{Y=3}). As n goes to infinity, the ratio will converge to p1. Notice the events in the denominator are mutually exclusive and exhaustive. The denominator is determined by n.
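A small simulation of that convergence claim (my sketch, with arbitrary p1 = 0.2 and p2 = 0.3):

```python
import random

p1, p2 = 0.2, 0.3  # p3 = 0.5
n = 100_000
counts = {1: 0, 2: 0, 3: 0}
for _ in range(n):
    u = random.random()
    y = 1 if u < p1 else (2 if u < p1 + p2 else 3)
    counts[y] += 1

# The denominator sums mutually exclusive, exhaustive events,
# so it equals n by construction.
estimate = counts[1] / (counts[1] + counts[2] + counts[3])
print(estimate)  # converges to p1 = 0.2 as n grows
```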
The thirder solution to the Sleeping Beauty problem has as its denominator sums of events that are not mutually exclusive. The denominator is not determined by n. For example, if we repeat it 1000 times, and we get 400 heads, our denominator would be 400+600+600=1600 (even though it was not possible to get 1600 heads!). If we instead got 550 heads, our denominator would be 550+450+450=1450. Our denominator is outcome dependent, where here the outcome is the occurrence of heads. What does this ratio converge to as n goes to infinity? I surely don't know. But I do know it's not the posterior probability of heads.