I don't see how that analysis is useful. Beauty is awake at the start and the end of the experiment, and she updates accordingly, depending on whether she believes she is "inside" the experiment or not. So, having D mean: "Sleeping Beauty is awake" does not seem very useful. Beauty's "data" should also include her knowledge of the experimental setup, her knowledge of the identity of the subject, and whether she is facing an interviewer with amnesia. These things vary over time - and so they can't usefully be treated as a single probability.
You should be careful when plugging values into Bayes' theorem in an attempt to solve this problem: the setup involves an amnesia-inducing drug. When Beauty updates, you had better make sure to un-update her again afterwards in the correct manner.
As I said, be careful about using Bayes' theorem in the case where the agent's mind is being meddled with by amnesia-inducing drugs. If Beauty had not had her mind addled by drugs, your formula would work - and p(H|D) would be equal to 1/2 on her first awakening. As it is, Beauty has lost some information that pertains to the answer she gives to the problem - namely the knowledge of whether she has been woken up before already - or not. Her uncertainty about this matter is the cause of the problem with plugging numbers into Bayes' theorem.
The theorem models her update on new information - but does not model the drug-induced deletion from her mind of information that pertains to the answer she gives to the problem.
If she knew it was Monday, p(H|D) would be 1/2. If she knew it was Tuesday, p(H|D) would be 0. Since she is uncertain which day it is, the value lies between these extremes.
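This per-awakening reasoning can be checked by brute force. The following sketch (my own illustration, not part of the original exchange) counts, across many simulated runs of the experiment, what fraction of Beauty's awakenings occur in a heads-world; it assumes the standard setup of one awakening on heads and two on tails:

```python
import random

def simulate_awakenings(trials=100_000, seed=0):
    """Monte Carlo sketch of the Sleeping Beauty setup.

    For each trial, flip a fair coin: heads -> Beauty is woken once
    (Monday); tails -> she is woken twice (Monday and Tuesday).
    Return the fraction of all awakenings in which the coin was heads.
    """
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        heads = rng.random() < 0.5
        wakings = 1 if heads else 2
        total_awakenings += wakings
        if heads:
            heads_awakenings += wakings
    return heads_awakenings / total_awakenings

print(simulate_awakenings())  # close to 1/3
```

If "Beauty's credence on awakening" is identified with this per-awakening frequency, the result sits at the thirder value of 1/3, consistent with averaging 1/2 (Monday) and 0 (Tuesday) weighted by how often each day occurs among awakenings.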
Is over-reliance on Bayes' theorem - without considering its failure to model the problem's drug-induced amnesia - a cause of people thinking the answer to the problem is 1/2, I wonder?
If I understand rightly, you're happy with my values for p(H), p(D) and p(D|H), but you're not happy with the result. So you're claiming that a Bayesian reasoner has to abandon Bayes' Law in order to get the right answer to this problem. (Which is what I pointed out above.)
Is your argument the same as the one made by Bradley Monton? In his paper Sleeping Beauty and the forgetful Bayesian, Monton argues convincingly that a Bayesian reasoner needs to update upon forgetting, but he doesn't give a rule explaining how to do it.
Naively, I can imagine doing this by putting the reasoner back in the situation before they learned the information they forgot, and then updating forwards again, but omitting the forgotten information. (Monton gives an example on pp. 51–52 where this works.) But I can't see how to make this work in the Sleeping Beauty case: how do I put Sleeping Beauty back in the state before she learned what day it is?
So I think the onus remains with you to explain the rules for Bayesian forgetting, and how they lead to the answer ⅓ in this case. (If you can do this convincingly, then we can explain the hardness of the Sleeping Beauty problem by pointing out how little-known the rules for Bayesian forgetting are.)