What is going to be done with these numbers? If Sleeping Beauty is to gamble her money, she should accept the same betting odds as a thirder. If she has to decide which coin-flip result kills her, she should be indifferent, like a halfer.
I mean, I think the "gamble her money" interpretation is just a different question. It doesn't feel to me like a different notion of what probability means, but just like betting on a fair coin with asymmetric payoffs.
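The "fair coin with asymmetric payoffs" point can be made concrete with a small expected-value sketch (my framing, not from the thread): if Beauty bets on each awakening, a Tails world produces two bets, so per-awakening odds must be 2:1 to break even even though the coin itself is fair.

```python
def ev_per_experiment(payout_on_heads: float, stake: float) -> float:
    """Expected value, per run of the experiment, of betting on Heads at
    every awakening: Heads -> 1 awakening (one winning bet),
    Tails -> 2 awakenings (two losing bets). The coin is fair."""
    p_heads = 0.5
    return p_heads * payout_on_heads - (1 - p_heads) * 2 * stake

# Thirder odds (win 2 per 1 staked) break even on a fair coin:
print(ev_per_experiment(payout_on_heads=2, stake=1))  # 0.0
# Even-money (halfer) odds lose, because Tails bets happen twice:
print(ev_per_experiment(payout_on_heads=1, stake=1))  # -0.5
```

This is why betting behavior alone doesn't settle the question: the asymmetry can be attributed either to credence (thirder) or to payoff structure (halfer).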
The second question feels closer to an accurate interpretation of what probability actually means.
It ultimately depends on how you define probabilities, and it is possible to define them such that the answer is 1/2.
I personally think that the only "good" definition (I'll specify this more at the end) is that a probability of 1/4 should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism": we use the idea of repeated experiments to define probabilities, but generalize the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence). So when you flip a coin, the experiment is not the mathematical coin with two equally likely outcomes, but the situation where you as an agent are flipping a physical coin, which may include a 0.01% probability of landing on its side, or some probability of breaking in two halves mid-air, or whatever. But the probability of it coming up heads should still be about 1/2, because in about 1/2 of cases where you as an agent are about to flip a physical coin, you subsequently observe it coming up heads.
There are difficulties here with defining the reference class, but I think they can be adequately addressed, and in any case they don't matter for the Sleeping Beauty experiment, because there the reference class is actually really straightforward. Among the times that you as an agent are participating in the experiment and are woken up and interviewed (and are called Sleeping Beauty, if you want to include this in the reference class), one third will have the coin come up heads, so the probability is 1/3. This is true regardless of whether the experiment is run repeatedly throughout history, or repeatedly because of Many Worlds, or an infinite universe, etc. (And I think the very few cases in which there is genuinely not a repeated experiment are in fact qualitatively different, since now we're talking about logical uncertainty rather than probability, and this distinction is how you can answer 1/3 in Sleeping Beauty without being forced into the analogous answer on the Presumptuous Philosopher problem.)
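The reference-class claim above is easy to check with a Monte Carlo sketch (my illustration, assuming the standard protocol: Heads → one awakening, Tails → two):

```python
import random

def heads_fraction_of_awakenings(trials: int = 100_000, seed: int = 0) -> float:
    """Repeat the Sleeping Beauty experiment and return the fraction of
    awakenings that occur in a Heads world."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(trials):
        if rng.random() < 0.5:   # Heads: woken once (Monday only)
            heads_awakenings += 1
            total_awakenings += 1
        else:                    # Tails: woken twice (Monday and Tuesday)
            total_awakenings += 2
    return heads_awakenings / total_awakenings

print(round(heads_fraction_of_awakenings(), 2))  # 0.33
```

The coin is fair across experiments, but only a third of interview-moments sit downstream of Heads, which is exactly the generalized-frequentist reference class described above.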
So, re: this being the only "good" definition: one thing is that it fits betting odds, but I also suspect that most smart people would eventually converge on an interpretation with these properties if they thought long enough about the nature of probability and the implications of having a different definition, though obviously I can't prove this. I'm not aware of any case where I'd want to define probability differently, anyway.
The question "What is the probability of Heads?" is about the coin, not about your location in time or possible worlds.
This is, I think, the key thing that those smart people disagree with you about.
Suppose Alice and Bob are sitting in different rooms. Alice flips a coin and looks at it - it's Heads. What is the probability that the coin is Tails? Obviously, it's 0%, right? That's just a fact about the coin. So I go to Bob in the other room and ask Bob what's the probability the coin is Tails, and Bob tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the coin. Since it was already flipped the probability was already either 0% or 100%, and maybe if you didn't know which it was you should just say you can't assign a probability or something."
Now, suppose there are two universes that differ only by the polarization of a photon coming from a distant star, due to hit Earth in a few hours. And I go into the universe where that polarization is left-handed (rather than right-handed), and in that universe the probability that the photon is right-handed is 0% - it's just a fact about the photon. So I go to the copy of Carol that lives in this universe and ask Carol what's the probability the photon has right-handed polarization, and Carol tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the photon. Since it's already on its way the probability was already either 0% or 100%, and maybe if you didn't know which it was you should just say you can't assign a probability or something."
Now, suppose there are two universes that differ outside of the room that Dave is currently in, but are the same within Dave's room. Say, in one universe all the stuff outside the room is arranged as it is today in our universe, while in the other universe all the stuff outside the room is arranged as it was ten years ago. And I go into the universe where all the stuff outside the room is arranged as it was ten years ago, which I will shorthand as it being 2014 (just a fact about calendars, memories, the positions of galaxies, etc.), and ask Dave what's the probability that the year outside is 2024, and Dave tells me it's 50%...
I mean I am not convinced by the claim that Bob is wrong.
Bob's prior probability is 50%. Bob sees no new evidence to update this prior, so the probability remains at 50%.
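Bob's non-update is just Bayes' rule with a likelihood ratio of 1 (a minimal sketch of the point, assuming "being asked about the coin" is the only observation, and that it happens regardless of the flip):

```python
def posterior(prior: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Bayes' rule: P(H | O) given P(H), P(O | H), and P(O | not-H)."""
    evidence = likelihood_h * prior + likelihood_not_h * (1 - prior)
    return likelihood_h * prior / evidence

# Bob is asked regardless of how the coin landed, so the observation
# "I am being asked" has likelihood 1 under both hypotheses:
print(posterior(prior=0.5, likelihood_h=1.0, likelihood_not_h=1.0))  # 0.5
```

Only an observation with P(O|H) ≠ P(O|¬H) would move Bob off 50%, which is exactly the condition in point 2 of the OP.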
I don't favour an objective notion of probabilities. From my OP:
...2. Bayesian Reasoning
- Probability is a property of the map (agent's beliefs), not the territory (environment).
- For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
- The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.
- The o
Preamble
Motivation
I was recently introduced to the canonical Sleeping Beauty problem and was initially a halfer, but confused. Or, more precisely, I thought the halfer position was correct, but smart people seemed to be thirders, and I was worried I was misunderstanding something about the problem or had confused myself somehow.
I debated the problem extensively on the LW Discord server and with some LLMs and strongly updated towards "thirders are just engaging in gross epistemic malpractice".
A message I sent in the LW server:
I still have some meta level uncertainty re: the nonsense allegations.
I want to be convinced that the thirder position is not nonsense and there is a legitimate disagreement/debate to be had here.
I have read some of the LW posts on the canonical problem here. I won't be linking them due to laziness.
I have not yet read Joe Carlsmith's blog posts or Nick Bostrom's book as of posting this question. I'll probably be listening to them after posting it.
I asked Sonnet 3.5 to distill my position/rejections from our debate and below is its summary[1]
Comprehensive Position on the Sleeping Beauty Problem
1. Core Position
2. Bayesian Reasoning
3. Frequentist Critique
Key Issues with Frequentist Approach:
Misunderstanding Indistinguishable Events:
Conflating Processes with Outcomes:
Misapplying Frequentist Logic:
Ignoring Problem Structure:
Counterargument to Thirder Position:
4. Self-Locating Beliefs
5. Anthropic Reasoning Rejection
Expanded Argument:
Specific Anthropic Argument and Counterargument:
6. Distinguishability vs. Probability
7. Betting Strategies vs. Probabilities
Expanded Argument:
Specific Example:
Analogy to Clarify:
Counterargument to Thirder Position:
8. Counterfactuals and Different Problems
9. Information Relevance
10. Epistemological Stance
11. Common Thirder Arguments Addressed
12. Meta-level Considerations
13. Openness to Counter-Arguments
This position maintains that the Sleeping Beauty problem, when correctly analyzed using Bayesian principles, does not provide any new information that would justify updating the prior 50/50 probability of the coin flip. It challenges readers to present counter-arguments that do not rely on commonly rejected reasoning patterns and that strictly adhere to Bayesian updating based on genuinely new, discriminatory evidence.
Closing Remarks
I am probably unjustified in my arrogance.
Some people who I strongly respect (e.g. Nick Bostrom) are apparently thirders.
This is IMO very strong evidence that I am actually just massively misunderstanding something or am somehow mistaken here (especially as I have not yet engaged with Nick Bostrom's arguments as of the time of writing this post).
On priors I don't really expect to occupy an (on reflection endorsed) epistemic state where I think Nick Bostrom is making a basic epistemology mistake.
So I expect this is a position I can easily be convinced out of, or that I myself am misunderstanding something fundamental about the problem.
I made some very light edits to the probability/odds treatment in point 7 to resolve factual inaccuracies. ↩︎