Preamble

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Sleeping Beauty will be awakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:

  • If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  • If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.

In either case, she will be awakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?"


Motivation

I was recently introduced to the canonical Sleeping Beauty problem and was initially a halfer, but confused. Or rather, I thought the halfer position was correct, but smart people seemed to be thirders, and I worried I was misunderstanding something about the problem or had otherwise confused myself.

I debated the problem extensively on the LW Discord server and with some LLMs and strongly updated towards "thirders are just engaging in gross epistemic malpractice".

A message I sent in the LW server:

Alternatively I started out confused.

Debating this problem here and with LLMs convinced me that I'm not confused and the thirders are actually just doing epistemological nonsense.

It feels arrogant, but it's not a poor reflection of my epistemic state?

I still have some meta level uncertainty re: the nonsense allegations.

I want to be convinced that the thirder position is not nonsense and there is a legitimate disagreement/debate to be had here.

I have read some of the LW posts on the canonical problem here. I won't be linking them due to laziness.

I have not yet read Joe Carlsmith's blog posts or Nick Bostrom's book as of posting this question. I'll probably be listening to them after posting the question.

I asked Sonnet 3.5 to distill my position/rejections from our debate, and below is its summary.[1]


Comprehensive Position on the Sleeping Beauty Problem

1. Core Position

  • The correct answer to the Sleeping Beauty problem is the Halfer position (1/2 probability for Heads).
  • The Thirder position (1/3 probability for Heads) is based on misapplications of both Bayesian and frequentist reasoning.

2. Bayesian Reasoning

  • Probability is a property of the map (agent's beliefs), not the territory (environment).
  • For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
  • The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.
  • The original 50/50 probability should remain unchanged after waking up.
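The no-update claim in these bullets can be written out as a one-line Bayes computation. A minimal sketch of mine, using the halfer likelihoods (on which Beauty is awakened with certainty under either outcome):

```python
# Posterior for Heads given an observation O, by Bayes' rule.
# On the halfer model, P(O|H) = P(O|T) = 1 ("I am awake" is certain
# either way), so the posterior equals the 1/2 prior.

def posterior_heads(prior_h: float, p_o_given_h: float, p_o_given_t: float) -> float:
    p_o = p_o_given_h * prior_h + p_o_given_t * (1 - prior_h)
    return p_o_given_h * prior_h / p_o

print(posterior_heads(0.5, 1.0, 1.0))  # 0.5 -- a certain observation carries no discriminating evidence
```

Thirders, of course, dispute the modeling choice P(O|H) = P(O|T) = 1, not Bayes' rule itself.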

3. Frequentist Critique

  • The Thirder position often relies on a misapplication of frequentist probability.

Key Issues with Frequentist Approach:

  1. Misunderstanding Indistinguishable Events:

    • Thirders wrongly treat multiple indistinguishable wake-ups as distinct evidence.
    • Beauty's subjective experience is identical whether woken once or a million times.
  2. Conflating Processes with Outcomes:

    • Two mutually exclusive processes (Heads: one wake-up, Tails: multiple wake-ups) are incorrectly treated as a single sample space.
    • Multiple Tails wake-ups collapse into one indistinguishable experience.
  3. Misapplying Frequentist Logic:

    • Standard frequentist approach increases sample size with multiple observations.
    • This logic fails here as wake-ups are not independent data points.
  4. Ignoring Problem Structure:

    • Each experiment (coin flip + wake-ups) is one trial.
    • The coin's 50/50 probability remains unchanged regardless of wake-up protocol.

Counterargument to Thirder Position:

  • Thirder Claim: "Beauty would find herself in a Tails wake-up twice as often as a Heads wake-up."
  • Rebuttal: This incorrectly treats each wake-up as a separate trial, rather than considering the entire experiment as one trial.
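The two trial-counting conventions at issue here can be made concrete with a quick Monte Carlo (a sketch of mine; all names are mine). Per experiment, Heads comes up half the time; per awakening, only about a third of awakenings belong to Heads experiments. Both sides agree on these frequencies; the dispute is over which one the credence question asks for.

```python
import random

# Simulate many runs of the experiment: one fair coin flip per run,
# 1 awakening on Heads, 2 awakenings on Tails.
random.seed(0)
n_experiments = 100_000
heads_experiments = 0
total_awakenings = 0
heads_awakenings = 0

for _ in range(n_experiments):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2
    total_awakenings += awakenings
    if heads:
        heads_experiments += 1
        heads_awakenings += awakenings

print(heads_experiments / n_experiments)    # per-experiment Heads frequency, ~0.5
print(heads_awakenings / total_awakenings)  # per-awakening Heads frequency, ~1/3
```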

4. Self-Locating Beliefs

  • Self-locating information (which wake-up you're experiencing) is irrelevant to the coin flip probability.
  • The question "What is the probability of Heads?" is about the coin, not about your location in time or possible worlds.

5. Anthropic Reasoning Rejection

  • Anthropic arguments that treat all possible wake-ups as equally likely samples are rejected.
  • This approach incorrectly combines outcomes from distinct events (coin flip and wake-up protocol).

Expanded Argument:

  • Anthropic reasoning in this context suggests that Beauty should consider herself as randomly selected from all possible wake-up events.
  • This reasoning is flawed because:
    1. It treats the wake-up events as the primary random process, when the actual random process is the coin flip.
    2. It conflates the sampling process (how Beauty is woken up) with the event we're trying to determine the probability of (the coin flip).

Specific Anthropic Argument and Counterargument:

  • Anthropic Argument: "When Beauty wakes up, she is essentially sampling from the space of all possible wake-ups. There are twice as many Tails wake-ups as Heads wake-ups, so the probability of Heads is 1/3."
  • Counterargument:
    1. This incorrectly assumes that each wake-up is an independent event, when they are actually dependent on a single coin flip.
    2. It ignores the fact that the probability we're interested in is that of the coin flip, not the wake-up event.
    3. This reasoning would lead to absurd conclusions if we changed the wake-up protocol (e.g., waking Beauty a million times for Tails would make Heads virtually impossible, which is clearly wrong).
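The "absurd conclusion" in point 3 is easy to make quantitative. A small sketch of mine, computing the per-awakening Heads fraction the anthropic count assigns when the protocol uses N Tails awakenings:

```python
# Per-awakening Heads fraction under the anthropic/thirder count:
# 1 Heads awakening vs. n Tails awakenings gives 1 / (1 + n).

def per_awakening_heads_fraction(n_tails_awakenings: int) -> float:
    return 1 / (1 + n_tails_awakenings)

print(per_awakening_heads_fraction(2))          # canonical problem: 1/3
print(per_awakening_heads_fraction(1_000_000))  # ~1e-6: Heads "virtually impossible"
```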

6. Distinguishability vs. Probability

  • Subjective indistinguishability of events doesn't imply equal probability of the underlying states.
  • However, indistinguishability means the events can't provide evidence for updating probabilities.

7. Betting Strategies vs. Probabilities

  • Optimal betting strategies (e.g., always bet on Tails) don't necessarily reflect true probabilities.
  • Asymmetric payoffs can justify betting on Tails without changing the underlying 50/50 probability.

Expanded Argument:

  • The Sleeping Beauty problem presents a scenario where the optimal betting strategy (always betting on Tails) seems to contradict the claimed 50/50 probability. This apparent contradiction is resolved by recognizing that:
    1. Betting strategies can be influenced by factors other than pure probability, such as payoff structures.
    2. The expected value of a bet is not solely determined by the probability of an event, but also by the payoff for each outcome.
    3. In this case, the Tails outcome provides more opportunities to bet, creating an asymmetry in the payoff structure.

Specific Example:

  • Consider a simplified version of the problem where:
    • If the coin lands Heads, Beauty is woken once and can bet $1.
    • If the coin lands Tails, Beauty is woken twice and can bet $1 each time.
    • A correct bet pays even money: a $1 stake returns $2 on a win (i.e., you double your money) and is lost otherwise.
  • The optimal strategy is to always bet on Tails, because:
    • Betting on Heads: 50% chance of winning $1 once (Heads) and 50% chance of losing $1 twice (Tails) = $0.5 - $1 = -$0.5 expected value
    • Betting on Tails: 50% chance of losing $1 once (Heads) and 50% chance of winning $1 twice (Tails) = $1 - $0.5 = $0.5 expected value
  • However, this doesn't mean the probability of Tails is higher. It's still 50%, but the payoff structure makes betting on Tails more profitable.
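One way to cash out the example numerically (a sketch of mine, assuming a $1 even-money bet is placed at every awakening):

```python
# Expected value of the two pure betting policies in the simplified
# game: $1 even-money bet at each awakening, 1 awakening on Heads,
# 2 awakenings on Tails, each outcome with probability 1/2.

def policy_ev(bet_on_heads: bool) -> float:
    ev = 0.0
    for outcome, n_awakenings in (("heads", 1), ("tails", 2)):
        win = (outcome == "heads") == bet_on_heads
        payoff_per_bet = 1.0 if win else -1.0
        ev += 0.5 * n_awakenings * payoff_per_bet
    return ev

print(policy_ev(bet_on_heads=True))   # always bet Heads: -0.5
print(policy_ev(bet_on_heads=False))  # always bet Tails: +0.5
```

The asymmetry comes entirely from Tails offering two bets to Heads' one, not from the coin being unfair.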

Analogy to Clarify:

  • Imagine a fair coin flip where you're offered the following bet:
    • If you bet on Heads and win, you get $1.
    • If you bet on Tails and win, you get $K (where K is much larger than 1).
  • The optimal strategy is to bet on Tails every time, even though the coin is fair (50/50).
  • If you repeat this experiment many times, always betting on Tails will be a winning strategy in the long run.
  • Despite this, the probability of the coin landing Heads remains 0.5 (50%).
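In expected-winnings terms (a sketch of mine; since the analogy doesn't specify a loss, I assume nothing is lost on a wrong guess):

```python
# Expected winnings per flip of a fair coin under the asymmetric payout:
# a correct Heads bet pays $1, a correct Tails bet pays $K, a wrong bet
# pays nothing.

def expected_winnings(payout: float, p_win: float = 0.5) -> float:
    return p_win * payout

K = 1000  # any K much larger than 1
print(expected_winnings(1.0))       # bet Heads: 0.5
print(expected_winnings(float(K)))  # bet Tails: 500.0
```

Betting Tails dominates for any K > 1, while the coin's Heads probability stays at 0.5 throughout.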

Counterargument to Thirder Position:

  • Thirders might argue: "The optimal betting strategy aligns with the 1/3 probability for Heads."
  • Rebuttal: This confuses expected value with probability. The betting strategy is optimal due to the asymmetric nature of the payoffs (betting twice on Tails vs. once on Heads), not because Tails is more likely. The underlying probability of the coin flip remains 50/50, regardless of the betting structure.

8. Counterfactuals and Different Problems

  • Arguments involving additional information change the problem fundamentally.
  • "X & Y is evidence for H, therefore X is evidence for H" is invalid reasoning.

9. Information Relevance

  • Not all information about the experimental setup is relevant for probability calculations.
  • The wake-up protocol, while part of the setup, doesn't provide discriminatory evidence for Heads vs. Tails.

10. Epistemological Stance

  • Adheres to strict Bayesian principles for updating beliefs.
  • Rejects arguments that conflate distinct problems or misapply probabilistic concepts.

11. Common Thirder Arguments Addressed

  • Frequency of wake-ups: Irrelevant due to subjective indistinguishability.
  • Anthropic reasoning: Incorrectly combines distinct events.
  • Betting strategies: Don't necessarily reflect true probabilities.
  • Self-locating beliefs: Irrelevant to the coin flip probability.

12. Meta-level Considerations

  • Many arguments for the Thirder position stem from subtle misapplications of otherwise valid probabilistic principles.

13. Openness to Counter-Arguments

  • Willing to consider counter-arguments that adhere to rigorous Bayesian principles.
  • Rejects arguments based on frequentist interpretations, anthropic reasoning, or conflation of distinct problems.

This position maintains that the Sleeping Beauty problem, when correctly analyzed using Bayesian principles, does not provide any new information that would justify updating the prior 50/50 probability of the coin flip. It challenges readers to present counter-arguments that do not rely on commonly rejected reasoning patterns and that strictly adhere to Bayesian updating based on genuinely new, discriminatory evidence.


Closing Remarks

I am probably unjustified in my arrogance.

Some people who I strongly respect (e.g. Nick Bostrom) are apparently thirders.

This is IMO very strong evidence that I am actually just massively misunderstanding something or am somehow mistaken here (especially as I have not yet engaged with Nick Bostrom's arguments as of the time of writing this post).

On priors I don't really expect to occupy an (on reflection endorsed) epistemic state where I think Nick Bostrom is making a basic epistemology mistake.

So I expect this is a position I can easily be convinced out of, or that I am myself misunderstanding something fundamental about the problem.


  1. I made some very light edits to the probability/odds treatment in point 7 to resolve factual inaccuracies. ↩︎


3 Answers

Gurkenglas


What is going to be done with these numbers? If Sleeping Beauty is to gamble her money, she should accept the same betting odds as a thirder. If she has to decide which coinflip result kills her, she should be ambivalent like a halfer.

I mean, I think the "gamble her money" interpretation is just a different question. It doesn't feel to me like a different notion of what probability means, but simply like betting on a fair coin with asymmetric payoffs.

The second question feels closer to an accurate interpretation of what probability means.

Rafael Harth


It ultimately depends on how you define probabilities, and it is possible to define them such that the answer is 1/2.

I personally think that the only "good" definition (I'll specify this more at the end) is that a probability of 1/4 should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism", where we use the idea of repeated experiments to define probabilities, but generalize the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence). So when you flip a coin, the experiment is not the mathematical coin with two equally likely outcomes, but the situation where you as an agent are flipping a physical coin, which may include a 0.01% probability of landing on its side, or some probability of breaking in two halves mid-air or whatever. But the probability of it coming up heads should be about 1/2, because in about 1/2 of the cases where you as an agent are about to flip a physical coin, you subsequently observe it coming up heads.

There are difficulties here with defining the reference class, but I think they can be adequately addressed, and anyway, those don't matter for the Sleeping Beauty experiment because there, the reference class is actually really straightforward. Among the times that you as an agent are participating in the experiment and are woken up and interviewed (and are called Sleeping Beauty, if you want to include this in the reference class), one third will have the coin heads, so the probability is 1/3. This is true regardless of whether the experiment is run repeatedly throughout history, or repeatedly because of Many Worlds, or an infinite universe, etc. (And I think the very few cases in which there is genuinely not a repeated experiment are in fact qualitatively different, since now we're talking logical uncertainty rather than probability, and this distinction is how you can answer 1/3 in Sleeping Beauty without being forced to give the analogous answer on the Presumptuous Philosopher problem.)

So RE this being the only "good" definition, well one thing is that it fits betting odds, but I also suspect that most smart people would eventually converge on an interpretation with these properties if they thought long enough about the nature of probability and implications of having a different definition, though obviously I can't prove this. I'm not aware of any case where I want to define probability differently, anyway.

Charlie Steiner


The question "What is the probability of Heads?" is about the coin, not about your location in time or possible worlds.

This is, I think, the key thing that those smart people disagree with you about.

Suppose Alice and Bob are sitting in different rooms. Alice flips a coin and looks at it - it's Heads. What is the probability that the coin is Tails? Obviously, it's 0% right? That's just a fact about the coin. So I go to Bob in the other room and ask Bob what's the probability the coin is Tails, and Bob tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the coin. Since it was already flipped the probability was already either 0% or 100%, and maybe if you didn't know which it was you should just say you can't assign a probability or something."

Now, suppose there are two universes that differ only by the polarization of a photon coming from a distant star, due to hit Earth in a few hours. And I go into the universe where that polarization is left-handed (rather than right-handed), and in that universe the probability that the photon is right-handed is 0% - it's just a fact about the photon. So I go to the copy of Carol that lives in this universe and ask Carol what's the probability the photon has right-handed polarization, and Carol tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the photon. Since it's already on its way the probability was already either 0% or 100%, and maybe if you didn't know which it was you should just say you can't assign a probability or something."

Now, suppose there are two universes that differ outside of the room that Dave is currently in, but are the same within Dave's room. Say, in one universe all the stuff outside the room is arranged as it is today in our universe, while in the other universe all the stuff outside the room is arranged as it was ten years ago. And I go into the universe where all the stuff outside the room is arranged as it was ten years ago, which I will shorthand as it being 2014 (just a fact about calendars, memories, the positions of galaxies, etc.), and ask Dave what's the probability that the year outside is 2024, and Dave tells me it's 50%...

I mean I am not convinced by the claim that Bob is wrong.

Bob's prior probability is 50%. Bob sees no new evidence to update this prior so the probability remains at 50%.

I don't favour an objective notion of probabilities. From my OP:

2. Bayesian Reasoning

  • Probability is a property of the map (agent's beliefs), not the territory (environment).
  • For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
  • The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.