Preamble

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. During the experiment, Sleeping Beauty will be awakened once or twice, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake:

  • If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  • If the coin comes up tails, she will be awakened and interviewed on Monday and Tuesday.

In either case, she will be awakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is awakened and interviewed she will not be able to tell which day it is or whether she has been awakened before. During the interview Sleeping Beauty is asked: "What is your credence now for the proposition that the coin landed heads?"
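A quick simulation sketch of the setup may help fix ideas (the helper names are mine; it reports both the per-experiment and the per-awakening frequency of Heads, since much of the debate below is over which of these "credence" should track):

```python
import random

def simulate(trials=100_000):
    experiments_heads = 0
    awakenings = 0
    awakenings_heads = 0
    for _ in range(trials):
        heads = random.random() < 0.5    # fair coin
        if heads:
            experiments_heads += 1
            awakenings += 1              # Heads: interviewed on Monday only
            awakenings_heads += 1
        else:
            awakenings += 2              # Tails: interviewed on Monday and Tuesday
    print("heads frequency per experiment:", experiments_heads / trials)     # ~0.5
    print("heads frequency per awakening :", awakenings_heads / awakenings)  # ~0.333

simulate()
```

Both frequencies are uncontroversial; the dispute throughout this thread is which one answers the interview question.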


Motivation

I was recently introduced to the canonical Sleeping Beauty problem and was initially a halfer, but a confused one. More precisely, I thought the halfer position was correct, but smart people seemed to be thirders, and I worried that I was misunderstanding something about the problem or had otherwise confused myself.

I debated the problem extensively on the LW Discord server and with some LLMs and strongly updated towards "thirders are just engaging in gross epistemic malpractice".

A message I sent in the LW server:

Alternatively I started out confused.

Debating this problem here and with LLMs convinced me that I'm not confused and the thirders are actually just doing epistemological nonsense.

It feels arrogant, but it's not a poor reflection of my epistemic state?

I still have some meta level uncertainty re: the nonsense allegations.

I want to be convinced that the thirder position is not nonsense and there is a legitimate disagreement/debate to be had here.

I have read some of the LW posts on the canonical problem here. I won't be linking them due to laziness.

I have not yet read Joe Carlsmith's blog posts or Nick Bostrom's book as of posting this question. I'll probably be listening to them after posting the question.

I asked Sonnet 3.5 to distill my position/objections from our debate, and below is its summary.[1]


Comprehensive Position on the Sleeping Beauty Problem

1. Core Position

  • The correct answer to the Sleeping Beauty problem is the Halfer position (1/2 probability for Heads).
  • The Thirder position (1/3 probability for Heads) is based on misapplications of both Bayesian and frequentist reasoning.

2. Bayesian Reasoning

  • Probability is a property of the map (agent's beliefs), not the territory (environment).
  • For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
  • The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.
  • The original 50/50 probability should remain unchanged after waking up (see the sketch below contrasting the two updates).
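(A minimal sketch of the two Bayesian updates in dispute; the function and the "thirder reading" framing are mine, not part of the summary:)

```python
def posterior_heads(prior=0.5, lik_heads=1.0, lik_tails=1.0):
    """Bayes' rule with (possibly unnormalized) likelihood weights."""
    num = lik_heads * prior
    return num / (num + lik_tails * (1 - prior))

# Halfer reading: O = "I am awakened at some point during the experiment".
# P(O|Heads) = P(O|Tails) = 1, so the prior is unchanged.
print(posterior_heads(lik_heads=1.0, lik_tails=1.0))  # 0.5

# One thirder reading: O = "this particular awakening is occurring";
# Tails produces twice as many awakenings, so its likelihood weight doubles.
print(posterior_heads(lik_heads=1.0, lik_tails=2.0))  # 0.333...
```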

3. Frequentist Critique

  • The Thirder position often relies on a misapplication of frequentist probability.

Key Issues with Frequentist Approach:

  1. Misunderstanding Indistinguishable Events:

    • Thirders wrongly treat multiple indistinguishable wake-ups as distinct evidence.
    • Beauty's subjective experience is identical whether woken once or a million times.
  2. Conflating Processes with Outcomes:

    • Two mutually exclusive processes (Heads: one wake-up, Tails: multiple wake-ups) are incorrectly treated as a single sample space.
    • Multiple Tails wake-ups collapse into one indistinguishable experience.
  3. Misapplying Frequentist Logic:

    • Standard frequentist approach increases sample size with multiple observations.
    • This logic fails here as wake-ups are not independent data points.
  4. Ignoring Problem Structure:

    • Each experiment (coin flip + wake-ups) is one trial.
    • The coin's 50/50 probability remains unchanged regardless of wake-up protocol.

Counterargument to Thirder Position:

  • Thirder Claim: "Beauty would find herself in a Tails wake-up twice as often as a Heads wake-up."
  • Rebuttal: This incorrectly treats each wake-up as a separate trial, rather than considering the entire experiment as one trial.

4. Self-Locating Beliefs

  • Self-locating information (which wake-up you're experiencing) is irrelevant to the coin flip probability.
  • The question "What is the probability of Heads?" is about the coin, not about your location in time or possible worlds.

5. Anthropic Reasoning Rejection

  • Anthropic arguments that treat all possible wake-ups as equally likely samples are rejected.
  • This approach incorrectly combines outcomes from distinct events (coin flip and wake-up protocol).

Expanded Argument:

  • Anthropic reasoning in this context suggests that Beauty should consider herself as randomly selected from all possible wake-up events.
  • This reasoning is flawed because:
    1. It treats the wake-up events as the primary random process, when the actual random process is the coin flip.
    2. It conflates the sampling process (how Beauty is woken up) with the event we're trying to determine the probability of (the coin flip).

Specific Anthropic Argument and Counterargument:

  • Anthropic Argument: "When Beauty wakes up, she is essentially sampling from the space of all possible wake-ups. There are twice as many Tails wake-ups as Heads wake-ups, so the probability of Heads is 1/3."
  • Counterargument:
    1. This incorrectly assumes that each wake-up is an independent event, when they are actually dependent on a single coin flip.
    2. It ignores the fact that the probability we're interested in is that of the coin flip, not the wake-up event.
    3. This reasoning would lead to absurd conclusions if we changed the wake-up protocol (e.g., waking Beauty a million times for Tails would make Heads virtually impossible, which is clearly wrong).

6. Distinguishability vs. Probability

  • Subjective indistinguishability of events doesn't imply equal probability of the underlying states.
  • However, indistinguishability means the events can't provide evidence for updating probabilities.

7. Betting Strategies vs. Probabilities

  • Optimal betting strategies (e.g., always bet on Tails) don't necessarily reflect true probabilities.
  • Asymmetric payoffs can justify betting on Tails without changing the underlying 50/50 probability.

Expanded Argument:

  • The Sleeping Beauty problem presents a scenario where the optimal betting strategy (always betting on Tails) seems to contradict the claimed 50/50 probability. This apparent contradiction is resolved by recognizing that:
    1. Betting strategies can be influenced by factors other than pure probability, such as payoff structures.
    2. The expected value of a bet is not solely determined by the probability of an event, but also by the payoff for each outcome.
    3. In this case, the Tails outcome provides more opportunities to bet, creating an asymmetry in the payoff structure.

Specific Example:

  • Consider a simplified version of the problem where:
    • If the coin lands Heads, Beauty is woken once and can bet $1.
    • If the coin lands Tails, Beauty is woken twice and can bet $1 each time.
    • The payoff for a correct bet is even money: a correct $1 bet nets +$1, an incorrect $1 bet loses the dollar staked.
  • The optimal strategy is to always bet on Tails, because:
    • Betting on Heads: 50% chance of winning $1 (Heads: one winning bet), 50% chance of losing $2 (Tails: two losing bets) = $0.5 - $1 = -$0.5 expected value
    • Betting on Tails: 50% chance of winning $2 (Tails: two winning bets), 50% chance of losing $1 (Heads: one losing bet) = $1 - $0.5 = $0.5 expected value
  • However, this doesn't mean the probability of Tails is higher. It's still 50%, but the payoff structure makes betting on Tails more profitable. (The simulation sketched just below checks these numbers.)
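(A simulation sketch of this example; `mean_payoff` is a hypothetical helper, and Beauty stakes $1 at every awakening as described above:)

```python
import random

def mean_payoff(bet, trials=100_000):
    """Average payoff per experiment when Beauty stakes $1 on `bet` at every awakening."""
    total = 0
    for _ in range(trials):
        coin = "H" if random.random() < 0.5 else "T"
        awakenings = 1 if coin == "H" else 2
        total += awakenings * (1 if bet == coin else -1)  # +$1 per correct bet, -$1 per wrong one
    return total / trials

print("always bet Heads:", mean_payoff("H"))  # ~ -0.5
print("always bet Tails:", mean_payoff("T"))  # ~ +0.5
```

The coin still lands Heads in half of the simulated experiments; only the payoff totals differ.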

Analogy to Clarify:

  • Imagine a fair coin flip where you're offered the following bet:
    • If you bet on Heads and win, you get $1.
    • If you bet on Tails and win, you get $K (where K >> 1, i.e., K is much larger than 1).
  • The optimal strategy is to bet on Tails every time, even though the coin is fair (50/50).
  • If you repeat this experiment many times, always betting on Tails will be a winning strategy in the long run.
  • Despite this, the probability of the coin landing Heads remains 0.5 (50%).

Counterargument to Thirder Position:

  • Thirders might argue: "The optimal betting strategy aligns with the 1/3 probability for Heads."
  • Rebuttal: This confuses expected value with probability. The betting strategy is optimal due to the asymmetric nature of the payoffs (betting twice on Tails vs. once on Heads), not because Tails is more likely. The underlying probability of the coin flip remains 50/50, regardless of the betting structure.

8. Counterfactuals and Different Problems

  • Arguments involving additional information change the problem fundamentally.
  • "X & Y is evidence for H, therefore X is evidence for H" is invalid reasoning.

9. Information Relevance

  • Not all information about the experimental setup is relevant for probability calculations.
  • The wake-up protocol, while part of the setup, doesn't provide discriminatory evidence for Heads vs. Tails.

10. Epistemological Stance

  • Adheres to strict Bayesian principles for updating beliefs.
  • Rejects arguments that conflate distinct problems or misapply probabilistic concepts.

11. Common Thirder Arguments Addressed

  • Frequency of wake-ups: Irrelevant due to subjective indistinguishability.
  • Anthropic reasoning: Incorrectly combines distinct events.
  • Betting strategies: Don't necessarily reflect true probabilities.
  • Self-locating beliefs: Irrelevant to the coin flip probability.

12. Meta-level Considerations

  • Many arguments for the Thirder position stem from subtle misapplications of otherwise valid probabilistic principles.

13. Openness to Counter-Arguments

  • Willing to consider counter-arguments that adhere to rigorous Bayesian principles.
  • Rejects arguments based on frequentist interpretations, anthropic reasoning, or conflation of distinct problems.

This position maintains that the Sleeping Beauty problem, when correctly analyzed using Bayesian principles, does not provide any new information that would justify updating the prior 50/50 probability of the coin flip. It challenges readers to present counter-arguments that do not rely on commonly rejected reasoning patterns and that strictly adhere to Bayesian updating based on genuinely new, discriminatory evidence.


Closing Remarks

I am probably unjustified in my arrogance.

Some people who I strongly respect (e.g. Nick Bostrom) are apparently thirders.

This is IMO very strong evidence that I am actually just massively misunderstanding something or am somehow mistaken here (especially as I have not yet engaged with Nick Bostrom's arguments as of the time of writing this post).

On priors I don't really expect to occupy an (on reflection endorsed) epistemic state where I think Nick Bostrom is making a basic epistemology mistake.

So I expect this is a position I can be easily convinced out of/I myself am misunderstanding something fundamental about the problem.


  1. I made some very light edits to the probability/odds treatment in point 7 to resolve factual inaccuracies. ↩︎

Answers

Rafael Harth


It ultimately depends on how you define probabilities, and it is possible to define them such that the answer is 1/2.

I personally think that the only "good" definition (I'll specify this more at the end) is that a probability of 1/4 should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism", where we use the idea of repeated experiments to define probabilities, but generalize the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence). So when you flip a coin, the experiment is not the mathematical coin with two equally likely outcomes, but the situation where you as an agent are flipping a physical coin, which may include a 0.01% probability of landing on the side, or a probability of breaking in two halves mid-air or whatever. But the probability for it coming up heads should be about 1/2 because in about 1/2 of cases where you as an agent are about to flip a physical coin, you subsequently observe it coming up heads.

There are difficulties here with defining the reference class, but I think they can be adequately addressed, and anyway, those don't matter for the Sleeping Beauty experiment because there, the reference class is actually really straightforward. Among the times that you as an agent are participating in the experiment and are woken up and interviewed (and are called Sleeping Beauty, if you want to include this in the reference class), one third will have the coin heads, so the probability is 1/3. This is true regardless of whether the experiment is run repeatedly throughout history, or repeatedly because of Many Worlds, or an infinite universe, etc. (And I think the very few cases in which there is genuinely not a repeated experiment are in fact qualitatively different, since now we're talking logical uncertainty rather than probability, and this distinction is how you can answer 1/3 in Sleeping Beauty without being forced into the analogous answer on the Presumptuous Philosopher problem.)

So RE this being the only "good" definition, well one thing is that it fits betting odds, but I also suspect that most smart people would eventually converge on an interpretation with these properties if they thought long enough about the nature of probability and implications of having a different definition, though obviously I can't prove this. I'm not aware of any case where I want to define probability differently, anyway.

So in this case, I agree that like if this experiment is repeated multiple times and every Sleeping Beauty version created answered tails, the reference class of Sleeping Beauty agents would have many more correct answers than if the experiment is repeated many times and every Sleeping Beauty created answered heads.

I think there's something tangible here and I should reflect on it.

I separately think though that if the actual outcome of each coin flip was recorded, there would be a roughly equal distribution between heads and tails.

And when I was thinking through the question before it was always about trying to answer a question regarding the actual outcome of the coin flip and not what strategy maximises monetary payoffs under even bets.

While I do think that like betting odds isn't convincing re: actual probabilities because you can just have asymmetric payoffs on equally probable mutually exclusive and jointly exhaustive events, the "reference class of agents being asked this question" seems like a more robust rebuttal.

I want to take some time to think on this.


Strong upvoted because this argument actually/genuinely makes me think I might be wrong here.

Much less confident now, and mostly confused.

I separately think though that if the actual outcome of each coin flip was recorded, there would be a roughly equal distribution between heads and tails.

Importantly, this is counting each coinflip as the "experiment", whereas the above counts each awakening as the "experiment". It's okay that different experiments would see different outcome frequencies.

Viliam
Yes. If you record the moments when the outside observer sees the coin landing, you will get 1/2. If you record the moments when the Sleeping Beauty, right after making her bet, is told the actual outcome, you will get 1/3. So we get 1/2 by identifying with the outside observer, but he is not the one who was asked in this experiment. Unless you change the rules so that the Sleeping Beauty is only rewarded for the correct bet at the end of the week, and will only get one reward even if she made two (presumably identical) bets. In that case, recording the moment when the Sleeping Beauty gets the reward or not, you will again get 1/2.
Rafael Harth
What I'd say is that this corresponds to the question, "someone tells you they're running the Sleeping Beauty experiment and just flipped a coin; what's the probability that it's heads?". Different reference class, different distribution; the probability now is 0.5. But this is different from the original question, where we are Sleeping Beauty.
DragonGod
My current position now is basically:

I'm curious how your conception of probability accounts for logical uncertainty?

Rafael Harth
I count references within each logical possibility and then multiply by their "probability". Here's a super contrived example to explain this. Suppose that if the last digit of pi is between 0 and 3, Sleeping Beauty experiments work as we know them, whereas if it's between 4 and 9, everyone in the universe is miraculously compelled to interview Sleeping Beauty 100 times if the coin is tails. In this case, I think P(coin heads|interviewed) is 0.4·(1/3) + 0.6·(1/101) ≈ 0.139. So it doesn't matter how many more instances of the reference class there are in one logical possibility; they don't get "outside" their branch of the calculation. So in particular, the presumptuous philosopher problem doesn't care about number of classes at all. In practice, it seems super hard to find genuine examples of logical uncertainty and almost everything is repeated anyway. I think the presumptuous philosopher problem is so unintuitive precisely because it's a rare case of actual logical uncertainty where you genuinely cannot count classes.

I personally think that the only "good" definition (I'll specify this more at the end) is that a probability of 1/4 should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism", where we use the idea of repeated experiments to define probabilities, but generalize the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence). 

Why do you suddenly substitute the notion of ... (read more)

Rafael Harth
Just to be clear, the reference class here is the set of all instances across all of space and time where an agent is in the same "situation" as you (where the thing you can argue about is how precisely one has to specify the situation). So in the case of the coinflip, it's all instances across space and time where you flip a physical coin (plus, if you want to specify further, any number of other details about the current situation). So with that said, to answer your question: why define probabilities in terms of this concept? Because I don't think I want a definition of probability that doesn't align with this view, when it's applicable. If we can discretely count the number of instances across the history of the universe that fit the current situation, and we know some event happens in one third of those instances, then I think the probability has to be one third. This seems very self-evident to me; it seems exactly what the concept of probability is supposed to do. I guess one analogy -- suppose two thirds of all houses are painted blue from the outside and one third red, and you're in one house but have no idea which one. What's the probability that it's blue? I think it's 2/3, and I think this situation is precisely analogous to the reference class construction. Like I actually think there is no relevant difference; you're in one of the situations that fit the current situation (trivially so), and you can't tell which one (by construction; if you could, that would be included in the definition of the reference class, which would make it different from the others). Again, this just seems to get at precisely the core of what a probability should do. So I think that answers it? Like I said, I think you can define "probability" differently, but if the probability doesn't align with reference class counting, then it seems to me that the point of the concept has been lost. (And if you do agree with that, the question is just whether or not reference class counting
Ape in the coat
Suppose I want matrix multiplication to be commutative. Surely it would be so convenient if it was! I can define some operator * over matrices so that A*B = B*A. I can even call this operator "matrix multiplication". But did I just make matrix multiplication, as it's conventionally defined, commutative? Of course not. I logically pinpointed a new function and called it the same way as the previous function is being called, but it didn't change anything about how the previous function is logically pinpointed. My new function may have some interesting applications and therefore deserve to be talked about in its own right. But calling it "matrix multiplication" is very misleading. And if I were to participate in conversation about matrix multiplication while talking about my function I'd be confusing everyone. This is basically the situation that we have here. Initially, the probability function is defined over iterations of a probability experiment. You define a different function over all space and time, which you still call "probability". It surely has properties that you like, but it's a different function! Please use another name; this one is already taken. Or add a disclaimer. Preferably do both. You know how easy it is to confuse people with such things! Definitely do not start participating in conversations about probability while talking about your function. As long as these instances are independent of each other - sure. Like with your houses analogy. When we are dealing with simple, central cases there is no disagreement between probability and weighted probability and so nothing to argue about. But as soon as we are dealing with a more complicated scenario where there is no independence and it's possible to be inside multiple houses in the same instance... Surely you see how demanding a coherent P(Red xor Blue) becomes unfeasible? The problem is, our intuitions are too eager to assume that everything is independent. We are used to think in terms
ProgramCrafter
Upon rereading your posts, I retract my disagreement on "mutually exclusive outcomes". Instead... An obvious way to do so is to put a hazard sign on "probability" and just not use it, not putting resources into figuring out what "probability" SB should name, isn't it? For instance, suppose Sleeping Beauty claims "my credence for Tails is 1/π"; any specific objection would be based on what you expected to hear. (And now I realize a possible point of why you're arguing to keep the "probability" term for such scenarios well-defined: so that people in ~anthropic settings can tell you their probability estimates and you, being an observer, could update on that information.) As for why I believe probability theory to be useful in life despite the fact that sometimes different tools need to be used: I believe disappearing as a Boltzmann brain or simulated person is balanced out by appearing the same way, dissolving into different quantum branches is balanced out by branches reassembling, and likewise for most processes.
Ape in the coat
It's an obvious thing to do when dealing with similarity clusters poorly defined in natural language. Not so much when we are talking about a logically pinpointed mathematical concept which we know is crucial for epistemology. It's not just about anthropic scenarios and not just about me being able to understand other people. It's about the general truth-preserving mechanism of logical and mathematical reasoning. When people just use different definitions - this is annoying but fine. But when they use different definitions without realizing that these definitions are different and, moreover, insist that it's you who is making a mistake - then we have an actual disagreement about math which will provide more confusion along the way. Anthropic scenarios are just the ones where this confusion is noticeable. What exactly do you mean by "different tools need to be used"? Can you give me an example?
ProgramCrafter
I mean that Beauty should maintain a full model of the experiment, and use decision theory as well as probability theory (if the latter is even useful, which it admittedly seems to be). If she didn't keep track of the full setup but only "a fair coin was flipped, so the odds are 1:1", she would predictably lose when betting on the coin outcome. Also, I've minted another "paradox" version. I can predict you'll take issue with one of the formulations in it, but what do you think about it?
Ape in the coat
I suppose the participant is just supposed to lie about their credence here in order to "win". On Tuesday your credence in Heads is supposed to be 0, but saying the true value would go against the experimental protocol unless you also said that your credence is 0 on Monday, which would also be a lie.
Radford Neal
I don't understand this formulation. If Beauty always says that the probability of Heads is 1/7, does she win? Whatever "win" means...
ProgramCrafter
She certainly gets a reward for following experimental protocol, but beyond that... I concur there's the problem, and I have the same issue with standard formulation asking for probability. In particular, pushing problem out to morality "what should Sleeping Beauty answer so that she doesn't feel as if she's lying" doesn't solve anything either; rather, it feels like asking question "is continuum hypothesis true?" providing only options 'true' and 'false', while it's actually independent of ZFC axioms (claims of it or of its negation produce different models, neither proven to self-contradict). P.S. One more analogue: there's a field, and some people (experimenters) are asking whether it rained recently with clear intent to walk through if it didn't; you know it didn't rain but there are mines all over the field. I argue you should mention the mines first ("that probability - which by the way will be 1/2 - can be found out, conforms to epistemology, but isn't directly usable anywhere") before saying if there was rain.
Rafael Harth
If you can demonstrate how, in the reference class setting, there is a relevant criterion by which several instances should be grouped together, then I think you could have an argument. If you look at space-time from above, there's two blue houses for every red house. Sorry, I meant there's two SB(=Sleeping Beauty)-tails instances for every SB-heads instance. The two instances you want to group together (tails-Monday & tails-Tuesday) aren't actually at the same time (not that I think it matters). If the universe is very large or Many Worlds is true, then there are in fact many instances of Monday-heads, Monday-tails, and Tuesday-tails occurring at the same time, and I don't think you want to group those together. In any case, from the PoV of SB, all instances look identical to you. So by what criterion should we group some of them together? That's the thing I think your position requires (given that you accept reference classes are a priori valid and then become invalid in some cases), and I don't see the criterion.

Gurkenglas


What is going to be done with these numbers? If Sleeping Beauty is to gamble her money, she should accept the same betting odds as a thirder. If she has to decide which coinflip result kills her, she should be ambivalent like a halfer.

Dana

Halfer makes sense if you pre-commit to a single answer before the coin-flip, but not if you are making the decisions independently after each wake-up event. If you say heads, you have a 50% chance of surviving when asked on Monday, and a 0% chance of surviving when asked on Tuesday. If you say tails, you have a 50% chance of surviving Monday and a 100% chance of surviving Tuesday.

Gurkenglas
If you say heads every time, half of all futures contain you; likewise with tails.
Dana
I've updated my comment. You are correct as long as you pre-commit to a single answer beforehand, not if you are making the decision after waking up. The only reason pre-committing to heads works, though, is because it completely removes the Tuesday interview from the experiment. She will no longer be awoken on Tuesday, even if the result is tails. So, this doesn't really seem to be in the spirit of the experiment in my opinion. I suppose the same pre-commit logic holds if you say the correct response gets (1/coin-side-wake-up-count) * value per response though.

Betting arguments are tangential here.

https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets

The disagreement is about how to factorise the expected utility function into probability and utility, not about which bets to make. This disagreement is still tangible, because the way you define your functions has meaningful consequences for your mathematical reasoning.

I mean, I think the "gamble her money" interpretation is just a different question. It doesn't feel to me like a different notion of what probability means, but just betting on a fair coin with asymmetric payoffs.

The second question feels closer to actually an accurate interpretation of what probability means.

Gurkenglas
https://www.lesswrong.com/posts/Mc6QcrsbH5NRXbCRX/dissolving-the-question
Ape in the coat
Probability is not some vaguely defined similarity cluster like "sound". It's a mathematical function that has specific properties. Not all of them are solely about betting. We can dissolve the semantic disagreement between halfers and thirders and figure out that they are talking about two different functions p and p' with subtly different properties while producing the same betting odds.  This in itself, however, doesn't resolve the actual question: which of these functions fits the strict mathematical notion of probability for the Sleeping Beauty experiment and which doesn't. This question has an answer.

Dana


I would frame the question as "What is the probability that you are in heads-space?", not "What is the probability of heads?". The probability of heads is 1/2, but the probability that I am in heads-space, given I've just experienced a wake-up event, is 1/3.

The wake-up event is only equally likely (under Heads and Tails) on Monday. On Tuesday, the wake-up event has probability 0 under Heads and 1 under Tails. We don't know whether it is Tuesday or not, but we know there is some chance of it being Tuesday, because 1/3 of wake-up events happen on Tuesday, and we've just experienced a wake-up event:

P(Monday|wake-up) = 2/3
P(Tuesday|wake-up) = 1/3
P(Heads|Tuesday) = 0
P(Heads|Monday) = 1/2
P(Heads|wake-up) = P(Heads|Monday)·P(Monday|wake-up) + P(Heads|Tuesday)·P(Tuesday|wake-up) = (1/2)·(2/3) + 0·(1/3) = 1/3
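(A sketch of the bookkeeping behind these conditionals, weighting each awakening by the probability of the coin branch that produces it; the representation is my own:)

```python
from fractions import Fraction

half = Fraction(1, 2)
# (coin, day, weight): each branch has probability 1/2; Tails yields two awakenings.
awakenings = [("H", "Mon", half), ("T", "Mon", half), ("T", "Tue", half)]

total = sum(w for _, _, w in awakenings)                       # 3/2
print(sum(w for _, d, w in awakenings if d == "Mon") / total)  # 2/3 = P(Monday|wake-up)
print(sum(w for c, _, w in awakenings if c == "H") / total)    # 1/3 = P(Heads|wake-up)
```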

samshap


Thirder here (with acknowledgement that the real answer is to taboo 'probability' and figure out why we actually care)

The subjective indistinguishability of the two Tails wakeups is not a counterargument  - it's part of the basic premise of the problem. If the two wakeups were distinguishable, being a halfer would be the right answer (for the first wakeup).

Your simplified example/analogies really depend on that fact of distinguishability. Since you didn't specify whether or not you have it in your examples, it would change the payoff structure.

I'll also note you are being a little loose with your notion of 'payoff'. You are calculating the payoff for the entire experiment, whereas I define the 'payoff' as being the odds being offered at each wakeup. (since there's no rule saying that Beauty has to bet the same each time!)

To be concise, here's my overall rationale:

Upon each (indistinguishable) wakeup, you are given the following offer:

  • If you bet H and win, you get N dollars.
  • If you bet T and win, you get 1 dollar.

If you believe T yields a higher EV, then you have a credence P(Heads) < 1/(1+N).

You get a positive EV from betting T for all N up to 2, so P(Heads) ≤ 1/3. Thus you should be a thirder.
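(A sketch of the EV comparison behind this; the payoff values follow the offer as reconstructed above:)

```python
def ev_bet_heads(p_heads, n):
    return p_heads * n          # win N dollars if the coin was Heads

def ev_bet_tails(p_heads):
    return (1 - p_heads) * 1    # win 1 dollar if the coin was Tails

p = 1 / 3                       # thirder per-awakening credence
for n in (1.50, 2.00, 2.50):
    better = "T" if ev_bet_tails(p) > ev_bet_heads(p, n) else "H"
    print(n, better)            # T is favoured below N = 2, H above; indifference at N = 2
```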

Here's a clarifying example where this interpretation becomes more useful than yours:

The experimenter flips a second coin. If the second coin is Heads (H2), then N = 1.50 on Monday and 2.50 on Tuesday. If the second coin is Tails, the order is reversed.

I'll maximize my EV if I bet T when N = 1.50, and H when N = 2.50. Both of these fall cleanly out of 'thirder' logic.

What's the 'halfer' story here? Your earlier logic doesn't allow for separate bets on each awakening.

Charlie Steiner


The question "What is the probability of Heads?" is about the coin, not about your location in time or possible worlds.

This is, I think, the key thing that those smart people disagree with you about.

Suppose Alice and Bob are sitting in different rooms. Alice flips a coin and looks at it - it's Heads. What is the probability that the coin is Tails? Obviously, it's 0% right? That's just a fact about the coin. So I go to Bob in the other room and ask Bob what's the probability the coin is Tails, and Bob tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the coin. Since it was already flipped the probability was already either 0% or 100%, and maybe if you didn't know which it was you should just say you can't assign a probability or something."

Now, suppose there are two universes that differ only by the polarization of a photon coming from a distant star, due to hit Earth in a few hours. And I go into the universe where that polarization is left-handed (rather than right-handed), and in that universe the probability that the photon is right-handed is 0% - it's just a fact about the photon. So I go to the copy of Carol that lives in this universe and ask Carol what's the probability the photon has right-handed polarization, and Carol tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the photon. Since it's already on its way the probability was already either 0% or 100%, and maybe if you don't know which it was you should just say you can't assign a probability or something."

Now, suppose there are two universes that differ outside of the room that Dave is currently in, but are the same within Dave's room. Say, in one universe all the stuff outside the room is arranged as it is today in our universe, while in the other universe all the stuff outside the room is arranged as it was ten years ago. And I go into the universe where all the stuff outside the room is arranged as it was ten years ago, which I will shorthand as it being 2014 (just a fact about calendars, memories, the positions of galaxies, etc.), and ask Dave what's the probability that the year outside is 2024, and Dave tells me it's 50%...

I mean I am not convinced by the claim that Bob is wrong.

Bob's prior probability is 50%. Bob sees no new evidence to update this prior so the probability remains at 50%.

I don't favour an objective notion of probabilities. From my OP:

2. Bayesian Reasoning

  • Probability is a property of the map (agent's beliefs), not the territory (environment).
  • For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
  • The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.
  • The o
... (read more)
Charlie Steiner
Yes, Bob is right. Because the probability is not a property of the coin. It's 'about' the coin in a sense, but it also depends on Bob's knowledge, including knowledge about location in time (Dave) or possible worlds (Carol).

Radford Neal


You need to start by clearly understanding that the Sleeping Beauty Problem is almost realistic - it is close to being actually doable. We often forget things. We know of circumstances (eg, head injury) that cause us to forget things. It would not be at all surprising if the amnesia drug needed for the scenario to actually be carried out were discovered tomorrow. So the problem is about a real person. Any answer that starts with "Suppose that Sleeping Beauty is a computer program..." or otherwise tries to divert you away from regarding Sleeping Beauty as a real person is at best answering some other question.

Second, the problem asks what probability of Heads Sleeping Beauty should have on being interviewed after waking. This of course means what probability she should rationally have. This question makes no sense if you think of probabilities as some sort of personal preference, like whether you like chocolate ice cream or not. Probabilities exist in the framework of probability theory and decision theory. Probabilities are supposed to be useful for making decisions. Personal beliefs come into probabilities through prior probabilities, but for this problem, the relevant prior beliefs are supposed to be explicitly stated (eg, the coin is fair). Any answer that says "It depends on how you define probabilities", or "It depends on what reference class you use", or "Probabilities can't be assigned in this problem" is just dodging the question. In real life, you can't just not decide what to do on the basis that it would depend on your reference class or whatever. Real life consists of taking actions, based on probabilities (usually not explicitly considered, of course). You don't have the option of not acting (since no action is itself an action).

Third, in the standard framework of probability and decision theory, your probabilities for different states of the world do not depend on what decisions (if any) you are going to make. The same probabilities can be used for any decision. That is one of the great strengths of the framework - we can form beliefs about the world, and use them for many decisions, rather than having to separately learn how to act on the basis of evidence for each decision context. (Instincts like pulling our hand back from a hot object are this sort of direct evidence->action connection, but such instincts are very limited.) Any answer that says the probabilities depend on what bets you can make is not using probabilities correctly, unless the setup is such that the fact that a bet is offered is actual evidence for Heads versus Tails.

Of course, in the standard presentation, Sleeping Beauty does not make any decisions (other than to report her probability of Heads). But for the problem to be meaningful, we have to assume that Beauty might make a decision for which her probability of Heads is relevant. 

So, now the answer... It's a simple Bayesian problem. On Sunday, Beauty thinks the probability of Heads is 1/2 (ie, 1-to-1 odds), since it's a fair coin. On being woken, Beauty knows that Beauty experiences an awakening in which she has a slight itch in her right big toe, two flies are crawling towards each other on the wall in front of her, a Beatles song is running through her head, the pillow she slept on is half off the bed, the shadow of the sun shining on the shade over the window is changing as the leaves in the tree outside rustle due to a slight breeze, and so forth. Immediately on wakening, she receives numerous sensory inputs. To update her probability of Heads in Bayesian fashion, she should multiply her prior odds of Heads by the ratio of the probability of her sensory experience given Heads to the probability of her experience given Tails.

The chance of receiving any particular set of such sensory inputs on any single wakening is very small. So the probability that Beauty has this particular experience when there are two independent wakenings is very close to twice that small probability. The ratio of the probability of experiencing what she knows she is experiencing given Heads to that probability given Tails is therefore 1/2, so she updates her odds in favour of Heads from 1-to-1 to 1-to-2. That is, Heads now has probability 1/3. 

(Not all of Beauty's experiences will be independent between awakenings - eg, the colour of the wallpaper may be the same - but this calculation goes through as long as there are many independent aspects, as will be true for any real person.)
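In odds form, the update described above (writing E for the particular bundle of sensory details and ε for its small probability on a single awakening; notation mine):

$$
\frac{P(\text{Heads}\mid E)}{P(\text{Tails}\mid E)}
= \frac{P(\text{Heads})}{P(\text{Tails})}\cdot\frac{P(E\mid \text{Heads})}{P(E\mid \text{Tails})}
\approx \frac{1}{1}\cdot\frac{\varepsilon}{2\varepsilon}
= \frac{1}{2},
\qquad\text{so } P(\text{Heads}\mid E)=\frac{1}{3}.
$$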

The 1/3 answer works. Other answers, such as 1/2, do not work. One can see this by looking at how probabilities should change and at how decisions (eg, bets) should be made.

For example, suppose that after wakening, Beauty says that her probability of Heads is 1/2. It also happens that, in an inexcusable breach of experimental protocol, the experimenter interviewing her drops her phone in front of Beauty, and the phone display reveals that it is Monday. How should Beauty update her probability of Heads? If the coin landed Heads, it is certain to be Monday. But if the coin landed Tails, there was only a probability 1/2 of it being Monday. So Beauty should multiply her odds of Heads by 2, giving a 2/3 probability of Heads.

But this is clearly wrong. Knowing that it is Monday eliminates any relevance of the whole wakening/forgetting scheme. The probability of Heads is just 1/2, since it's a fair coin. Note that if Beauty had instead thought the probability of Heads was 1/3 before seeing the phone, she would correctly update to a probability of 1/2.

Some Halfers, when confronted with this argument, maintain that Beauty should not update her probability of Heads when seeing the phone, leaving it at 1/2. But as the phone was dropping, before she saw the display, Beauty would certainly not think that it was guaranteed to show that it is Monday (Tuesday would seem possible). So not updating is unreasonable.

We also see that 1/2 does not work in betting scenarios. I'll just mention the simplest of these. Suppose that when Beauty is woken, she is offered a bet in which she will win $12 if the coin landed Heads, and lose $10 if the coin landed Tails. She knows that she will always be offered such a bet after being woken, so the offer does not provide any evidence for Heads versus Tails. If she is woken twice, she is given two opportunities to bet, and could take either, both, or neither. Should she take the offered bet?

If Beauty thinks that the probability of Heads is 1/2, she will take such bets, since she thinks that the expected payoff of such a bet is (1/2)*12-(1/2)*10=1. But she shouldn't take these bets, since following the strategy of taking these bets has an expected payoff of (1/2)*12 - (1/2)*2*10 = -4. In contrast, if Beauty thinks the probability of Heads is 1/3, she will think the expected payoff from a bet is (1/3)*12-(2/3)*10=-2.666... and not take it.
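(A quick check of these numbers; the always-accept simulation and helper name are mine:)

```python
import random

def mean_payoff_always_accept(trials=100_000):
    """Average payoff per experiment if Beauty takes the bet at every awakening."""
    total = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        total += 12 if heads else -10 * 2   # Tails: two awakenings, two losing bets
    return total / trials

print(mean_payoff_always_accept())   # ~ -4, matching (1/2)*12 - (1/2)*2*10

# Per-awakening expected value under each candidate credence:
print((1/2) * 12 - (1/2) * 10)       # +1.0   -> credence 1/2 says "accept"
print((1/3) * 12 - (2/3) * 10)       # ~-2.67 -> credence 1/3 says "decline"
```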

Note that Beauty is a real person. She is not a computer program that is guaranteed to make the same decision in all situations where the "relevant" information is the same. It is possible that if the coin lands Tails, and Beauty is woken twice, she will take the bet on one awakening, and refuse the bet on the other awakening. Her decision when woken is for that awakening alone. She makes the right decisions if she correctly applies decision theory based on the probability of Heads being 1/3. She makes the wrong decision if she correctly applies decision theory with the wrong probability of 1/2 for Heads.

She can also make the right decision by incorrectly applying decision theory with an incorrect probability for Heads, but that isn't a good argument for that incorrect probability.

Anders Lindström


If the experiment instead was constructed such that:

  1. If the coin comes up heads, Sleeping Beauty will be awakened and interviewed on Monday only.
  2. If the coin comes up tails, Sleeping Beauty's twin sister will be awakened and interviewed on Monday and Sleeping Beauty will be awakened and interviewed on Tuesday.

In this case it is "obvious" that the halfer position is the right choice. So why would it be any different if Sleeping Beauty in the case of tails is awakened on Monday too, since she in this experiment has zero recollection of that event? It does not matter how many other people they have woken up before the day she is woken; she has NO new information that could update her beliefs.

Or say that the experiment instead was constructed so that for tails she would be woken up and interviewed 999999 days in a row; would she then say upon being woken up that the probability that the coin landed heads is 1/1000000?

If the first sister's experience is equivalent to the original Sleeping Beauty problem, then wouldn't the second sister's experience also have to be equivalent by the same logic?  And, of course, the second sister will give 100% odds to it being Monday.  

Suppose we run the sister experiment, but somehow suppress their memories of which sister they are. If they each reason that there's a two-thirds chance that they're the first sister, since their current experience is certain for her but only 50% likely for the second sister, then their odds of i... (read more)

Anders Lindström
Maybe I was a bit vague. I was trying to say that waking up SB's twin sister on Monday was a way of saying that SB would be equally aware of that as if she herself were awakened on Monday under the conditions stipulated in the original experiment, i.e. zero recollection of the event. Or the other way around: SB is awakened on Monday but her twin sister on Tuesday. SB will not be aware that her twin sister will be awakened on Tuesday. For that reason she is only awakened ONE time no matter if it is heads or tails. She will only experience ONE awakening per path. There is no cumulative effect of her being awakened 2 or a million times; every time is the "first" time and the "last" time. If she is awake it is an equal chance that it is day 1 on the heads path as it would be day 56670395873966 (or any other day) on the tails path as far as she knows. Or like this: Imagine that I flip a coin that I can see but you can not. I give you the rule that if it is heads I show you a picture of a dog. If it is tails, I show you the same picture of a dog, but I might have shown this picture to thousands of people before you and maybe thousands of people after you, which you have no information about. You might be the first one to see it but you might also be the last one to see it or somewhere in the middle, i.e. you are not aware of the other observers. When I show you the picture of the dog, what chance do you give that the coin flip was heads? But I am curious to know how a person with a thirder position argues in the case that she is awakened 999 or 8490584095805 times on the tails path: what probability should SB give heads in that case?

Zane


If you look over all possible worlds, then asking "did the coin come up Heads or Tails" as if there's only one answer is incoherent. If you look over all possible worlds, there's a ~100% chance the coin comes up as Heads in at least one world, and a ~100% chance the coin comes up as Tails in at least one world.

But from the perspective of a particular observer, the question they're trying to answer is a question of indexical uncertainty - out of all the observers in their situation, how many of them are in Heads-worlds, and how many of them are in Tails-worlds? It's true that there are equally as many Heads-worlds as Tails-worlds - but 2/3 of observers are in the latter worlds.

Or to put it another way - suppose you put 10 people in one house, and 20 people in another house. A given person should estimate a 1/3 chance that they're in the first house - and the fact that 1 house is half of 2 houses is completely irrelevant. Why should this reasoning be any different just because we're talking about possible universes rather than houses?

Ben Livengood


"What is your credence now for the proposition that the coin landed heads?"

There are three doors. Two are labeled Monday, and one is labeled Tuesday. Behind each door is a Sleeping Beauty. In a waiting room, many (finite) more Beauties are waiting; every time a Beauty is anesthetized, a coin is flipped and taped to their forehead with clear tape. You open all three doors, the Beauties wake up, and you ask the three Beauties The Question. Then they are anesthetized, the doors are shut, and any Beauties with a Heads showing on their foreheads or behind a Tuesday door are wheeled away after the coin is removed from their forehead. The Beauty with a Tails on their forehead behind the Monday door is wheeled behind the Tuesday door. Two new Beauties are wheeled behind the two Monday doors, one with Heads and one with Tails. The experiment repeats.

You observe that Tuesday Beauties always have a Tails taped to their forehead. You always observe that one Monday Beauty has a Tails showing, and one has a Heads showing. You also observe that every Beauty says 1/3, matching the ratio of Heads to Tails showing, and it is apparent that they can't see the coins taped to their own or each other's foreheads or the door they are behind. Every Tails Beauty is questioned twice. Every Heads Beauty is questioned once. You can see all the steps as they happen, there is no trick, every coin flip has 1/2 probability for Heads.

There is eventually a queue of waiting Sleeping Beauties showing all Heads or all Tails, and a new Beauty must be anesthetized with a new coin; the queue length changes over time and sometimes switches which face is showing. You can stop the experiment when the queue is empty, as a random walk guarantees will eventually happen, if you like tying up loose ends.

Tao Lin


I prefer to just think about utility, rather than probabilities. Then you can have 2 different "incentivized sleeping beauty problems"

  • Each time you are awakened, you bet on the coin toss, with a fixed dollar payout for a correct bet. You get to spend this money on that day or save it for later or whatever
  • At the end of the experiment, you are paid money equal to what you would have made betting at the average of the probabilities you stated when awoken.

In the first case, 1/3 maximizes your money, in the second case 1/2 maximizes it.
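(One way to make the two payout schemes concrete; the quadratic score below is my substitution for explicit bets, not Tao Lin's wording:)

```python
# Score a stated credence q for Heads with -(q - outcome)^2, where outcome = 1 for Heads, 0 for Tails.
qs = [i / 1000 for i in range(1001)]

# Scheme 1: scored once per awakening (Tails is scored twice per experiment).
best_per_awakening = max(qs, key=lambda q: 0.5 * -((q - 1) ** 2) + 0.5 * (-2 * q * q))

# Scheme 2: scored once per experiment, on the average stated credence.
best_per_experiment = max(qs, key=lambda q: 0.5 * -((q - 1) ** 2) + 0.5 * (-(q * q)))

print(best_per_awakening, best_per_experiment)   # ~0.333 and 0.5
```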

To me this implies that in real world analogues to the Sleeping Beauty problem, you need to ask whether your reward is per-awakening or per-world, and answer accordingly

That argument just shows that, in the second betting scenario, Beauty should say that her probability of Heads is 1/2. It doesn't show that Beauty's actual internal probability of Heads should be 1/2. She's incentivized to lie.

EDIT: Actually, on considering further, Beauty probably should not say that her probability of Heads is 1/2. She should probably use a randomized strategy, picking what she says from some distribution (independently for each wakening). The distribution to use would depend on the details of what the bet/bets is/are.

Ape in the coat


Alternatively I started out confused.

Debating this problem here and with LLMs convinced me that I'm not confused and the thirders are actually just doing epistemological nonsense.

It feels arrogant, but it's not a poor reflection of my epistemic state?

Welcome to the club.

I have read some of the LW posts on the canonical problem here. I won't be linking them due to laziness.

I suppose my posts are among the ones that you are talking about here?

robo

Hijacking this thread, has anybody worked through Ape in the coat's anthropic posts and understood / gotten stuff out of them?  It's something I might want to do sometime in my copious free time but haven't worked up to it yet.

weightt an
I propose to sic o1 on them to distill it all into something readable/concise. (I tried to comprehend it and failed / got distracted.) I think some people pointed out in the comments that their model doesn't represent the probability of "what day it is NOW", btw.
Ape in the coat
I'm actually talking about it in the post here. But yes, this is additionally explored in the comments pretty well. Here is the core part that allows one to understand why "Today" is ill-defined from the perspective of the Beauty:
weightt an
Let's say there is an accurate mechanical calendar in a closed box in the room. She can open it but wouldn't. Should she have no expectation about what state this calendar is in?
Ape in the coat
What state the calendar is in, when? On Monday it's Monday. On Tuesday it's Tuesday. And "Today" is ill-defined; there is no coherent state for it.
weightt an
Well, now! She looks at the box and thinks there is definitely a calendar in some state. What state? What would happen if I open it?
Ape in the coat
Please specify this "now" thingy you are talking about, using formal logic. If this is a meaningful event for the setting, surely there wouldn't be any problems. Are you talking about Monday xor Tuesday? Monday or Tuesday? Monday and Tuesday? Something else?
weightt an
Well, idk. My opinion here is that you bite some weird bullet, which I'm very ambivalent about. I think the "now" question makes total sense, and you factor it out of your model into some separate parts. Like, can you add to the Sleeping Beauty setup some additional decision problems involving the calendar? Will it work seamlessly?
Ape in the coat
The counter-intuitiveness comes from us not being accustomed to reasoning under amnesia and repetition of the same experience. It's understandable that initially we would think that the question about "now"/"today" makes sense, as we are used to situations where it indeed does. But then we can clearly see that in such situations there is no problem with formally defining what event we mean by it. Contrary to SB, where such an event is ill-defined. Oh, absolutely. Suppose that on every awakening the Beauty is proposed to bet that "Today is Monday". What odds is she supposed to take? "Today is Monday" is ill-defined, but she can construct a corresponding betting scheme using the events "Monday awakening happens" and "Tuesday awakening happens" like this:

E(Monday) = P(Monday)·U(Monday) - P(Tuesday)·U(Tuesday)
P(Monday) = 1; P(Tuesday) = 1/2, therefore
E(Monday) = U(Monday) - (1/2)·U(Tuesday)
Solving E(Monday) = 0 for U(Monday) gives U(Monday) = (1/2)·U(Tuesday)

Which means 2:1 betting odds. As you see, everything is quite seamless.
weightt an
So, she shakes the box contemplatively. There is a mechanical calendar inside. She knows the betting odds of it displaying "Monday" but not the credence. She thinks it's really, really weird.
Ape in the coat
I'm very available to answer questions about my posts as soon as people actually engage with the reasoning, so feel free to ask if you feel confused about anything. If I am to highlight the core principle it would be: thinking in terms of what happens in the probability experiment as a whole, to the best of your knowledge and from your perspective as a participant. Suppose this experiment happened to you multiple times. If on an iteration of the experiment something happens 2/3 of the time, then the probability of such an event is 2/3. If something happens 100% of the time, then its probability is 1 and the realization of such an event doesn't give you any evidence. All the rest is commentary.

I have not read all of them!

Comments

TsviBT
  • Rebuttal: This confuses expected value with probability. The betting strategy is optimal due to the asymmetric nature of the payoffs (betting twice on Tails vs. once on Heads), not because Tails is more likely. The underlying probability of the coin flip remains 50/50, regardless of the betting structure.

(This is not a rhetorical question:) What do you mean by "probability" here? A common way of arguing for "having probabilities" is that it's how you make consistent bets -- bets that aren't obviously leaving utility on the table (e.g. Dutch bookable). But you're dismissing arguments of the form [I want to bet like this] -> [therefore my probabilities should be such and such]. 

I would think that what we're learning is that there's some sort of equivalence principle or something, where it becomes hard to disentangle [I care about my actions in this information-set twice as much] from the allegedly more narrow [This information-set is "truly twice as likely"]. See probutilities

An answer might be "The world happens to be the case that there pretty strongly tends to be a bunch of stuff that's external to you, which isn't correlated with the size of your information-sets (i.e. how many instances of you there are who you can't distinguish yourself from). That stuff is what we call "reality" and what we have "probabilities" about.". But that doesn't seem like a very fundamental notion, and would break down in some cases [citation needed]. 

(This is not a rhetorical question:) What do you mean by "probability" here?

Yeah, since posting this question:

I have updated towards thinking that it's in a sense not obvious/not clear what exactly "probability" is supposed to be interpreted as here.

And once you pin down an unambiguous interpretation of probability the problem dissolves.

I had a firm notion in mind for what I thought probability meant. But Rafael Harth's answer really made me unconfident that the notion I had in mind was the right notion of probability for the question.

I think the question is underdefined. Some bets are posed once per instance of you, some bets are posed once per instance of a world (whatever that means), etc.

I have read and participated in many of these debates, and it continually frustrates me that people use the word "probability" AS IF it were objective and a property of the territory, when your Bayesian tenet, "Probability is a property of the map (agent's beliefs), not the territory (environment)", is binding in every case I can think of. I'm actually agnostic on whether some aspects of the universe are truly unknowable by any agent in the universe, and even more so on whether that means "randomness is inherent" or "randomness is a modeling tool". Yes, this means I'm agnostic on MWI vs Copenhagen, as I can't define "true" on that level (though I generally use MWI for reasoning, as I find it easier). That framing helps me remember that it's a modelling choice, not a fact about the universe(s).

In practice, probability is a modeling and prediction tool, and works pretty much the same for all kinds of uncertainty: contingent (which logically-allowed way does this universe behave), indexical (which set of possible experiences in this universe am I having) and logical (things that must be so but I don't know which way). There are probably edge cases where the difference between these matters, but I don't know of any that I expect to be resolved by foreseeable humans or our creations.

My pretty strong belief is that 1/2 is easier to explain and work with - the coin is fair and Beauty has no new information.  And that 1/3 is justified if you are predicting "weight" of experience, and the fact that tails will be experienced twice as often.  But mostly I'm rather sure that anyone who believes that their preference is the right model is in the wrong (on that part of the question).  

They're "doing epistemology wrong" no more than you.  Thinking either choice is best is justified.  Thinking the other choice is wrong is itself wrong.

So how do you actually use probability to make decisions? There's a well-established decision theory that takes probabilities as inputs, and produces a decision in some situation (eg, a bet). It will (often) produce different decisions when given 1/2 versus 1/3 as the probability of Heads. Which of these two decisions should you act on?

So how do you actually use probability to make decisions?

I think about what model fits the needs, roughly multiply payouts by probability estimates, then do whatever feels right in the moment.

I’m not sure that resolves any of these questions, since choice of model for different purposes is the main crux.

But the whole point of using probability to express uncertainty about the world is that the probabilities do not depend on the purpose. 

If there are N possible observations, and M binary choices that you need to make, then a direct strategy for how to respond to an observation requires a table of size NxM, giving the actions to take for each possible observation. And you somehow have to learn this table.

In contrast, if the M choices all depend on one binary state of the world, you just need to have a table of probabilities of that state for each of the N observations, and a table of the utilities for the four action/state combinations for the M decisions - which have size proportional to N+M, much smaller than NxM for large N and M. You only need to learn the N probabilities (perhaps the utilities are givens).

And in reality, trying to make decisions without probabilities is even worse than it seems from this, since the set of decisions you may need to make is indefinitely large, and the number of possible observations is enormous. But avoiding having to make decisions by a direct observation->action table requires that probabilities have meaning independent of what decision you're considering at the moment. You can't just say that it could be 1/2, or could be 1/3...
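To make the size argument concrete, here is a sketch (all names and sizes are hypothetical, not from the comment): N probabilities of one binary world state plus a 2x2 utility table per decision, with each action recovered by expected utility rather than looked up in an N-by-M table.

```python
import random

# Hypothetical sizes: N observations, M binary decisions about one binary state.
N, M = 1000, 50

# N numbers: P(state | observation). Placeholder values for the sketch.
p_state = [random.random() for _ in range(N)]

# Per decision, four utilities: utility[d][action][state]. Placeholders again.
utility = [[[random.uniform(-1.0, 1.0) for _ in range(2)] for _ in range(2)]
           for _ in range(M)]

def choose(obs: int, d: int) -> int:
    """O(N + M) storage replaces an O(N * M) observation->action table."""
    p = p_state[obs]
    # Expected utility of each action under the same probability estimate.
    eu = [(1 - p) * utility[d][a][0] + p * utility[d][a][1] for a in (0, 1)]
    return 0 if eu[0] >= eu[1] else 1
```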

probabilities do not depend on the purpose. 

I think this is a restatement of the crux.  OF COURSE the model chosen depends on the purpose of the model.  For probabilities, the choice of reference class for a given prediction/measurement is key.  For Sleeping Beauty specifically, the choice of whether an experientially-irrelevant wakening (which is immediately erased and has no impact) is distinct from another is a modeling choice.

Either choice of probability model can answer either wagering question, simply by applying the weights to the payoffs if they're not already part of the probability.

Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method, that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.

To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose, or how you happen to feel about the problem today", but is instead definitely, no doubt about it, 1/3, consider the following possibility:

Upon wakening, Beauty see that there is a plate of fresh muffins beside her bed. She recognizes them as coming from a nearby cafe. She knows that they are quite delicious. She also knows that, unfortunately, the person who makes them on Mondays puts in an ingredient that she is allergic to, which causes a bad tummy ache. Muffins made on Tuesday taste the same, but don't cause a tummy ache. She needs to decide whether to eat a muffin, weighing the pleasure of their taste against the possibility of a subsequent tummy ache.

If Beauty thinks the probability of Heads is 1/2, she presumably thinks the probability that it is Monday is (1/2)+(1/2)*(1/2)=3/4, whereas if she thinks the probability of Heads is 1/3, she will think the probability that it is Monday is (1/3)+(1/2)*(2/3)=2/3. Since 3/4 is not equal to 2/3, she may come to a different decision about whether to eat a muffin if she thinks the probability of Heads is 1/2 than if she thinks it is 1/3 (depending on how she weighs the pleasure versus the pain). Her decision should not depend on some arbitrary "reference class", or on what bets she happens to be deciding whether to make at the same time. She needs a real probability. And on various grounds, that probability is 1/3.
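A minimal sketch of that muffin decision (the taste and tummy-ache magnitudes below are invented for illustration, not part of the original example) shows the two credences can recommend opposite actions for the same payoffs:

```python
# Eat the muffin iff expected pleasure outweighs the expected tummy ache.
def eat_muffin(p_heads: float, taste: float, ache: float) -> bool:
    # P(Monday) = P(Heads)*1 + P(Tails)*(1/2), as computed above.
    p_monday = p_heads + (1.0 - p_heads) * 0.5
    return taste > p_monday * ache

print(eat_muffin(1/2, taste=7.0, ache=10.0))  # False: 7 < 3/4 * 10 = 7.5
print(eat_muffin(1/3, taste=7.0, ache=10.0))  # True:  7 > 2/3 * 10 ≈ 6.67
```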

Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method, that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.

Completely agree. The generally applicable method is:

  1. Understand what probability experiment is going on, based on the description of the problem.
  2. Construct the sample space from mutually exclusive outcomes of this experiment.
  3. Construct the event space based on the sample space, such that it is minimal and sufficient to capture all the events that the participant of the experiment can observe.
  4. Define probability as a measure function over the event space, such that:
  • the sum of the probabilities of events consisting of individual mutually exclusive and collectively exhaustive outcomes is equal to 1, and
  • if an event has probability 1/a, then this event happens on average N/a times over N repetitions of the probability experiment, for any large N.

Naturally, this produces the answer 1/2 for the Sleeping Beauty problem.
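The frequency condition in step 4 is easy to check by simulation (my own sketch, not from the comment): Heads occurs in about half of the repetitions of the experiment as a whole, even though only about a third of awakenings are Heads-awakenings, which is the crux between the two counting conventions.

```python
import random

# Count Heads per experiment vs per awakening over many repetitions.
N = 100_000
heads_runs = 0
heads_awakenings = 0
total_awakenings = 0
for _ in range(N):
    heads = random.random() < 0.5
    heads_runs += heads
    total_awakenings += 1 if heads else 2   # Heads: Mon only; Tails: Mon + Tue
    heads_awakenings += 1 if heads else 0
print(heads_runs / N)                        # ~1/2 of experiments are Heads
print(heads_awakenings / total_awakenings)   # ~1/3 of awakenings are Heads
```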

If Beauty thinks the probability of Heads is 1/2, she presumably thinks the probability that it is Monday is (1/2)+(1/2)*(1/2)=3/4

This is a description of Lewisian Halfism reasoning, which is incorrect for the Sleeping Beauty problem.

I describe the way the Beauty is actually supposed to reason about a betting scheme on a particular day here.

She needs a real probability.

Indeed. And the domain of a real probability function is an event space, consisting of properly defined events for the probability experiment. "Today is Monday" is ill-defined in the Sleeping Beauty setting. Therefore it can't have a probability.

[ bowing out after this - I'll read responses and perhaps update on them, but probably won't respond (until next time) ]

To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose

I disagree.  Very specifically, it's 1/2 if your reference class is "fair coin flips" and 1/3 if your reference class is "temporary, to-be-erased experience of victims with adversarial memory problems".  

If your reference class is "wakenings who are predicting what day it is", as in the muffin variety, then 1/3 is a bit easier to work with (though you'd need to specify payoffs to explain why she'd EVER eat the muffin, and then 1/2 becomes pretty easy too). This is roughly equivalent to the non-memory-wiping wager: I'll flip a fair coin, you predict heads or tails. If it's heads, the wager will be $1; if it's tails, the wager is $2. The probability of tails is not 2/3, but you'd pay up to $0.50 to play, right?

OK, I'll end by just summarizing that my position is that we have probability theory, and we have decision theory, and together they let us decide what to do. They work together. So for the wager you describe above, I get probability 1/2 for Heads (since it's a fair coin), and because of that, I decide to pay anything less than $0.50 to play. If I thought that the probability of heads was 0.4, I would not pay anything over $0.20 to play. You make the right decision if you correctly assign probabilities and then correctly apply decision theory. You might also make the right decision if you do both of these things incorrectly (your mistakes might cancel out), but that's not a reliable method. And you might also make the right decision by just intuiting what it is. That's fine if you happen to have good intuition, but since we often don't, we have probability theory and decision theory to help us out.

One of the big ways probability and decision theory help is by separating the estimation of probabilities from their use to make decisions. We can use the same probabilities for many decisions, and indeed we can think about probabilities before we have any decision to make that they will be useful for. But if you entirely decouple probability from decision-making, then there is no longer any basis for saying that one probability is right and another is wrong - the exercise becomes pointless. The meaningful justification for a probability assignment is that it gives the right answer to all decision problems when decision theory is correctly applied. 

As your example illustrates, correct application of decision theory does not always lead to you betting at odds that are naively obtained from probabilities. For the Sleeping Beauty problem, correctly applying decision theory leads to the right decisions in all betting scenarios when Beauty thinks the probability of Heads is 1/3, but not when she thinks it is 1/2.

[ Note that, as I explain in my top-level answer in this post, Beauty is an actual person. Actual people do not have identical experiences on different days, regardless of whether their memory has been erased. I suspect that the contrary assumption is lurking in the background of your thinking that somehow a "reference class" is of relevance. ]

I have a prediction market for this. There are papers in the description, which I review in the comments.

PLEASE tell me there's a version that asks "is the answer 1/2", and that it currently has a price of 33%!

If the SB always guesses heads, she'll be correct 1/3 of the time. For that reason, that is her credence.