It ultimately depends on how you define probabilities, and it is possible to define them such that the answer is 1/2.
I personally think that the only "good" definition (I'll specify this more at the end) is that a probability of 1/4 should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism", where we use the idea of repeated experiments to define probabilities, but generalize the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence). So when you flip a coin, the experiment is not the mathematical coin with two equally likely outcomes, but the situation where you as an agent are flipping a physical coin, which may include a 0.01% probability of landing on the side, or some probability of breaking in two halves mid-air, or whatever. But the probability for it coming up heads should be about 50%, because in about 50% of cases where you as an agent are about to flip a physical coin, you subsequently observe it coming up heads.
There are difficulties here with defining the reference class, but I think they can be adequately addressed, and anyway, those don't matter for the Sleeping Beauty experiment because there, the reference class is actually really straightforward. Among the times that you as an agent are participating in the experiment and are woken up and interviewed (and are called Sleeping Beauty, if you want to include this in the reference class), one third will have the coin heads, so the probability is 1/3. This is true regardless of whether the experiment is run repeatedly throughout history, or repeatedly because of Many Worlds, or an infinite universe, etc. (And I think the very few cases in which there is genuinely not a repeated experiment are in fact qualitatively different, since now we're talking logical uncertainty rather than probability, and this distinction is how you can answer 1/3 in Sleeping Beauty without being forced to give the analogous answer on the Presumptuous Philosopher problem.)
So RE this being the only "good" definition, well one thing is that it fits betting odds, but I also suspect that most smart people would eventually converge on an interpretation with these properties if they thought long enough about the nature of probability and implications of having a different definition, though obviously I can't prove this. I'm not aware of any case where I want to define probability differently, anyway.
So in this case, I agree that if this experiment is repeated multiple times and every Sleeping Beauty version created answered tails, the reference class of Sleeping Beauty agents would have many more correct answers than if the experiment is repeated many times and every Sleeping Beauty created answered heads.
I think there's something tangible here and I should reflect on it.
I separately think though that if the actual outcome of each coin flip was recorded, there would be a roughly equal distribution between heads and tails.
And when I was thinking through the question before it was always about trying to answer a question regarding the actual outcome of the coin flip and not what strategy maximises monetary payoffs under even bets.
While I do think that betting odds aren't convincing re: actual probabilities (because you can just have asymmetric payoffs on equally probable, mutually exclusive, and jointly exhaustive events), the "reference class of agents being asked this question" seems like a more robust rebuttal.
I want to take some time to think on this.
Strong upvoted because this argument actually/genuinely makes me think I might be wrong here.
Much less confident now, and mostly confused.
I separately think though that if the actual outcome of each coin flip was recorded, there would be a roughly equal distribution between heads and tails.
Importantly, this is counting each coinflip as the "experiment", whereas the above counts each awakening as the "experiment". It's okay that different experiments would see different outcome frequencies.
I personally think that the only "good" definition (I'll specify this more at the end) is that a probability of 1/4 should occur one in four times in the relevant reference class. I've previously called this view "generalized frequentism", where we use the idea of repeated experiments to define probabilities, but generalize the notion of "experiment" to subsume all instances of an agent with incomplete information acting in the real world (hence subsuming the definition as subjective confidence).
Why do you suddenly substitute the notion of ...
What is going to be done with these numbers? If Sleeping Beauty is to gamble her money, she should accept the same betting odds as a thirder. If she has to decide which coinflip result kills her, she should be ambivalent like a halfer.
Halfer makes sense if you pre-commit to a single answer before the coin-flip, but not if you are making the decisions independently after each wake-up event. If you say heads, you have a 50% chance of surviving when asked on Monday, and a 0% chance of surviving when asked on Tuesday. If you say tails, you have a 50% chance of surviving Monday and a 100% chance of surviving Tuesday.
Betting arguments are tangential here.
https://www.lesswrong.com/posts/cvCQgFFmELuyord7a/beauty-and-the-bets
The disagreement is how to factorise the expected utility function into probability and utility, not which bets to make. This disagreement is still tangible, because the way you define your functions has meaningful consequences for your mathematical reasoning.
I mean I think the "gamble her money" interpretation is just a different question. It doesn't feel to me like a different notion of what probability means, but rather like betting on a fair coin with asymmetric payoffs.
The second question feels closer to an accurate interpretation of what probability means.
I would frame the question as "What is the probability that you are in heads-space?", not "What is the probability of heads?". The probability of heads is 1/2, but the probability that I am in heads-space, given I've just experienced a wake-up event, is 1/3.
The wake-up event is only equally likely under Heads and Tails on Monday. On Tuesday, it happens with probability 0% under Heads and 100% under Tails. We don't know whether it is Tuesday or not, but we know there is some chance of it being Tuesday, because 1/3 of wake-up events happen on Tuesday, and we've just experienced a wake-up event:
P(Monday|wake-up) = 2/3
P(Tuesday|wake-up) = 1/3
P(Heads|Tuesday) = 0/1
P(Heads|Monday) = 1/2
P(Heads|wake-up) = P(Heads|Monday) * P(Monday|wake-up) + P(Heads|Tuesday) * P(Tuesday|wake-up) = 1/3
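A minimal Monte Carlo sketch of the same calculation (assuming the standard setup: fair coin, one awakening on Heads, two on Tails; names and counts are illustrative only):

```python
import random

def fraction_of_heads_awakenings(n_flips=100_000):
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_flips):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2   # Monday only vs. Monday + Tuesday
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(fraction_of_heads_awakenings())    # ~0.333: one third of wake-up events occur under Heads
```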
Thirder here (with acknowledgement that the real answer is to taboo 'probability' and figure out why we actually care)
The subjective indistinguishability of the two Tails wakeups is not a counterargument - it's part of the basic premise of the problem. If the two wakeups were distinguishable, being a halfer would be the right answer (for the first wakeup).
Your simplified examples/analogies really depend on that fact of distinguishability: since you didn't specify whether or not it holds in your examples, the payoff structure is underdetermined.
I'll also note you are being a little loose with your notion of 'payoff'. You are calculating the payoff for the entire experiment, whereas I define the 'payoff' as being the odds being offered at each wakeup. (since there's no rule saying that Beauty has to bet the same each time!)
To be concise, here's my overall rationale:
Upon each (indistinguishable) wakeup, you are given the following offer:
If you believe betting T yields a higher EV, then you have a credence P(Tails) > N/(N+1).
You get a positive EV for all N up to 2, so P(Tails) ≥ 2/3. Thus you should be a thirder.
Here's a clarifying example where this interpretation becomes more useful than yours:
The experimenter flips a second coin. If the second coin is Heads (H2), then N= 1.50 on Monday and 2.50 on Tuesday. If the second coin is Tails, then the order is reversed.
I'll maximize my EV if I bet T when N < 2, and H when N > 2. Both of these fall cleanly out of 'thirder' logic.
What's the 'halfer' story here? Your earlier logic doesn't allow for separate bets on each awakening.
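Since the payoff structure of the offer is only sketched above, here is a rough simulation under one assumed reading (a Tails bet wins $1 and loses $N, a Heads bet wins $N and loses $1, available at every awakening; all names and amounts are my own stand-ins). The only point is that the "T when N < 2, H when N > 2" rule comes out ahead of betting a fixed way.

```python
import random

def run(strategy, n_flips=200_000):
    total = 0.0
    for _ in range(n_flips):
        heads = random.random() < 0.5              # the coin being asked about
        order = [1.50, 2.50] if random.random() < 0.5 else [2.50, 1.50]
        days = [0] if heads else [0, 1]            # Monday only vs. Monday + Tuesday
        for day in days:
            n = order[day]                         # the odds shown at this awakening
            if strategy(n) == "T":
                total += 1 if not heads else -n    # Tails bet: win $1 or lose $N
            else:
                total += n if heads else -1        # Heads bet: win $N or lose $1
    return total / n_flips

print(run(lambda n: "T"))                          # ~0
print(run(lambda n: "H"))                          # ~0
print(run(lambda n: "T" if n < 2 else "H"))        # ~+0.25, the best of the three
```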
The question "What is the probability of Heads?" is about the coin, not about your location in time or possible worlds.
This is, I think, the key thing that those smart people disagree with you about.
Suppose Alice and Bob are sitting in different rooms. Alice flips a coin and looks at it - it's Heads. What is the probability that the coin is Tails? Obviously, it's 0% right? That's just a fact about the coin. So I go to Bob in the other room and ask Bob what's the probability the coin is Tails, and Bob tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the coin. Since it was already flipped the probability was already either 0% or 100%, and maybe if you didn't know which it was you should just say you can't assign a probability or something."
Now, suppose there are two universes that differ only by the polarization of a photon coming from a distant star, due to hit Earth in a few hours. And I go into the universe where that polarization is left-handed (rather than right-handed), and in that universe the probability that the photon is right-handed is 0% - it's just a fact about the photon. So I go to the copy of Carol that lives in this universe and ask Carol what's the probability the photon has right-handed polarization, and Carol tells me it's 50%, and I say "Wrong, you've failed to know a basic fact about the photon. Since it's already on its way the probability was already either 0% or 100%, and maybe if you didn't know which it was you should just say you can't assign a probability or something."
Now, suppose there are two universes that differ outside of the room that Dave is currently in, but are the same within Dave's room. Say, in one universe all the stuff outside the room is arranged as it is today in our universe, while in the other universe all the stuff outside the room is arranged as it was ten years ago. And I go into the universe where all the stuff outside the room is arranged as it was ten years ago, which I will shorthand as it being 2014 (just a fact about calendars, memories, the positions of galaxies, etc.), and ask Dave what's the probability that the year outside is 2024, and Dave tells me it's 50%...
I mean I am not convinced by the claim that Bob is wrong.
Bob's prior probability is 50%. Bob sees no new evidence to update this prior so the probability remains at 50%.
I don't favour an objective notion of probabilities. From my OP:
...2. Bayesian Reasoning
- Probability is a property of the map (agent's beliefs), not the territory (environment).
- For an observation O to be evidence for a hypothesis H, P(O|H) must be > P(O|¬H).
- The wake-up event is equally likely under both Heads and Tails scenarios, thus provides no new information to update priors.
- The o
You need to start by clearly understanding that the Sleeping Beauty Problem is almost realistic - it is close to being actually doable. We often forget things. We know of circumstances (eg, head injury) that cause us to forget things. It would not be at all surprising if the amnesia drug needed for the scenario to actually be carried out were discovered tomorrow. So the problem is about a real person. Any answer that starts with "Suppose that Sleeping Beauty is a computer program..." or otherwise tries to divert you away from regarding Sleeping Beauty as a real person is at best answering some other question.
Second, the problem asks what probability of Heads Sleeping Beauty should have on being interviewed after waking. This of course means what probability she should rationally have. This question makes no sense if you think of probabilities as some sort of personal preference, like whether you like chocolate ice cream or not. Probabilities exist in the framework of probability theory and decision theory. Probabilities are supposed to be useful for making decisions. Personal beliefs come into probabilities through prior probabilities, but for this problem, the relevant prior beliefs are supposed to be explicitly stated (eg, the coin is fair). Any answer that says "It depends on how you define probabilities", or "It depends on what reference class you use", or "Probabilities can't be assigned in this problem" is just dodging the question. In real life, you can't just not decide what to do on the basis that it would depend on your reference class or whatever. Real life consists of taking actions, based on probabilities (usually not explicitly considered, of course). You don't have the option of not acting (since no action is itself an action).
Third, in the standard framework of probability and decision theory, your probabilities for different states of the world do not depend on what decisions (if any) you are going to make. The same probabilities can be used for any decision. That is one of the great strengths of the framework - we can form beliefs about the world, and use them for many decisions, rather than having to separately learn how to act on the basis of evidence for each decision context. (Instincts like pulling our hand back from a hot object are this sort of direct evidence->action connection, but such instincts are very limited.) Any answer that says the probabilities depend on what bets you can make is not using probabilities correctly, unless the setup is such that the fact that a bet is offered is actual evidence for Heads versus Tails.
Of course, in the standard presentation, Sleeping Beauty does not make any decisions (other than to report her probability of Heads). But for the problem to be meaningful, we have to assume that Beauty might make a decision for which her probability of Heads is relevant.
So, now the answer... It's a simple Bayesian problem. On Sunday, Beauty thinks the probability of Heads is 1/2 (ie, 1-to-1 odds), since it's a fair coin. On being woken, Beauty knows that Beauty experiences an awakening in which she has a slight itch in her right big toe, two flies are crawling towards each other on the wall in front of her, a Beatles song is running through her head, the pillow she slept on is half off the bed, the shadow of the sun shining on the shade over the window is changing as the leaves in the tree outside rustle due to a slight breeze, and so forth. Immediately on wakening, she receives numerous sensory inputs. To update her probability of Heads in Bayesian fashion, she should multiply her prior odds of Heads by the ratio of the probability of her sensory experience given Heads to the probability of her experience given Tails.
The chance of receiving any particular set of such sensory inputs on any single wakening is very small. So the probability that Beauty has this particular experience when there are two independent wakenings is very close to twice that small probability. The ratio of the probability of experiencing what she knows she is experiencing given Heads to that probability given Tails is therefore 1/2, so she updates her odds in favour of Heads from 1-to-1 to 1-to-2. That is, Heads now has probability 1/3.
(Not all of Beauty's experiences will be independent between awakenings - eg, the colour of the wallpaper may be the same - but this calculation goes through as long as there are many independent aspects, as will be true for any real person.)
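A rough numerical sketch of this odds update, with an arbitrary stand-in value for the (tiny) chance of any one particular bundle of sensory details on a single awakening:

```python
p = 1e-9                                         # assumed tiny chance of one particular experience per awakening
p_experience_given_heads = p                     # Heads: one awakening
p_experience_given_tails = 1 - (1 - p) ** 2      # Tails: two independent awakenings, ~2p
likelihood_ratio = p_experience_given_heads / p_experience_given_tails
posterior_odds_heads = 1 * likelihood_ratio      # prior odds of Heads are 1-to-1
print(posterior_odds_heads / (1 + posterior_odds_heads))   # ~0.333
```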
The 1/3 answer works. Other answers, such as 1/2, do not work. One can see this by looking at how probabilities should change and at how decisions (eg, bets) should be made.
For example, suppose that after wakening, Beauty says that her probability of Heads is 1/2. It also happens that, in an inexcusable breach of experimental protocol, the experimenter interviewing her drops her phone in front of Beauty, and the phone display reveals that it is Monday. How should Beauty update her probability of Heads? If the coin landed Heads, it is certain to be Monday. But if the coin landed Tails, there was only a probability 1/2 of it being Monday. So Beauty should multiply her odds of Heads by 2, giving a 2/3 probability of Heads.
But this is clearly wrong. Knowing that it is Monday eliminates any relevance of the whole wakening/forgetting scheme. The probability of Heads is just 1/2, since it's a fair coin. Note that if Beauty had instead thought the probability of Heads was 1/3 before seeing the phone, she would correctly update to a probability of 1/2.
Some Halfers, when confronted with this argument, maintain that Beauty should not update her probability of Heads when seeing the phone, leaving it at 1/2. But as the phone was dropping, before she saw the display, Beauty would certainly not think that it was guaranteed to show that it is Monday (Tuesday would seem possible). So not updating is unreasonable.
We also see that 1/2 does not work in betting scenarios. I'll just mention the simplest of these. Suppose that when Beauty is woken, she is offered a bet in which she will win $12 if the coin landed Heads, and lose $10 if the coin landed Tails. She knows that she will always be offered such a bet after being woken, so the offer does not provide any evidence for Heads versus Tails. If she is woken twice, she is given two opportunities to bet, and could take either, both, or neither. Should she take the offered bet?
If Beauty thinks that the probability of Heads is 1/2, she will take such bets, since she thinks that the expected payoff of such a bet is (1/2)*12-(1/2)*10=1. But she shouldn't take these bets, since following the strategy of taking these bets has an expected payoff of (1/2)*12 - (1/2)*2*10 = -4. In contrast, if Beauty thinks the probability of Heads is 1/3, she will think the expected payoff from a bet is (1/3)*12-(2/3)*10=-2.666... and not take it.
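A quick simulation of the "always take the bet" strategy under the stated payoffs (win $12 on Heads, lose $10 on Tails, offered at every awakening) confirms the −4 figure; the function name and trial count are illustrative:

```python
import random

def average_payoff(n_flips=100_000):
    total = 0
    for _ in range(n_flips):
        if random.random() < 0.5:    # Heads: one awakening, one bet won
            total += 12
        else:                        # Tails: two awakenings, two bets lost
            total -= 2 * 10
    return total / n_flips

print(average_payoff())              # ~ -4 per experiment, as computed above
```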
Note that Beauty is a real person. She is not a computer program that is guaranteed to make the same decision in all situations where the "relevant" information is the same. It is possible that if the coin lands Tails, and Beauty is woken twice, she will take the bet on one awakening, and refuse the bet on the other awakening. Her decision when woken is for that awakening alone. She makes the right decisions if she correctly applies decision theory based on the probability of Heads being 1/3. She makes the wrong decision if she correctly applies decision theory with the wrong probability of 1/2 for Heads.
She can also make the right decision by incorrectly applying decision theory with an incorrect probability for Heads, but that isn't a good argument for that incorrect probability.
If the experiment instead was constructed such that Sleeping Beauty, in the case of tails, is woken up and interviewed only on Tuesday (not on Monday):
In this case it is "obvious" that the halfer position is the right choice. So why would it be any different if Sleeping Beauty in the case of tails is awakened on Monday too, since in this experiment she has zero recollection of that event? It does not matter how many other people they have woken up before the day she is woken up; she has NO new information that could update her beliefs.
Or say the experiment was instead constructed so that, for tails, she would be woken up and interviewed 999999 days in a row; would she then say, upon being woken up, that the probability that the coin landed heads is 1/1000000?
If the first sister's experience is equivalent to the original Sleeping Beauty problem, then wouldn't the second sister's experience also have to be equivalent by the same logic? And, of course, the second sister will give 100% odds to it being Monday.
Suppose we run the sister experiment, but somehow suppress their memories of which sister they are. If they each reason that there's a two-thirds chance that they're the first sister, since their current experience is certain for her but only 50% likely for the second sister, then their odds of i...
If you look over all possible worlds, then asking "did the coin come up Heads or Tails" as if there's only one answer is incoherent. If you look over all possible worlds, there's a ~100% chance the coin comes up as Heads in at least one world, and a ~100% chance the coin comes up as Tails in at least one world.
But from the perspective of a particular observer, the question they're trying to answer is a question of indexical uncertainty - out of all the observers in their situation, how many of them are in Heads-worlds, and how many of them are in Tails-worlds? It's true that there are equally as many Heads-worlds as Tails-worlds - but 2/3 of observers are in the latter worlds.
Or to put it another way - suppose you put 10 people in one house, and 20 people in another house. A given person should estimate a 1/3 chance that they're in the first house - and the fact that 1 house is half of 2 houses is completely irrelevant. Why should this reasoning be any different just because we're talking about possible universes rather than houses?
"What is your credence now for the proposition that the coin landed heads?"
There are three doors. Two are labeled Monday, and one is labeled Tuesday. Behind each door is a Sleeping Beauty. In a waiting room, many (finite) more Beauties are waiting; every time a Beauty is anesthetized, a coin is flipped and taped to their forehead with clear tape. You open all three doors, the Beauties wake up, and you ask the three Beauties The Question. Then they are anesthetized, the doors are shut, and any Beauties with a Heads showing on their foreheads or behind a Tuesday door are wheeled away after the coin is removed from their forehead. The Beauty with a Tails on their forehead behind the Monday door is wheeled behind the Tuesday door. Two new Beauties are wheeled behind the two Monday doors, one with Heads and one with Tails. The experiment repeats.
You observe that Tuesday Beauties always have a Tails taped to their forehead. You always observe that one Monday Beauty has a Tails showing, and one has a Heads showing. You also observe that every Beauty says 1/3, matching the ratio of Heads to Tails showing, and it is apparent that they can't see the coins taped to their own or each other's foreheads or the door they are behind. Every Tails Beauty is questioned twice. Every Heads Beauty is questioned once. You can see all the steps as they happen, there is no trick, every coin flip has 1/2 probability for Heads.
There is eventually a queue of Waiting Sleeping Beauties with all-Heads or all-Tails showing and a new Beauty must be anesthetized with a new coin; the queue length changes over time and sometimes switches face. You can stop the experiment when the queue is empty, as a random walk guarantees to happen eventually, if you like tying up loose ends.
I prefer to just think about utility, rather than probabilities. Then you can have 2 different "incentivized sleeping beauty problems": in the first, you are paid for each correct answer at every awakening; in the second, you are paid once per coin flip for a correct answer.
In the first case, 1/3 maximizes your money, in the second case 1/2 maximizes it.
To me this implies that in real world analogues to the Sleeping Beauty problem, you need to ask whether your reward is per-awakening or per-world, and answer accordingly
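A minimal sketch of the two games, under my assumed payoffs (a dollar per correct guess, counted per awakening in the first game and once per coin flip in the second); the names and amounts are illustrative:

```python
import random

def play(guess, per_awakening, n_flips=100_000):
    total = 0
    for _ in range(n_flips):
        heads = random.random() < 0.5
        correct = (guess == "heads") == heads
        awakenings = 1 if heads else 2
        if per_awakening:
            total += awakenings * int(correct)   # paid at every awakening
        else:
            total += int(correct)                # paid once per coin flip / world
    return total / n_flips

for per_awakening in (True, False):
    print(per_awakening, play("heads", per_awakening), play("tails", per_awakening))
# per-awakening: guessing tails earns ~1.0 vs ~0.5 (the thirder-style answer pays more);
# per-world:     both guesses earn ~0.5 (the halfer-style answer does just as well).
```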
That argument just shows that, in the second betting scenario, Beauty should say that her probability of Heads is 1/2. It doesn't show that Beauty's actual internal probability of Heads should be 1/2. She's incentivized to lie.
EDIT: Actually, on considering further, Beauty probably should not say that her probability of Heads is 1/2. She should probably use a randomized strategy, picking what she says from some distribution (independently for each wakening). The distribution to use would depend on the details of what the bet/bets is/are.
Alternatively I started out confused.
Debating this problem here and with LLMs convinced me that I'm not confused and the thirders are actually just doing epistemological nonsense.
It feels arrogant, but it's not a poor reflection of my epistemic state?
Welcome to the club.
I have read some of the LW posts on the canonical problem here. I won't be linking them due to laziness.
I suppose my posts are among the ones that you are talking about here?
Hijacking this thread, has anybody worked through Ape in the coat's anthropic posts and understood / gotten stuff out of them? It's something I might want to do sometime in my copious free time but haven't worked up to it yet.
- Rebuttal: This confuses expected value with probability. The betting strategy is optimal due to the asymmetric nature of the payoffs (betting twice on Tails vs. once on Heads), not because Tails is more likely. The underlying probability of the coin flip remains 50/50, regardless of the betting structure.
(This is not a rhetorical question:) What do you mean by "probability" here? A common way of arguing for "having probabilities" is that it's how you make consistent bets -- bets that aren't obviously leaving utility on the table (e.g. Dutch bookable). But you're dismissing arguments of the form [I want to bet like this] -> [therefore my probabilities should be such and such].
I would think that what we're learning is that there's some sort of equivalence principle or something, where it becomes hard to disentangle [I care about my actions in this information-set twice as much] from the allegedly more narrow [This information-set is "truly twice as likely"]. See probutilities.
An answer might be "The world happens to be the case that there pretty strongly tends to be a bunch of stuff that's external to you, which isn't correlated with the size of your information-sets (i.e. how many instances of you there are who you can't distinguish yourself from). That stuff is what we call "reality" and what we have "probabilities" about.". But that doesn't seem like a very fundamental notion, and would break down in some cases [citation needed].
(This is not a rhetorical question:) What do you mean by "probability" here?
Yeah, since posting this question:
I have updated towards thinking that it's in a sense not obvious/not clear what exactly "probability" is supposed to be interpreted as here.
And once you pin down an unambiguous interpretation of probability the problem dissolves.
I had a firm notion in mind for what I thought probability meant. But Rafael Harth's answer really made me unconfident that the notion I had in mind was the right notion of probability for the question.
I think the question is underdefined. Some bets are posed once per instance of you, some bets are posed once per instance of a world (whatever that means), etc.
I have read and participated in many of these debates, and it continually frustrates me that people use the word "probability" AS IF it were objective and a property of the territory, when your Bayesian tenet, "Probability is a property of the map (agent's beliefs), not the territory (environment)", is binding in every case I can think of. I'm actually agnostic on whether some aspects of the universe are truly unknowable by any agent in the universe, and even more so on whether that means "randomness is inherent" or "randomness is a modeling tool". Yes, this means I'm agnostic on MWI vs Copenhagen, as I can't define "true" on that level (though I generally use MWI for reasoning, as I find it easier; that framing helps me remember that it's a modelling choice, not a fact about the universe(s)).
In practice, probability is a modeling and prediction tool, and works pretty much the same for all kinds of uncertainty: contingent (which logically-allowed way does this universe behave), indexical (which set of possible experiences in this universe am I having) and logical (things that must be so but I don't know which way). There are probably edge cases where the difference between these matters, but I don't know of any that I expect to be resolved by foreseeable humans or our creations.
My pretty strong belief is that 1/2 is easier to explain and work with - the coin is fair and Beauty has no new information. And that 1/3 is justified if you are predicting "weight" of experience, and the fact that tails will be experienced twice as often. But mostly I'm rather sure that anyone who believes that their preference is the right model is in the wrong (on that part of the question).
They're "doing epistemology wrong" no more than you. Thinking either choice is best is justified. Thinking the other choice is wrong is itself wrong.
So how do you actually use probability to make decisions? There's a well-established decision theory that takes probabilities as inputs, and produces a decision in some situation (eg, a bet). It will (often) produce different decisions when given 1/2 versus 1/3 as the probability of Heads. Which of these two decisions should you act on?
So how do you actually use probability to make decisions?
I think about what model fits the needs, roughly multiply payouts by probability estimates, then do whatever feels right in the moment.
I’m not sure that resolves any of these questions, since choice of model for different purposes is the main crux.
But the whole point of using probability to express uncertainty about the world is that the probabilities do not depend on the purpose.
If there are N possible observations, and M binary choices that you need to make, then a direct strategy for how to respond to an observation requires a table of size NxM, giving the actions to take for each possible observation. And you somehow have to learn this table.
In contrast, if the M choices all depend on one binary state of the world, you just need to have a table of probabilities of that state for each of the N observations, and a table of the utilities for the four action/state combinations for the M decisions - which have size proportional to N+M, much smaller than NxM for large N and M. You only need to learn the N probabilities (perhaps the utilities are givens).
And in reality, trying to make decisions without probabilities is even worse than it seems from this, since the set of decisions you may need to make is indefinitely large, and the number of possible observations is enormous. But avoiding having to make decisions by a direct observation->action table requires that probabilities have meaning independent of what decision you're considering at the moment. You can't just say that it could be 1/2, or could be 1/3...
probabilities do not depend on the purpose.
I think this is a restatement of the crux. OF COURSE the model chosen depends on the purpose of the model. For probabilities, the choice of reference class for a given prediction/measurement is key. For Sleeping Beauty specifically, the choice of whether an experientially-irrelevant wakening (which is immediately erased and has no impact) is distinct from another is a modeling choice.
Either choice for probability modeling can answer either wagering question, simply by applying the weights to the payoffs if it's not already part of the probability
Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method, that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.
To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose, or how you happen to feel about the problem today", but is instead definitely, no doubt about it, 1/3, consider the following possibility:
Upon wakening, Beauty see that there is a plate of fresh muffins beside her bed. She recognizes them as coming from a nearby cafe. She knows that they are quite delicious. She also knows that, unfortunately, the person who makes them on Mondays puts in an ingredient that she is allergic to, which causes a bad tummy ache. Muffins made on Tuesday taste the same, but don't cause a tummy ache. She needs to decide whether to eat a muffin, weighing the pleasure of their taste against the possibility of a subsequent tummy ache.
If Beauty thinks the probability of Heads is 1/2, she presumably thinks the probability that it is Monday is (1/2)+(1/2)*(1/2)=3/4, whereas if she thinks the probability of Heads is 1/3, she will think the probability that it is Monday is (1/3)+(1/2)*(2/3)=2/3. Since 3/4 is not equal to 2/3, she may come to a different decision about whether to eat a muffin if she thinks the probability of Heads is 1/2 than if she thinks it is 1/3 (depending on how she weighs the pleasure versus the pain). Her decision should not depend on some arbitrary "reference class", or on what bets she happens to be deciding whether to make at the same time. She needs a real probability. And on various grounds, that probability is 1/3.
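As a quick numerical companion to the calculation above, here is a minimal sketch (standard setup assumed, names illustrative) counting what fraction of awakenings fall on Monday; the per-awakening count comes out at the thirder's 2/3.

```python
import random

mondays = awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    days = ["Mon"] if heads else ["Mon", "Tue"]
    for day in days:
        awakenings += 1
        mondays += (day == "Mon")

print(mondays / awakenings)   # ~2/3 of awakenings are Mondays
```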
Sure. By tweaking your "weights" or other fudge factors, you can get the right answer using any probability you please. But you're not using a generally-applicable method, that actually tells you what the right answer is. So it's a pointless exercise that sheds no light on how to correctly use probability in real problems.
Completely agree. The general applicable method is:
Naturally, this produces the answer 1/2 for the Sleeping Beauty problem.
If Beauty thinks the probability of Heads is 1/2, she presumably thinks the probability that it is Monday is (1/2)+(1/2)*(1/2)=3/4
This is a description of Lewisian Halfism reasoning, which is incorrect for the Sleeping Beauty problem.
I describe the way the Beauty is actually supposed to reason about the betting scheme on a particular day here.
She needs a real probability.
Indeed. And the domain of a real probability function is an event space, consisting of properly defined events for the probability experiment. "Today is Monday" is ill-defined in the Sleeping Beauty setting. Therefore it can't have a probability.
[ bowing out after this - I'll read responses and perhaps update on them, but probably won't respond (until next time) ]
To see that the probability of Heads is not "either 1/2 or 1/3, depending on what reference class you choose
I disagree. Very specifically, it's 1/2 if your reference class is "fair coin flips" and 1/3 if your reference class is "temporary, to-be-erased experience of victims with adversarial memory problems".
If your reference class is "wakenings who are predicting what day it is", as in the muffin variety, then 1/3 is a bit easier to work with (though you'd need to specify payoffs to explain why she'd EVER eat the muffin, and then 1/2 becomes pretty easy too). This is roughly equivalent to the non-memory-wiping wager: I'll flip a fair coin, you predict heads or tails. If it's heads, the wager will be $1; if it's tails, the wager is $2. The probability of tails is not 2/3, but you'd pay up to $0.50 to play, right?
OK, I'll end by just summarizing that my position is that we have probability theory, and we have decision theory, and together they let us decide what to do. They work together. So for the wager you describe above, I get probability 1/2 for Heads (since it's a fair coin), and because of that, I decide to pay anything less than $0.50 to play. If I thought that the probability of heads was 0.4, I would not pay anything over $0.20 to play. You make the right decision if you correctly assign probabilities and then correctly apply decision theory. You might also make the right decision if you do both of these things incorrectly (your mistakes might cancel out), but that's not a reliable method. And you might also make the right decision by just intuiting what it is. That's fine if you happen to have good intuition, but since we often don't, we have probability theory and decision theory to help us out.
One of the big ways probability and decision theory help is by separating the estimation of probabilities from their use to make decisions. We can use the same probabilities for many decisions, and indeed we can think about probabilities before we have any decision to make that they will be useful for. But if you entirely decouple probability from decision-making, then there is no longer any basis for saying that one probability is right and another is wrong - the exercise becomes pointless. The meaningful justification for a probability assignment is that it gives the right answer to all decision problems when decision theory is correctly applied.
As your example illustrates, correct application of decision theory does not always lead to you betting at odds that are naively obtained from probabilities. For the Sleeping Beauty problem, correctly applying decision theory leads to the right decisions in all betting scenarios when Beauty thinks the probability of Heads is 1/3, but not when she thinks it is 1/2.
[ Note that, as I explain in my top-level answer in this post, Beauty is an actual person. Actual people do not have identical experiences on different days, regardless of whether their memory has been erased. I suspect that the contrary assumption is lurking in the background of your thinking that somehow a "reference class" is of relevance. ]
If the SB always guesses heads, she'll be correct 1/3 of the time. For that reason, 1/3 is her credence.
Preamble
Motivation
I was recently introduced to the canonical Sleeping Beauty problem and initially was a halfer but confused. Or more like I thought the halfer position was correct, but smart people seemed to be thirders and I was worried I was misunderstanding something about the problem, or confused myself or similar.
I debated the problem extensively on the LW Discord server and with some LLMs and strongly updated towards "thirders are just engaging in gross epistemic malpractice".
A message I sent in the LW server:
I still have some meta level uncertainty re: the nonsense allegations.
I want to be convinced that the thirder position is not nonsense and there is a legitimate disagreement/debate to be had here.
I have read some of the LW posts on the canonical problem here. I won't be linking them due to laziness.
I have not yet read Joe Carlsmith's blog posts or Nick Bostrom's book as at posting this question. I'll probably be listening to them after posting the question.
I asked Sonnet 3.5 to distill my position/rejections from our debate and below is its summary[1]
Comprehensive Position on the Sleeping Beauty Problem
1. Core Position
2. Bayesian Reasoning
3. Frequentist Critique
Key Issues with Frequentist Approach:
Misunderstanding Indistinguishable Events:
Conflating Processes with Outcomes:
Misapplying Frequentist Logic:
Ignoring Problem Structure:
Counterargument to Thirder Position:
4. Self-Locating Beliefs
5. Anthropic Reasoning Rejection
Expanded Argument:
Specific Anthropic Argument and Counterargument:
6. Distinguishability vs. Probability
7. Betting Strategies vs. Probabilities
Expanded Argument:
Specific Example:
Analogy to Clarify:
Counterargument to Thirder Position:
8. Counterfactuals and Different Problems
9. Information Relevance
10. Epistemological Stance
11. Common Thirder Arguments Addressed
12. Meta-level Considerations
13. Openness to Counter-Arguments
This position maintains that the Sleeping Beauty problem, when correctly analyzed using Bayesian principles, does not provide any new information that would justify updating the prior 50/50 probability of the coin flip. It challenges readers to present counter-arguments that do not rely on commonly rejected reasoning patterns and that strictly adhere to Bayesian updating based on genuinely new, discriminatory evidence.
Closing Remarks
I am probably unjustified in my arrogance.
Some people who I strongly respect (e.g. Nick Bostrom) are apparently thirders.
This is IMO very strong evidence that I am actually just massively misunderstanding something or somehow mistaken here (especially as I have not yet engaged with Nick Bostrom's arguments as at the time of writing this post).
On priors I don't really expect to occupy an (on reflection endorsed) epistemic state where I think Nick Bostrom is making a basic epistemology mistake.
So I expect this is a position I can be easily convinced out of/I myself am misunderstanding something fundamental about the problem.
I made some very light edits to the probability/odds treatment in point 7 to resolve factual inaccuracies. ↩︎