This is the ninth post in my series on Anthropics. The previous one is The Solution to Sleeping Beauty.  The next one is Semantic Disagreement of Sleeping Beauty Problem.

Introduction

There are some quite pervasive misconceptions about betting with regard to the Sleeping Beauty problem.

One is that you need to switch between halfer and thirder stances based on the betting scheme proposed. As if learning about a betting scheme is supposed to affect your credence in an event.

Another is that halfers should bet at thirders odds and, therefore, thirdism is vindicated on the grounds of betting. What do halfers even mean by probability of Heads being 1/2 if they bet as if it's 1/3?

In this post we are going to correct them. We will see how to arrive at the correct betting odds from both the thirdist and halfist positions, and why they are the same. We will also explore the core problems with betting arguments as a way to answer probability theory problems and, taking those into account, construct several examples showing the superiority of the correct halfer position in Sleeping Beauty.

Different Probabilities for Different Betting Schemes?

The first misconception has even found its way to the Less Wrong wiki:

If Beauty's bets about the coin get paid out once per experiment, she will do best by acting as if the probability is one half. If the bets get paid out once per awakening, acting as if the probability is one third has the best expected value.

It originates from the fact that there are two different scoring rules, counting per experiment and per awakening. If we aggregate using the per experiment rule, we get P(Heads) = 1/2 - the probability that the coin is Heads in a random experiment. If we aggregate using the per awakening rule, we get P(Heads) = 1/3 - the probability that the coin is Heads in a random awakening. The grain of truth here is that you can indeed use this as a quick heuristic for the correct betting odds.
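As a sanity check, here is a minimal Monte Carlo sketch of the two scoring rules (my own illustration; the code and names are not from the original post):

```python
import random

def simulate(n_experiments=100_000):
    """Estimate P(Heads) under both scoring rules: Heads -> one
    awakening, Tails -> two awakenings (Monday and Tuesday)."""
    heads_experiments = 0
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n_experiments):
        heads = random.random() < 0.5
        if heads:
            heads_experiments += 1
            heads_awakenings += 1   # the single Heads awakening
            total_awakenings += 1
        else:
            total_awakenings += 2   # two Tails awakenings
    print("per experiment:", heads_experiments / n_experiments)    # ~1/2
    print("per awakening: ", heads_awakenings / total_awakenings)  # ~1/3

simulate()
```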

However, as I've shown in the previous post, only the former probability is mathematically sound for the Sleeping Beauty problem, because awakenings do not happen at random. So, it would've been very strange if we really needed to switch to a wrong model to get the correct answer in some betting schemes. Beyond a quick and lossy heuristic, it would be a very bad sign if we were unable to get the optimal betting odds from the correct model.

It would mean that there is something wrong with it, that we didn't really answer the question fully and are now just rationalizing, like all the previous philosophers who endorsed a solution contradicting probability theory and then came up with some clever reasoning for why it's fine.

And of course, we do not actually need to do that. As a matter of fact, even thirders - people who are mistaken about the answer to Sleeping Beauty - can totally deal with both per experiment and per awakening bets.

Let U(X) be the utility gained due to the realization of event X. Then we can calculate the expected utility of a bet on X as:

E(X) = P(X)U(X) - ΣP(Xᵢ)U(Xᵢ)

where Xᵢ - mutually exclusive events that together comprise the complement of X: X₁ ∪ X₂ ∪ ... = ¬X
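As a sketch, the same formula in code (the function name and the representation of events are my own):

```python
def expected_utility(p_x, u_x, complement):
    """E(X) = P(X)*U(X) - sum of P(x)*U(x) over the mutually
    exclusive events x making up the complement of X.

    `complement` is a list of (probability, utility) pairs."""
    return p_x * u_x - sum(p * u for p, u in complement)
```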

Thirder Per Awakening Betting

Let's start with the natural-to-them per awakening betting scheme:

On every awakening the beauty can bet on the result of the coin toss. What betting odds should she accept?

In this betting scheme both Tails awakenings are equally rewarded, so:

U(Tails&Monday) = U(Tails&Tuesday) = U(Tails)

According to thirder models:

P(Heads&Monday) = P(Tails&Monday) = P(Tails&Tuesday) = 1/3, therefore:

E(Heads) = (1/3)U(Heads) - (1/3)U(Tails) - (1/3)U(Tails) = (1/3)(U(Heads) - 2U(Tails))

Solving E(Heads) ≥ 0 for U(Heads) we get:

U(Heads) ≥ 2U(Tails)

Which means that the utility gained from the realization of Heads should be at least twice as big as the utility of the realization of Tails, so that betting on Heads isn't net negative.

And thus the betting odds should be 1:2.

Thirder Per Experiment Betting

Now, let's look into per experiment betting.

The beauty can bet on the result of the coin toss while she is awakened only once per experiment. What betting odds should she accept?

From the position of thirders, this situation is a bit trickier. Here either U(Tails&Monday) or U(Tails&Tuesday) is zero, as betting on one of the Tails awakenings doesn't count. Their sum, however, is constant:

U(Tails&Monday) + U(Tails&Tuesday) = U(Tails)

As before, P(Heads&Monday) = P(Tails&Monday) = P(Tails&Tuesday) = 1/3; taking it into account:

E(Heads) = (1/3)U(Heads) - (1/3)(U(Tails&Monday) + U(Tails&Tuesday)) = (1/3)(U(Heads) - U(Tails))

Solving E(Heads) ≥ 0 for U(Heads) we get:

U(Heads) ≥ U(Tails)

Which means 1:1 betting odds.
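Both thirder results can be checked numerically with the helper sketched above (restated here so the snippet is self-contained; setting U(Tails) = 1 is an arbitrary normalization):

```python
def expected_utility(p_x, u_x, complement):
    return p_x * u_x - sum(p * u for p, u in complement)

u_tails = 1.0

# Per awakening bet on Heads: both Tails awakenings pay out;
# breaks even exactly when U(Heads) = 2*U(Tails):
print(expected_utility(1/3, 2 * u_tails, [(1/3, u_tails), (1/3, u_tails)]))  # 0.0

# Per experiment bet on Heads: only one Tails awakening counts,
# U(T&Mon) + U(T&Tue) = U(Tails); breaks even when U(Heads) = U(Tails):
print(expected_utility(1/3, u_tails, [(1/3, u_tails), (1/3, 0.0)]))          # 0.0
```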

Do Halfers Need to Bet on Thirders Odds?

The result from the previous section isn't exactly a secret. It even led to a misconception that halfers have to bet on thirders' odds, and therefore betting arguments validate thirdism.

Now, it has to be said that correctly reasoning halfers indeed have to bet at the same odds as thirders - 1:1 for per experiment betting and 1:2 for per awakening betting. But this is in no way a validation of thirdism; halfers have as much claim to these odds as thirders. It's only an unfortunate occurrence that they happened to be initially called "thirders odds".

Historically, the model most commonly associated with answering that P(Heads)=1/2 is Lewis's. When people were comparing it with thirder models, they named the odds that the former produces "halfer odds" and the odds that the latter produces "thirder odds". Which was quite understandable at the time.

Now we know that Lewis's model is a wrong representation of halfism in Sleeping Beauty, and indeed fails to produce correct betting odds for reasons explored in previous posts. The correct halfer model, naturally, doesn't have such problems. But the naming had already stuck, confusing a lot of people along the way.

Halfer Per Awakening Betting

Let's see for ourselves which odds the correct model recommends, starting with the per awakening betting scheme.

On every awakening the beauty can bet on the result of the coin toss. What betting odds should she accept?

Tails&Monday, Tails&Tuesday and Tails are all different names for the same outcome, as we remember, so:

P(Tails&Monday) = P(Tails&Tuesday) = P(Tails) = 1/2

On the other hand, both Monday and Tuesday awakenings are rewarded when the coin is Tails, so:

U(Tails) = 2u, where u is the reward for a single awakening, and therefore:

E(Heads) = (1/2)U(Heads) - (1/2)·2u

Solving E(Heads) ≥ 0 for U(Heads):

U(Heads) ≥ 2u

Just as previously, we got 1:2 betting odds.

This situation is essentially making a bet on the outcome of a coin toss, where the same bet has to be repeated if the coin comes Tails. Betting at 1:2 odds doesn't say anything about the unfairness of the coin or about having some new knowledge of its state. Instead, it's fully explained by the unfairness of the betting scheme, which rewards Tails outcomes more.
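Here is a minimal simulation of this "repeated bet" view (my own sketch, with illustrative utilities): a bet on Heads at 1:2 odds, resolved at every awakening, breaks even for a fair coin:

```python
import random

def per_awakening_bet(n=100_000, u_heads=2.0, u_per_awakening=1.0):
    """Bet on Heads resolved at every awakening: win u_heads once on
    Heads, lose u_per_awakening at each of the two Tails awakenings."""
    total = 0.0
    for _ in range(n):
        if random.random() < 0.5:
            total += u_heads               # Heads: one awakening, bet won
        else:
            total -= 2 * u_per_awakening   # Tails: the bet is lost twice
    return total / n

print(per_awakening_bet())  # ~0.0: breaks even exactly at 1:2 odds
```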

Halfer Per Experiment Betting

Now, let's check the per experiment betting scheme.

The beauty can bet on the result of the coin toss while she is awakened only once per experiment. What betting odds should she accept?

This time the Tails outcome isn't rewarded twice, so everything is trivial:

E(Heads) = (1/2)U(Heads) - (1/2)U(Tails)

So if E(Heads) ≥ 0:

U(Heads) ≥ U(Tails)

And we have 1:1 betting odds. Easy as that.

Betting Odds Are a Poor Proxy For Probabilities

Why do models claiming that probabilities are different produce the same betting odds? That doesn't usually happen, does it?

Because betting odds depend on both the probabilities and the utilities of the events. Usually we are dealing with situations where the utilities are fixed, so probabilities are the only variable; therefore, when two models disagree about probabilities, they disagree about betting as well.

But in the Sleeping Beauty problem, the crux of the disagreement is how to correctly factorize the product E(X) = P(X)U(X). What happens when the Beauty has extra awakenings and extra bets? One approach is to modify the utility part. The other - to modify the probabilities.

I've already explained why the first one is correct - probabilities follow specific rules according to which they are lawfully modified, so that they keep preserving the truth. But for the sake of betting it doesn't appear to matter. 

Betting odds do not have to follow Kolmogorov's third axiom. 10:20 odds are as well defined as 1:2. It's just a ratio; you can always renormalize it, which you can't do to probabilities. You can define a betting scheme that ignores the condition of mutual exclusiveness of the outcomes, which is impossible when you define a sample space. Betting odds are an imperfect approximation of probability, one that captures only the frequencies of events and not their other statistical properties.

This is why incorrect thirder models manage to produce correct betting odds. All the reasons why these models are wrong stop mattering when only betting is concerned. And this is why betting is a poor proxy for probabilities - it ignores or obfuscates a lot of information.

For quite some time I've been arguing that we can't reduce probability theory to decision theory. That while decision making and betting is an obvious application of probability, it's not its justification. That all such attempts are backwards, confused thinking.

The Sleeping Beauty problem is a great example of how simply thinking in terms of betting can lead people astray. People found models that produce correct betting odds and got stuck with them, not thinking further, believing that all the math work was done and they now just needed to come up with some philosophical principle justifying the models.

And so the "Shut Up and Calculate" crowd happened to silently compute nonsense.

If a probabilistic model produces incorrect betting odds, it's clearly wrong. But if it produces correct odds, that still doesn't mean it's the right one! Correct betting odds are a necessary but not a sufficient condition. You also need to account for theoretical properties of probabilities which are not captured by betting.

If I hadn't resolved this in the previous post, we would be in a conundrum, still thinking that both models are valid. It's good that now we know better. And yet there is an interesting question: can we still, somehow, despite all the aforementioned problems, come up with a decision theoretic argument distinguishing between thirdism and the correct version of halfism?

As a matter of fact, I can present you with two of them.

Utility Instability under Thirdism

The reason why, in most cases, disagreement about probabilities implies disagreement about bets is that we assume that while probabilities change based on available evidence, the utilities of events are constant and defined by the betting scheme. However, this is not the case with Thirdism in Sleeping Beauty, which implies not only constant shifts in utilities throughout the experiment but also that these shifts can go backwards in time.

Let's investigate what probabilities are assigned to the coin being Heads on Sunday - before the experiment started, on awakening during the experiment, and on Wednesday - when the experiment ended. The correct model is very straightforward in this regard:

P(Heads|Sunday) = P(Heads|Awake) = P(Heads|Wednesday) = 1/2

Updateless and Updating Thirder models do not agree on the correct value of P(Heads|Sunday), but let's use common sense and accept that it's 1/2, as it should be for a fair coin toss. Therefore:

P(Heads|Sunday) = P(Heads|Wednesday) = 1/2, but P(Heads|Awake) = 1/3

Suppose that the Beauty made a bet on Sunday at 1:1 odds, that the coin will come Heads. The bet is to be resolved on Wednesday when the outcome of the coin toss is publicly announced. What does she think about this bet when she awakes during the experiment? If she follows the correct halfer model - everything is fine. She keeps thinking that the bet is neutral in utility.

But a thirder Beauty suddenly finds herself in a situation where she is more confident that the coin came Tails than she used to be. How is she supposed to think about this? Should she regret the bet and wish she had never made it?

This is the usual behavior in such circumstances. Consider the Observer Sleeping Beauty Problem. There:

P(Heads|Sunday) = 1/2 and P(Heads|Awake) = 1/3

The observer is neutral about a bet on Heads at 1:1 odds made on Sunday, but if they then find the Beauty awake on their work day, they will regret the bet. If they are offered to pay a minor fee to consider the bet null and void, they are better off doing it.

Would Sleeping Beauty also be better off abolishing the bet for a minor fee? No, of course not. That would mean always paying the fee, thus predictably losing money in every experiment. But how is a thirder Beauty supposed to persuade herself not to agree?

Mathematically, abolishing such a bet is isomorphic to making an opposite bet at the same odds. And as we already established, making one per experiment bet at 1:1 odds is utility neutral, so a minor fee will be a deal breaker. The thirder's justification for this is that the utility of such a bet is halved on Tails, because only one of the Tails outcomes is rewarded.
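To make this bookkeeping concrete, consider a worked example with illustrative stakes of my choosing - a $100 bet on Heads at 1:1 odds made on Sunday:

```python
# Halfer on awakening: probabilities and utilities unchanged.
print(0.5 * 100 + 0.5 * (-100))  # 0.0

# Thirder on awakening: P(Tails) is now 2/3, so to keep the already-made
# bet neutral the loss on Tails must be retroactively halved (only one of
# the two Tails awakenings is taken to "count" for the Sunday bet).
print(1/3 * 100 + 2/3 * (-50))   # ~0.0
```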

But it means that a thirder Beauty should think as if the fact of her awakening in the experiment retroactively changes the utility of a bet that she has already made! Instead of leaving both probabilities and utilities unchanged, thirdism modifies both in a compensatory way.

A similar situation happens when the Beauty makes a bet during the experiment and then reflects on it on Wednesday. Halfer Beauty doesn't change her mind in any way, while thirder Beauty has to retroactively modify utilities of the previous bets to compensate for the back and forth changes of her probability estimates.

Which is just an unnecessarily complicated and roundabout way to arrive at the same conclusion as the correct halfer model. It doesn't bring any advantages; it just makes thinking about the problem more confusing.

Thirdism Ignores New Evidence

We already know that thirdism updates its probability estimate despite receiving no new evidence. But there is an opposite issue with it as well. It refuses to acknowledge actual relevant evidence, which may lead to confusion and suboptimal bets.

To see this let's investigate two modified settings, where the Beauty actually receives some kind of evidence on awakening.

Technicolor Sleeping Beauty

Technicolor Sleeping Beauty is a version of the original problem that I encountered in Rachael Briggs's Putting a Value on Beauty, where the idea was credited to Titelbaum.

The modified setting can be described as follows:

Sleeping Beauty experiment, but every day the room that the Beauty is in changes its color from Red to Blue or vice versa. The initial color of the room is determined randomly, with equal probability for Red and Blue.

Ironically enough, Briggs argues that Technicolor Sleeping Beauty presents an argument in favor of thirdism, because halfer Sleeping Beauty apparently changes her estimate of P(Heads), despite the fact that the color of the room "tells Beauty nothing about the outcome of the coin toss". But this is because she is begging the question, assuming that thirders' approach is correct to begin with.

Let's start with how thirders perceive the Technicolor problem. Just as Briggs claims, from their perspective it seems completely isomorphic to regular Sleeping Beauty. They believe that the color of the room is irrelevant to the outcome of the coin toss.

And so a thirder Beauty has the same probability estimate for Technicolor Sleeping Beauty as for the regular one.

Which means the same betting odds. 1:2 for per awakening betting and 1:1 for per experiment one. Right?

And so, suppose that the Beauty, while going through the Technicolor variant, is offered one per experiment bet on Heads or Tails with odds between 1:2 and 1:1, for example 2:3. Should she always refuse the bet?

Take some time to think about this.

.

.

.

.

.

No, really, it's a trick question. Think about it for at least a couple of minutes before answering.

.

.

.

.

.

Okay, if, despite the name and introduction of this section and two explicit warnings, you still answered "Yes, the Beauty should always refuse to bet at these odds", then congratulations!

You were totally misled by thirdism!

The correct answer is that there is a better strategy than always refusing the bet. Namely: choose either Red or Blue beforehand and bet on Tails only when you see that the room is this color. This way the Beauty bets in 50% of experiments when the coin is Heads and in every experiment when it's Tails, which allows her to systematically win money at 2:3 odds.
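A minimal simulation sketch (mine, not from the post) checks the frequencies this argument relies on: among experiments where the bet is made, the coin is Tails about 2/3 of the time:

```python
import random

def technicolor(n=100_000):
    """Precommit to Red: bet on Tails only on an awakening in a red room."""
    bets_made, bets_tails = 0, 0
    for _ in range(n):
        tails = random.random() < 0.5
        monday_red = random.random() < 0.5
        # Tails -> two awakenings see both colors, so Red certainly occurs;
        # Heads -> one awakening, Red with probability 1/2.
        if tails or monday_red:
            bets_made += 1
            bets_tails += tails
    print("P(bet made):        ", bets_made / n)           # ~3/4
    print("P(Tails | bet made):", bets_tails / bets_made)  # ~2/3

technicolor()
```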

This strategy is obscured from thirders but is obvious to a Beauty who follows the correct, halfer model. She is fully aware that the Tails&Monday awakening is always followed by the Tails&Tuesday awakening, and so she is completely certain to observe both colors when the coin is Tails:

P(sees Red|Tails) = P(sees Blue|Tails) = 1, while P(sees Red|Heads) = P(sees Blue|Heads) = 1/2

So now she can lawfully construct the Frequency Argument and update. For example, if the Beauty selected Red and sees it:

P(Heads|sees Red) = P(sees Red|Heads)P(Heads) / P(sees Red) = (1/2 · 1/2) / (1/2 · 1/2 + 1 · 1/2) = 1/3

Therefore, the Beauty is supposed to accept 1:2 odds for per experiment betting.

Or, alternatively, she can bet every time the room is Blue. The nature of the probability update is the same. The important part is that she has to precommit to a strategy where she bets on one color and doesn't bet on the other.

 

Rare Event Sleeping Beauty

There is another modification of Sleeping Beauty with a similar effect.

Sleeping Beauty experiment, but the Beauty has access to a fair coin - not necessarily the one that determined her awakening routine - or any other way to generate random events.

It may seem that whether the Beauty has a coin or not is completely irrelevant to the probability that a different coin - the one tossed to determine the Beauty's awakening routine - came Heads. Once again, this is how thirders usually think about such a problem. And once again, this is incorrect.

Suppose the Beauty tosses a coin several times on every awakening. And suppose she observes a particular combination of Heads and Tails: C. Observing C is more likely when the initial coin came Tails and the Beauty had two awakenings, and therefore two attempts to observe this combination.

Let P(C) be the probability to observe the combination C on a single awakening, and P(C₂) - the probability to observe the combination C in at least one of two independent tries:

P(C₂) = 2P(C) - P(C)²

We can notice that P(C₂) → 2P(C) as P(C) → 0

Therefore, if the Beauty can potentially observe a rare event at every awakening, for instance a specific combination C, then, when she observes it, she can construct the Approximate Frequency Argument and update in favor of Tails:

P(Tails|C) = P(C₂)P(Tails) / (P(C₂)P(Tails) + P(C)P(Heads)) ≈ 2P(C) / (2P(C) + P(C)) = 2/3
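Here is a simulation sketch of this update (the parameters are illustrative): the Beauty precommits to a specific sequence of k tosses, and we estimate P(Tails) conditional on the sequence being observed on at least one awakening:

```python
import random

def rare_event(k=8, n=1_000_000):
    p = 2 ** -k   # chance of the precommitted sequence per awakening
    tails_and_seen, seen = 0, 0
    for _ in range(n):
        tails = random.random() < 0.5
        awakenings = 2 if tails else 1
        observed = any(random.random() < p for _ in range(awakenings))
        if observed:
            seen += 1
            tails_and_seen += tails
    print("P(Tails | observed):", tails_and_seen / seen)  # ~2/3 for small p

rare_event()
```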

Just like Technicolor Sleeping Beauty, this presents a strategy that allows her to come out ahead while betting per experiment at odds between 1:2 and 1:1 - a strategy that eludes thirders, who apparently have already "updated on awakening", thus missing the situation where they actually were supposed to update.

Now there is a potential confusion here. Doesn't Beauty always observe some rare event? Shouldn't she, therefore, always update in favor of Tails? Try to resolve it yourself. You have all the required pieces of the puzzle.

.

.

.

.

.

.

The answer is that no, of course she should not. The confusion comes from not understanding the difference between the probability of observing a specific low-probability event and the probability of observing any low-probability event. If the Beauty always observes some event, its probability is by definition 1 and, therefore, she can't construct the Approximate Frequency Argument. We can clearly see that, as P(C) = 1 implies P(C₂) = 2·1 - 1² = 1, so no update is possible.

And this is additionally supported by the betting argument in Rare Event versions of Sleeping Beauty. When the Beauty actually observes a rare event, she can systematically win money in per experiment bets with 2:3 odds, and when she does not observe a rare event, she can't.

Conclusion

So, now we can clearly see that thirdism in Sleeping Beauty does not have any advantages with regard to betting. On the contrary, its constant shifts of utilities and probabilities only obfuscate the situations where the Beauty actually receives new evidence and, therefore, has to change her betting strategy.

The correct model, however, successfully deals with every betting scheme and with derivative problems such as Technicolor and Rare Event Sleeping Beauty.

We can also add a final nail to the coffin of thirdism's theoretical justifications. As we can clearly see, when the Beauty actually receives some evidence allowing her to make a Frequency Argument, it leads to changes in her optimal per experiment betting strategy - contrary to what the Updating model claims.

I think we are fully justified in discarding thirdism altogether and simply moving on, as we have resolved all the actual disagreements. And yet we will linger for a little while. Because even though thirdism is definitely not talking about the probabilities and credences that a rational agent is supposed to have, it is still talking about something, and it's a curious question what exactly it has been talking about all this time that people misinterpreted as probabilities.

In the next post we will find the answer to this question and, therefore, dissolve the last, fully semantic disagreement between halfism and thirdism.

The next post in the series is Semantic Disagreement of Sleeping Beauty Problem.

Comments

The correct answer is that there is a better strategy than always refusing the bet. Namely: choose either Red or Blue beforehand and bet on Tails only when you see that the room is this color. This way the Beauty bets in 50% of experiments when the coin is Heads and in every experiment when it's Tails, which allows her to systematically win money at 2:3 odds.

 

You place $200 down, and receive $300 if the coin was indeed tails.

If the coin toss ends up heads, you have a 50% chance of losing $200 - expected utility is $-100.

If the coin toss is tails, you have a 100% chance of gaining $100 - expected utility is $100.

So you end up with expected 0 utility.

The point stands, but the odds have to be better than 2:3.
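A quick sketch of that expected value calculation (with the payoffs above):

```python
ev = (0.5 * 0.5 * -200   # Heads, and the room matches the chosen color: bet lost
      + 0.5 * 1.0 * 100) # Tails: the chosen color always appears once: bet won
print(ev)                # 0.0 -> exactly break-even at these payoffs
```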

The central point of the first half or so of this post  - that for E(X) = P(X)U(X) you could choose different P and U for the same E so bets can be decoupled from probabilities - is a good one.

I would put it this way: choices and consequences are in the territory*; probabilities and utilities are in the map.

Now, it could be that some probability/utility breakdowns are more sensible than others based on practical or aesthetic criteria, and in the next part of this post ("Utility Instability under Thirdism") you make an argument against thirderism based on one such criterion.

However, your claim that Thirder Sleeping Beauty would bet differently before and after the coin toss is not correct. If Sleeping Beauty is asked before the coin toss to bet based on the same reward structure as after the toss she will bet the same way in each case - i.e. Thirder Sleeping Beauty will bet Thirder odds even before the experiment starts, if the coin toss being bet on is particularly the one in this experiment and the reward structure is such that she will be rewarded equally (as assessed by her utility function) for correctness in each awakening.

Now, maybe you find this dependence on what the coin will be used for counterintuitive, but that depends on your own particular taste.

Then, the "technicolor sleeping beauty" part seems to make assumptions where the reward structure is such that it only matters whether you bet or not in a particular universe and not how many times you bet. This is a very "Halfer" assumption on reward structure, even though you are accepting Thirder odds in this case! Also, Thirders can adapt to such a reward structure as well, and follow the same strategy.  

Finally, on Rare Event Sleeping beauty, it seems to me that you are biting the bullet here to some extent to argue that this is not a reason to favour thirderism.

I think, we are fully justified to discard thirdism all together and simply move on, as we have resolved all the actual disagreements.

uh....no. But I do look forward to your next post anyway.

*edit: to be more correct, they're less far up the map stack than probability and utilities. Making this clarification just in case someone might think from that statement that I believe in free will (I don't).

Throughout your comment you've been using the phrase "thirders odds", apparently meaning odds 1:2, without specifying whether it's per awakening or per experiment. This is an underspecified and confusing category which we should taboo.

As I show in the first part of the post, thirder odds are the exact same thing as halfer odds: 1:2 per awakening and 1:1 per experiment.

However, your claim that Thirder Sleeping Beauty would bet differently before and after the coin toss is not correct.

I do not claim that. I say that in order to justify not betting differently, thirders have to retroactively change the utility of a bet already made:

Mathematically, abolishing such a bet is isomorphic to making an opposite bet at the same odds. And as we already established, making one per experiment bet at 1:1 odds is utility neutral, so a minor fee will be a deal breaker. The thirder's justification for this is that the utility of such a bet is halved on Tails, because only one of the Tails outcomes is rewarded.

But it means that a thirder Beauty should think as if the fact of her awakening in the experiment retroactively changes the utility of a bet that she has already made! Instead of leaving both probabilities and utilities unchanged, thirdism modifies both in a compensatory way.

I critique thirdism not for making different bets - as the first part of the post explains, the bets are the same, but for their utilities not actually behaving like utilities - constantly shifting back and forth during the experiment, including shifts backwards in time, in order to compensate for the fact that their probabilities are not behaving as probabilities - because they are not sound probabilities as explained in the previous post.

Thirder Sleeping Beauty will bet Thirder odds even before the experiment starts, if the coin toss being bet on is particularly the one in this experiment and the reward structure is such that she will be rewarded equally (as assessed by her utility function) for correctness in each awakening.

Now, maybe you find this dependence on what the coin will be used for counterintuitive, but that depends on your own particular taste.

Wait, are you claiming that thirder Sleeping Beauty is supposed to always decline the initial per experiment bet at 1:1 odds, made before the coin was tossed? This is wrong - both halfers and thirders are neutral towards such bets, though they appeal to different reasoning for why.

Then, the "technicolor sleeping beauty" part seems to make assumptions where the reward structure is such that it only matters whether you bet or not in a particular universe and not how many times you bet. This is a very "Halfer" assumption on reward structure, even though you are accepting Thirder odds in this case! Also, Thirders can adapt to such a reward structure as well, and follow the same strategy.  

Some reward structures feel more natural for halfers and some for thirders - this is true. But a good model for a problem is supposed to deal with any possible betting scheme without significant difficulties. Thirders probably can arrive at the correct answer post hoc, if explicitly primed by a question: "at what odds are you supposed to bet if you bet only when the room is red?". But what I'm pointing at is that thirdism naturally fails to develop an optimal strategy for the per experiment bet in the technicolor problem, falsely assuming that it's isomorphic to regular Sleeping Beauty. Nothing about their probabilistic model hints to them that betting only when the room is red is the correct move. Their probability estimate is the same despite new evidence about the state of the coin toss, and so they are oblivious that there is a better strategy than always refusing the bet.

Technicolor and Rare Event problems highlight the issue that I explain in Utility Instability under Thirdism - in order to make optimal bets thirders need to constantly keep track of not only probability changes but also utility changes, because their model keeps shifting both of them back and forth, and this can be very confusing. Halfers, on the other hand, just need to keep track of probability changes, because their utilities are stable. Basically, thirdism is strictly more complicated without any benefits, and we can discard it on the grounds of Occam's razor, if we haven't already discarded it because of its theoretical unsoundness, explained in the previous post.

Finally, on Rare Event Sleeping beauty, it seems to me that you are biting the bullet here to some extent to argue that this is not a reason to favour thirderism.

I'm confused. What bullet am I biting? How can the fact that the thirder probabilistic model misses the situation where the per experiment betting odds are actually 1:2 be an argument in favor of thirdism?

The Rare Event problem is such that the answer is about 1/3 only in a small number of cases. The halfer model correctly highlights the rule for determining which cases these are and how to develop the correct betting strategy. The thirder model just keeps answering 1/3 like a broken clock.

uh....no.

What do you still feel that is unresolved?

Throughout your comment you've been using the phrase "thirders odds", apparently meaning odds 1:2, without specifying whether it's per awakening or per experiment. This is an underspecified and confusing category which we should taboo.

Yeah, that was sloppy language, though I do like to think more in terms of bets than you do. One of my ways of thinking about these sorts of issues is in terms of "fair bets" - each person thinks a bet with payoffs that align with their assumptions about utility is "fair", and a bet with payoffs that align with different assumptions about utility is "unfair".  Edit: to be clear, a "fair" bet for a person is one where the payoffs are such that the betting odds where they break even matches the probabilities that that person would assign.

I do not claim that. I say that in order to justify not betting differently, thirders have to retroactively change the utility of a bet already made:

I critique thirdism not for making different bets - as the first part of the post explains, the bets are the same, but for their utilities not actually behaving like utilities - constantly shifting back and forth during the experiment, including shifts backwards in time, in order to compensate for the fact that their probabilities are not behaving as probabilities - because they are not sound probabilities as explained in the previous post.

Wait, are you claiming that thirder Sleeping Beauty is supposed to always decline the initial per experiment bet at 1:1 odds, made before the coin was tossed? This is wrong - both halfers and thirders are neutral towards such bets, though they appeal to different reasoning for why.

OK, I was also being sloppy in the parts you are responding to.

Scenario 1: bet about a coin toss, nothing depending on the outcome (so payoff equal per coin toss outcome)

  • 1:1

Scenario 2: bet about a Sleeping Beauty coin toss, payoff equal per awakening

  • 2:1 

Scenario 3: bet about a Sleeping Beauty coin toss, payoff equal per coin toss outcome 

  • 1:1

It doesn't matter if it's agreed to before or after the experiment, as long as the payoffs work out that way. Betting within the experiment is one way for the payoffs to more naturally line up on a per-awakening basis, but it's only relevant (to bet choices) to the extent that it affects the payoffs.

Now, the conventional Thirder position (as I understand it) consistently applies equal utilities per awakening when considered from a position within the experiment.

I don't actually know what the Thirder position is supposed to be from a standpoint from before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well. 

As I see it, Thirders will only regret a bet (in the sense of considering it a bad choice to enter into ex ante given their current utilities) if you do some kind of bait and switch where you don't make it clear what the payoffs were going to be up front.

But what I'm pointing at is that thirdism naturally fails to develop an optimal strategy for the per experiment bet in the technicolor problem, falsely assuming that it's isomorphic to regular Sleeping Beauty.

Speculation; have you actually asked Thirders and Halfers to solve the problem? (while making clear the reward structure? - note that if you don't make clear what the reward structure is, Thirders are more likely to misunderstand the question asked if, as in this case, the reward structure is "fair" from the Halfer perspective and "unfair" from the Thirder perspective).

Technicolor and Rare Event problems highlight the issue that I explain in Utility Instability under Thirdism - in order to make optimal bets thirders need to constantly keep track of not only probability changes but also utility changes, because their model keeps shifting both of them back and forth, and this can be very confusing. Halfers, on the other hand, just need to keep track of probability changes, because their utilities are stable. Basically, thirdism is strictly more complicated without any benefits, and we can discard it on the grounds of Occam's razor, if we haven't already discarded it because of its theoretical unsoundness, explained in the previous post.

A Halfer has to discount their utility based on how many of them there are, a Thirder doesn't. It seems to me, on the contrary to your perspective, that Thirder utility is more stable.

The halfer model correctly highlights the rule for determining which cases these are and how to develop the correct betting strategy. The thirder model just keeps answering 1/3 like a broken clock.

... and I in my hasty reading and response I misread the conditions of the experiment (it's a "Halfer" reward structure again). (As I've mentioned before in a comment on another of your posts, I think Sleeping Beauty is unusually ambiguous so both Halfer and Thirder perspectives are viable. But, I lean toward the general perspectives of Thirders on other problems (e.g. SIA seems much more sensible (edit: in most situations) to me than SSA), so Thirderism seems more intuitive to me). 

Thirders can adapt to different reward structures but need to actually notice what the reward structure is! 

What do you still feel that is unresolved?

the things mentioned in this comment chain. Which actually doesn't feel like all that much, it feels like there's maybe one or two differences in philosophical assumptions that are creating this disagreement (though maybe we aren't getting at the key assumptions).

Edited to add: The criterion I mainly use to evaluate probability/utility splits is typical reward structure - you should assign probabilities/utilities such that a typical reward structure seems "fair", so you don't wind up having to adjust for different utilities when the rewards have the typical structure (you do have to adjust if the reward structure is atypical, and thus seems "unfair"). 

This results in me agreeing with SIA in a lot of cases. An example of an exception is Boltzmann brains. A typical reward structure would give no reward for correctly believing that you are a Boltzmann brain. So you should always bet in realistic bets as if you aren't a Boltzmann brain, and for this to be "fair", I set P=0 instead of SIA's U=0.  I find people believing silly things about Boltzmann brains like taking it to be evidence against a theory if that theory proposes that there exists a lot of Boltzmann brains. I think more acceptance of the setting of P=0 instead of U=0 here would cut that nonsense off. To be clear, normal SIA does handle this case fine (that a theory predicting Boltzmann brains is not evidence against it), but setting P=0 would make it more obvious to people's intuitions.

In the case of Sleeping Beauty, this is a highly artificial situation that has been pared down of context to the point that it's ambiguous what would be a typical reward structure, which is why I consider it ambiguous.

One of my ways of thinking about these sorts of issues is in terms of "fair bets"

Well, as you may see, it also is not helpful. Halfers and thirders disagree on which bets they consider "fair" but still agree on which bets to make, whether they call them fair or not. The extra category of a "fair bet" just adds another semantic disagreement between halfers and thirders. Once we specify whether we are talking about a per experiment or per awakening bet and at which odds, both theories are supposed to agree.

I don't actually know what the Thirder position is supposed to be from a standpoint from before the experiment, but I see no contradiction in assigning equal utilities per awakening from the before-experiment perspective as well.

Thirders tend to agree with halfers that P(Heads|Sunday) = P(Heads|Wednesday) = 1/2. Likewise, because they make the same bets as the halfers, they have to agree on utilities. So it means that thirder utilities go back and forth, which is weird and confusing behavior.

A Halfer has to discount their utility based on how many of them there are, a Thirder doesn't. It seems to me, on the contrary to your perspective, that Thirder utility is more stable

You mean how many awakenings? That if there were not two awakenings on Tails but, for instance, ten, halfers would have to think that U(Heads) has to be ten times as much as U(Tails) for a utility neutral per awakening bet?

Sure, but this is completely normal behavior. It's fine to have different utility estimates for different problems and different payout schemes - such things always happen. Sleeping Beauty with ten awakenings on Tails is a different problem than Sleeping Beauty with only two, so there is no reason to expect the utilities of the events to be the same. The point is that as long as we have specified the experiment and a betting scheme, the utilities have to be stable.

And thirder utilities are modified during the experiment. They are not just specified by a betting scheme; they go back and forth based on the knowledge state of the participant - behaving the way probabilities are supposed to behave. And that's because they are partially probabilities - a result of an incorrect factorization of E(X).

Speculation; have you actually asked Thirders and Halfers to solve the problem? (while making clear the reward structure?

I'm asking it right in the post, explicitly stating that the bet is per experiment and recommending to think about the question more. What did you yourself answer?

My initial claim that the thirder model confuses them about this per experiment bet is based on the fact that the pro-thirder paper which introduced the technicolor sleeping beauty problem totally fails to understand why the halfer scoring rule updates in it. I may be putting too much weight on the views of Rachael Briggs in particular, but it apparently was peer reviewed and so on, so it seems to be decent evidence.

... and I in my hasty reading and response I misread the conditions of the experiment 

Well, I guess that answers my question.

Thirders can adapt to different reward structures but need to actually notice what the reward structure is!

Probably, but I've yet to see one actually derive the correct answer on their own, rather than post hoc after it was already spoiled or after consulting the correct model. I suppose I should have asked the question beforehand and then published the answer - oh well. Maybe I can still do it and ask people nicely not to look.

The criterion I mainly use to evaluate probability/utility splits is typical reward structure

Well, if every other thirder reasons like this, that would indeed explain the issue.

You can't base the definition of probability on your intuitions about fairness. Or, rather, you can, but then you are risking contradicting the math. Probability is a mathematical concept with very specific properties. In my previous post I talk about it specifically and show that thirder probabilities for Sleeping Beauty are ill-defined.

Well, as you may see, it also is not helpful

My reasoning explicitly puts instrumental rationality ahead of epistemic. I hold this view precisely to the degree which I do in fact think it is helpful.

The extra category of a "fair bet" just adds another semantic disagreement between halfers and thirders. 

It's just a criterion by which to assess disagreements, not adding something more complicated to a model.

Regarding your remarks on these particular experiments:

If someone thinks the typical reward structure is some reward structure, then they'll by default guess that a proposed experiment has that reward structure.

This reasonably can be expected to apply to halfers or thirders. 

If you convince me that halfer reward structure is typical, I go halfer. (As previously stated since I favour the typical reward structure). To the extent that it's not what I would guess by default, that's precisely because I don't intuitively feel that it's typical and feel more that you are presenting a weird, atypical reward structure!

And thirder utilities are modified during the experiment. They are not just specified by a betting scheme; they go back and forth based on the knowledge state of the participant - behaving the way probabilities are supposed to behave. And that's because they are partially probabilities - a result of an incorrect factorization of E(X).

Probability is a mathematical concept with very specific properties. In my previous post I talk about it specifically and show that thirder probabilities for Sleeping Beauty are ill-defined.

I've previously shown that some of your previous posts incorrectly model the Thirder perspective, but I haven't carefully reviewed and critiqued all of your posts. Can you specify exactly what model of the Thirder viewpoint you are referencing here? (This will not only help me critique it but also help me determine what exactly you mean by the utilities changing in the first place, i.e. whether you count Thirders evaluating the total utility of a possibility branch more highly when there are more of them as a "modification" - I would not consider this a "modification".)

In one of your previous posts you said that 'What Beauty actually learns is that "she is awoken at least once"' and in this post you say "Therefore, if the Beauty can potentially observe a rare event at every awakening, for instance a specific combination C, then, when she observes it, she can construct the Approximate Frequency Argument and update in favor of Tails."

I think this is a mistake, because when you experience Y during Sleeping Beauty, it is not the same thing as learning that "Y at least once." See this example: https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf

Conitzer's example is that 2 coins are flipped on Sunday. Beauty wakes up day 1 and sees coin 1, then wakes up day 2 and sees coin 2. When she wakes up and sees a coin, what is her credence that the coins are the same?

I think everyone would agree that the probability is 1/2. However, suppose she sees tails. If she learns "at least one tails" then the probability of "coins are the same" would be only 1/3. Therefore, even though she can see tails, she did not learn "at least one tails". 

Similarly, if she observes C, that is not the same thing as "C at least once." She learned "C today" which seems like it does not allow updating any probabilities, for all the reasons you have given earlier. So rare events, such as a specific sequence of coin flips Beauty knew in advance, should still not allow probability updates.

She learned "C today" which seems like it does not allow updating any probabilities, for all the reasons you have given earlier

It's not that she observed "C today" but it doesn't allow her to update any probabilities because "reasons". The core point of the previous post is that she didn't observe "C today", because "today" is ill-defined for the probability experiment she is participating in. I talk about it in more detail in a comment thread starting from here. Pay attention to Markvy's attempts to formally define the event "Today is Monday" for the Sleeping Beauty setting and failing to do so, while it's easy to do for some other problems like Single Awakening or No-Coin-Toss.

So rare events such as a specific sequence of coin flips Beauty knew in advance, should still not allow probability updates.

She didn't know the sequence in advance the way she knew, for example, that she is to be awakened on Monday. She made a guess and managed to guess right. The difference is that on a repetition of the probability experiment she is awakened on Monday in every iteration of it, but the sequence of the tosses matches the one she precommitted to only in a small fraction of all iterations of the experiment.

See this example: https://users.cs.duke.edu/~conitzer/devastatingPHILSTUD.pdf

Conitzer's example is that 2 coins are flipped on Sunday. Beauty wakes up day 1 and sees coin 1, then wakes up day 2 and sees coin 2. When she wakes up and sees a coin, what is her credence that the coins are the same?

I think everyone would agree that the probability is 1/2.

I'm glad that someone engages with non-Lewisian halfism but this is clearly wrong on a very basic level. To understand why, let's consider a simpler problem:

Two coins are tossed. Then you are told the state of one of the coins, but you don't know whether it's the first coin or the second. Then you are told whether it was the first coin or the second. What should be your credence that the states of both coins are the same 1. before you were told the state of one of the coins? 2. after you were told the state of one of the coins? 3. after you were told which coin it was?

  1. You have four equiprobable outcomes: HT, TH, HH, TT. So the answer is 1/2.
  2. Now there are two possibilities: either you were told "Heads" or you were told "Tails". In the first case the outcome TT is eliminated; in the second, HH is. Either way you end up with only one possible outcome where the states of the coins are the same and two outcomes where they are not. Therefore the answer is 1/3.
  3. Here we initially had either HT, TH, TT or HT, TH, HH as possible outcomes; in both cases either HT or TH is eliminated. And so we end up with one outcome where the coins are the same and one outcome where they are not. Therefore the answer is once again 1/2.
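These three steps can be verified by brute-force enumeration (a sketch following the reasoning above):

```python
from itertools import product

coins = list(product("HT", repeat=2))   # all four equiprobable outcomes

# 1. No information yet:
print(sum(a == b for a, b in coins) / len(coins))  # 0.5

# 2. Told the state of one coin, say "Heads" (TT eliminated):
shown = [(a, b) for a, b in coins if "H" in (a, b)]
print(sum(a == b for a, b in shown) / len(shown))  # 1/3

# 3. Then told it was, say, the first coin (TH eliminated as well):
first = [(a, b) for a, b in coins if a == "H"]
print(sum(a == b for a, b in first) / len(first))  # 0.5
```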

These are all completely normal bayesian updates that a person participating in the experiment is supposed to make. But if we accept Conitzer's reasoning, we will have to declare that they violate the Reflection principle!

After all you are certain that after you are shown one of the coins your credence will be 1/3. Why isn't your credence 1/3 to begin with? And then you are certain that after being told which coin it is your credence once again will be 1/2! As Conitzer writes:

This is perhaps the most egregious violation of the Reflection Principle that we have encountered, because in this case she is not put to sleep and does not have memories erased as she transitions from one credence to another

Frankly, it seems as if Conitzer forgot that credence can change for reasons not related to memory loss. That one can simply receive new evidence and change their credence based on that. Which is obviously the case in both this simple problem and his version of Sleeping Beauty. In both cases the participant at first doesn't know the state of the coin that will be revealed, then receives this information and lawfully updates credences, but still is not aware which coin it was. And then this information is also revealed, and therefore the credence is updated once again.

I'm glad that someone engages with non-Lewisian halfism but this is clearly wrong on a very basic level. To understand why, let's consider a simpler problem:

Two coins are tossed. Then you are told the state of one of the coins, but you don't know whether it's the first coin or the second. Then you are told whether it was the first coin or the second. What should be your credence that the states of both coins are the same 1. before you were told the state of one of the coins? 2. after you were told the state of one of the coins? 3. after you were told which coin it was?

 

You are creating a related but different and also complicated problem: the Two Child Problem, which is notoriously ambiguous. "Then you are told the state of one of the coins" can have many meanings.

If I ask the experimenter "choose one of the coins randomly and tell me what it is" then I am not able to update my probability. It will still be 1/2 that the coins are the same.

If I ask the experimenter "is there at least one heads?" then I will be able to update. If they say yes I can update to 1/3, if they say no I can update to 1.

 

Frankly, it seems as if Conitzer forgot that credence can change for reasons not related to memory loss. That one can simply receive new evidence and change their credence based on that.

Conitzer's problem can be simplified further by letting Beauty flip a coin herself on Monday and Tuesday.

She wakes up Monday and flips a coin. She wakes up Tuesday and flips a coin. That's it.

After flipping a coin, what should her credence be that the coin flips are the same? 

Do you disagree now that the answer is 1/2?

I think it is clearly 1/2 precisely because there is no new evidence. The violation of the Reflection Principle is secondary. More importantly, something has gone wrong if we think she can flip a coin and update the probability of the coins being the same. 

 

She didn't know the sequence in advance the way she knew, for example, that she is to be awakened on Monday. She made a guess and managed to guess right. The difference is that on a repetition of the probability experiment she is awakened on Monday in every iteration of it, but the sequence of the tosses matches the one she precommitted to only in a small fraction of all iterations of the experiment.

I agree, but she doesn't get to observe the sequence of tosses in the experiment. She isn't even able to observe that a sequence of tosses happens "at least once" in the experiment. That's what Conitzer shows in his problem.

She can't update her probability based on observing a rare event C (as you have defined it), because she can't observe C in the first place.

A version without amnesia is not exactly the same situation, but something similar can happen. Suppose the experimenter flips a coin: on heads they will flip one new sequence of 1000, on tails they will flip 2 new sequences of 1000. I ask the experimenter "randomly choose one of the sequences and tell me the result", and they tell me the result was 1000 heads in a row. A sequence of 1000 heads in a row is more likely to have occurred at least once if they flipped 2 sequences. But this does not allow me to update my probability of the number of sequences, because I have not learned "there is at least one sequence of 1000 heads."

You are creating a related but different and also complicated problem: the Two Child Problem, which is notoriously ambiguous. "Then you are told the state of one of the coins" can have many meanings.

If I ask the experimenter "choose one of the coins randomly and tell me what it is" then I am not able to update my probability. It will still be 1/2 that the coins are the same.

If I ask the experimenter "is there at least one heads?" then I will be able to update. If they say yes I can update to 1/3, if they say no I can update to 1.

What if no one is answering your questions? You are just shown one coin with no insight into the algorithm according to which the experimenter showed it to you, other than that this is the outcome of the coin toss. There is actually a least presumptuous way to reason about such things. And this is the one I described.

But never mind that. Let's not go on an unnecessary tangent about this problem. For the sake of the argument I'm happy to concede that both 1/2 and 1/3 are reasonable answers to it. However, Conitzer's reasoning would imply that only 1/2 is the correct answer. As a matter of fact, he simply assumes that 1/2 has to be correct, refusing to entertain the idea that it's not the case. Not unlike Briggs in Technicolor.

Conitzer's problem can be simplified further by letting Beauty flip a coin herself on Monday and Tuesday.

She wakes up Monday and flips a coin. She wakes up Tuesday and flips a coin. That's it.

After flipping a coin, what should her credence be that the coin flips are the same? 

Do you disagree now that the answer is 1/2?

If she just flipped the coin then the answer is 1/2. If she observed the event "the coin is Tails" then the answer is 1/3. If she observed the event "the coin is Heads" the answer is 1/3. But doesn't she always observe one of these events in every iteration of the experiment? No, she doesn't.

This is the same situation as with Technicolor Sleeping Beauty. She observes the event "Blue" instead of "Blue or Red" only when she has configured her event space in a specific way, by precommitting to this outcome in particular. Likewise here. When the Beauty precommitted to Tails, flips the coin and sees that the coin is indeed Tails, she has observed the event "the coin is Tails"; when she made no precommitments or the coin turned out to be the other side, she observed the event "the coin is Heads or Tails".

I think it is clearly 1/2 precisely because there is no new evidence. The violation of the Reflection Principle is secondary. More importantly, something has gone wrong if we think she can flip a coin and update the probability of the coins being the same. 

Of course something has gone wrong. This is what you get when you add amnesia to probability theory problems - it messes up the event space in a counterintuitive way. By default you are able to observe only the most general events, which have probability one. Like "I'm awake at least once in the experiment" or "the room is either Blue or Red at least once in the experiment". To observe more specific events you need precommitments.

To see that this actually works, check the betting arguments for Rare Event and Technicolor Sleeping Beauty from the post. Likewise, we can construct a betting argument for Conitzer's example, where you go through an iterated experiment and are asked to make one per experiment bet on the fact that both coins did not produce the same outcome, with betting odds a bit worse than 1:1. The optimal betting strategy is not to always refuse the bet, as it would've been if the probability actually always were 1/2 and you did not get any new evidence.
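Here is a simulation sketch of that betting argument (the payoffs are illustrative): bet that the coins differ at slightly worse than 1:1 odds, accepting only on awakenings where the precommitted outcome, Tails, is observed:

```python
import random

def conitzer_bet(n=100_000, win=100, loss=110):
    precommit_total, always_total = 0, 0
    for _ in range(n):
        c1, c2 = random.random() < 0.5, random.random() < 0.5  # True = Tails
        payoff = win if c1 != c2 else -loss
        if c1 or c2:              # some awakening shows Tails: bet accepted
            precommit_total += payoff
        always_total += payoff    # baseline: always accept
    print("precommit to Tails:", precommit_total / n)  # ~ +22.5
    print("always accept:     ", always_total / n)     # ~ -5.0

conitzer_bet()
```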

She isn't even able to observe that a sequence of tosses happens "at least once" in the experiment.

She is able to observe that a particular sequence of tosses happens "at least once" in the experiment only if she has precommitted to guessing this particular sequence. Otherwise she, indeed, does not observe this event.

Rules for per experiment betting seem to be imprecise. What exactly does it mean that Beauty can bet only once per experiment? Does it mean that she is offered the bet only once in case of Tails? If so, is she offered the bet on Monday or Tuesday or is the day randomly selected? Or does it mean that she is offered the bet on both Monday and Tuesday and only one bet counts if she accepts both? If so, which one? Monday bet, Tuesday bet, or is it randomly selected?

Depending on the answer, a Thirder could base his decision on:

P(H/Today is Monday)=1/2, P(H/Today is my last awakening)=1/2, or P(H/Today is the randomly selected day my bet counts/is offered to me)=1/2

and therefore escape utility instability?

There are indeed ways to obfuscate the utility instability under thirdism with different betting schemes where it's less obvious, as the probability relevant to betting isn't P(Heads|Awake) = 1/3 but one of those you mention, which equal 1/2.

The way to define the scheme specifically for P(Heads|Awake) is this: you get asked to bet on every awakening. One agreement is sufficient, and only one agreement counts. No random selection takes place.

This way the Beauty doesn't get any extra evidence when she is asked to bet, therefore she can't update her credence for the coin being Heads based on the sole fact of being asked to bet, the way you propose.

Sure, if the bet is offered only once per experiment, Beauty receives new evidence (from a thirder‘s perspective) and she could update.

In case the bet is offered on every awakening: do you mean if she gives conflicting answers on Monday and Tuesday that the bet nevertheless is regarded as accepted?

My initial idea was, that if for example only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it was Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. Same logic holds for rule „last awakening counts“ and „random awakening counts“.

In case the bet is offered on every awakening: do you mean if she gives conflicting answers on Monday and Tuesday that the bet nevertheless is regarded as accepted?

Yes I do. 

Of course, if the experiment is run as stated she wouldn't be able to give conflicting answers, so the point is moot. But having a strict algorithm for resolving such theoretical cases is a good thing anyway.

My initial idea was, that if for example only her Monday answer counts and Beauty knows that, she could reason that when her answer counts it is Monday, arriving at the conclusion that it is reasonable to act as if it was Monday on every awakening, thus grounding her answer on P(H/Monday)=1/2. Same logic holds for rule „last awakening counts“ and „random awakening counts“.

Yes, I got it. As a matter of fact, this is unlawful. A probability estimate is about the evidence you receive, not about what "counts" for a betting scheme. If Beauty receives the same evidence when her awakening counts and when it doesn't, she can't update her probability estimate. If, in order to arrive at the correct answer, she needs to behave as if every day were Monday, it means that there is something wrong with her model.

Thankfully for thirdism, she does not have to do that. She can just assign zero utility to the Tuesday awakening and get the correct betting odds.

Anyway, all this is quite tangential to the question of utility instability, which is about Beauty making a bet on Sunday and then reflecting on it during the experiment, even if no bets are proposed. According to thirdism, the probability of the coin being Heads changes on awakening, so, in order for Beauty not to regret making an optimal bet on Sunday, her utility has to change as well. Hence, utility instability.
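To make the instability concrete, here is a small numeric sketch with illustrative numbers of my choosing: a per-experiment 1:1 bet on Heads, with stake and reward both equal to 1.

p_sunday = 1/2                                # credence in Heads on Sunday
e_sunday = p_sunday * 1 - (1 - p_sunday) * 1  # = 0: the 1:1 bet is acceptable

p_awake = 1/3                                 # thirder credence in Heads upon awakening
e_awake = p_awake * 1 - (1 - p_awake) * 1     # = -1/3: the same unresolved bet now looks bad

e_rescaled = p_awake * 2 - (1 - p_awake) * 1  # = 0: only doubling U(Heads) restores the bet

A halfer's credence stays at 1/2 throughout, so no such rescaling is needed.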

Honestly, I do not see any unlawful reasoning going on here. First of all, it's certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to suggest the probability of certain events and to describe how probabilities are affected by the realization of other events. A strategy, on the other hand, is to guide decision making to arrive at certain predefined goals.

My point is that the probabilities a model suggests you to have, based on the currently available evidence, do NOT necessarily have to match the probabilities that are relevant to your strategy and decisions. If Beauty is awake and doesn't know if it is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today. If she knows that her bet only counts on Monday and her probability model suggests that „Today is Monday" is relevant for H, then ideal rationality requires her to base her decision on P(H/Monday), because she knows that Monday is realized when her decision counts. This guarantees that on her Monday awakening, when her decision counts, she is calculating the probability for Heads based on all relevant evidence that is realized on that day.

It is true that the thirder model does not suggest such a strategy, but suggesting strategies, and therefore suggesting which probabilities are relevant for decisions, is not the job of a probability model anyway. Similar is the case of the Technicolor Beauty: the strategy „only updating if Red" is neither suggested nor hinted at by your model. All your model suggests are probabilities conditional on the realization of certain events. It can't tell you to treat the observation „Red room" as a realization of the event „There is an awakening in a red room" while treating the observation „Blue room" merely as a realization of the event „There is an awakening in a red or a blue room" instead of „There is an awakening in a blue room". The observation of a blue room is always a realization of both of these events, and it is your strategy „tracking red", and not your probability model, that suggests preferring one over the other as the relevant evidence to calculate your probabilities. I had been thinking this over for a while after I recently discovered this „updating only if Red" strategy for myself and saw how it could be directly derived from the halfer model. But I honestly see no better justification to apply it than the plain fact that it proves to be more successful in the long run.

First of all, it's certainly important to distinguish between a probability model and a strategy. The job of a probability model is simply to suggest the probability of certain events and to describe how probabilities are affected by the realization of other events. A strategy, on the other hand, is to guide decision making to arrive at certain predefined goals.

Of course. As soon as we are talking about goals and strategies, we are not talking about just probabilities anymore; we are also talking about utilities and expected utilities. However, probabilities do not suddenly change because of it. The probabilistic model stays the same; there are simply additional considerations as well.

My point is that the probabilities a model suggests you to have, based on the currently available evidence, do NOT necessarily have to match the probabilities that are relevant to your strategy and decisions.

Whether or not your probability model leads to optimal decision making is the test that allows us to falsify it. There are no separate "theoretical probabilities" and "decision-making probabilities". Only the ones that guide your behaviour can be correct. What's the point of a theory that is not applicable in practice, anyway?

If your model claims that the probability based on your evidence is 1/3 but the optimal decision making happens when you act as if it's 1/2, then your model is wrong and you switch to a model that claims that the probability is 1/2. That's the whole reason why betting arguments are popular.

If Beauty is awake and doesn't know if it is the day her bet counts, it is in fact a rational strategy to behave and decide as if her bet counts today.

Questions of what "counts" or "matters" are not the realm of probability. However, the Beauty is free to adjust her utilities based on the specifics of the betting scheme.

All your model suggests are probabilities conditional on the realization of certain events.

The model says that 

P(Heads|Red) = 1/3 

P(Heads|Blue) = 1/3

but

P(Heads|Red or Blue) = 1/2

Which obviously translates into a betting scheme: someone who bets on Tails only when the room is Red wins 2/3 of the time, and someone who bets on Tails only when the room is Blue wins 2/3 of the time, while someone who always bets on Tails wins only 1/2 of the time.
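Here is a simulation sketch of these win rates. The setup is my reading of Technicolor Sleeping Beauty, stated as an assumption: on Tails Beauty wakes on Monday and Tuesday, on Heads only on Monday; the room is Red on one of the two days and Blue on the other, in random order; and the bet on Tails is per experiment.

import random

def technicolor_win_rates(n=100_000):
    placed = {"Red": 0, "Blue": 0, "always": 0}
    won = {"Red": 0, "Blue": 0, "always": 0}
    for _ in range(n):
        tails = random.random() < 0.5
        colors = random.sample(["Red", "Blue"], 2)   # Monday color, then Tuesday color
        seen = colors if tails else colors[:1]       # Beauty sees both colors only on Tails
        for trigger in ("Red", "Blue"):
            if trigger in seen:                      # bet on Tails only upon seeing this color
                placed[trigger] += 1
                won[trigger] += tails
        placed["always"] += 1                        # bet on Tails unconditionally
        won["always"] += tails
    return {k: won[k] / placed[k] for k in placed}

print(technicolor_win_rates())   # ~{'Red': 0.667, 'Blue': 0.667, 'always': 0.5}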

These win rates lead to the conclusion that observing the event "Red" instead of "Red or Blue" is possible only for someone who has been expecting to observe the event "Red" in particular. Likewise, observing HTHHTTHT is possible for a person who was expecting this particular sequence of coin tosses, instead of any combination of length 8. See Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events.

„Whether or not your probability model leads to optimal decision making is the test that allows us to falsify it."

Sure, I don't deny that. What I am saying is that your probability model doesn't tell you which probability you have to base a certain decision on. If you can derive a probability from your model and provide a good reason to consider this probability relevant to your decision, your model is not falsified as long as you arrive at the right decision. Suppose a simple experiment where the experimenter flips a fair coin and you have to guess Heads or Tails, but you are only rewarded for a correct decision if the coin comes up Tails. Then, of course, you should still entertain the unconditional probabilities P(Heads)=P(Tails)=1/2. But this uncertainty is completely irrelevant to your decision. What is relevant, however, is P(Tails/Tails)=1 and P(Heads/Tails)=0, concluding that you should follow the strategy of always guessing Tails. Another way to arrive at this strategy is to calculate expected utilities setting U(Heads)=0, as you would propose. But this is not the only reasonable solution. It's just a different route of reasoning to take into account the experimental condition that your decision counts only if the coin lands Tails.

„The model says that P(Heads|Red) = 1/3, P(Heads|Blue) = 1/3, but P(Heads|Red or Blue) = 1/2. Which obviously translates into a betting scheme: someone who bets on Tails only when the room is Red wins 2/3 of the time, and someone who bets on Tails only when the room is Blue wins 2/3 of the time, while someone who always bets on Tails wins only 1/2 of the time."

A quick translation of the probabilities is:

P(Heads/Red)=1/3: If your total evidence is Red, then you should entertain probability 1/3 for Heads.

P(Heads/Blue)=1/3: If your total evidence is Blue, then you should entertain probability 1/3 for Heads.

P(Heads/Red or Blue)=1/2: If your total evidence is Red or Blue, which is the case if you know that red or blue (or both) occurred but not which exactly, then you should entertain probability 1/2 for Heads.

If the optimal betting scheme requires you to rely on P(Heads/Red or Blue)=1/2 when receiving evidence Blue, then the betting scheme demands that you ignore your total evidence. Ignoring total evidence does not necessarily invalidate the probability model, but it certainly needs justification. Otherwise, by strictly following total evidence, your model will also run afoul of the Reflection Principle, since you will arrive at probability 1/3 in every single experimental run.

Going one step back: with my translation of the conditional probabilities above, I have made the implicit assumption that the way the agent learns evidence is not biased towards a certain hypothesis. But this is obviously not true for Beauty: due to the memory loss, Beauty is unable to learn the evidence „Red and Blue" regardless of the coin toss. Combined with her sleeping through Tuesday if Heads, this means she is going to learn „Red" and „Blue" (but not „Red and Blue") if Tails, while she is only going to learn either „Red" or „Blue" if Heads, resulting in a bias towards the Tails hypothesis.

I admit that P(Heads/Red)=P(Heads/Blue)=1/3 but P(Heads/Red or Blue)=1/2 hints at the existence of that information selection bias. However, this is just as little a feature of your model as a flat tire is a feature of your car because it prompts you to fix it. It is not your probability model that guides you to adopt the proper betting strategy by ignoring total evidence. In fact, it is just the other way around: your knowledge about the bias guides you to partially dismiss your model. As mentioned above, this does not necessarily invalidate your model, but it shows that directly applying it in certain decision scenarios does not guarantee optimal decisions and can even lead to bad decisions and violations of the Reflection Principle.

Therefore, as a halfer, I would prefer an updating rule that takes the bias into account and tells me P(Heads/Red)=P(Heads/Blue)=P(Heads/Red or Blue)=1/2, while offering me a possible workaround to arrive at your betting scheme. One such workaround is that Beauty runs a simulation of another experiment within her original Technicolor experiment, in which she is only awoken in a Red room. She can easily simulate that, and the same updating rule that tells her P(Heads/Red)=1/2 for the original experiment tells her P(Heads/Red)=1/3 for the simulated experiment.

„These win rates lead to the conclusion that observing the event "Red" instead of "Red or Blue" is possible only for someone who has been expecting to observe the event "Red" in particular. Likewise, observing HTHHTTHT is possible for a person who was expecting this particular sequence of coin tosses, instead of any combination of length 8. See Another Non-Anthropic Paradox: The Unsurprising Rareness of Rare Events"

I have already refuted this way of reasoning in the comments of your post.

Sure, I don't deny that. What I am saying is that your probability model doesn't tell you which probability you have to base a certain decision on.

It says which probability you have, based on what you've observed. If you observed that it's Monday, you are supposed to use the probability conditional on the fact that it's Monday; if you didn't observe that it's Monday, you can't lawfully use the probability conditional on it. Simple as that.

There is a possible confusion where people may think that they have observed "this specific thing happened" while actually they observed "anything from some group of things happened", which is what the technicolor and rare event cases are about.

Suppose a simple experiment where the experimenter flips a fair coin and you have to guess Heads or Tails, but you are only rewarded for a correct decision if the coin comes up Tails. Then, of course, you should still entertain the unconditional probabilities P(Heads)=P(Tails)=1/2. But this uncertainty is completely irrelevant to your decision.

Here you are confusing probability and utility. The fact that P(Heads)=P(Tails)=1/2 is very much relevant to our decision making! The correct reasoning goes like this:

P(Heads) = 1/2

P(Tails) = 1/2

U(Heads) = 0

U(Tails) = X,

E(Tails) = P(Tails)U(Tails) - P(Heads)U(Heads) = X/2 - 0 = X/2

Solving E(Tails) = 0 for X:

X = 0

Which means that E(Tails) is non-negative for any X ≥ 0, so you shouldn't bet on Heads at any odds.
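The same computation as a few lines of code, with an arbitrary positive reward X = 1 chosen purely for illustration:

p_heads, p_tails = 1/2, 1/2             # the coin is still fair
reward = 1.0                            # X: reward for a correct Tails guess

e_guess_tails = p_tails * reward        # rewarded only when the coin lands Tails
e_guess_heads = p_heads * 0.0           # U(Heads) = 0: a correct Heads guess pays nothing

print(e_guess_tails, e_guess_heads)     # 0.5 vs 0.0: always guess Tails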

What is relevant, however, is P(Tails/Tails)=1 and P(Heads/Tails)=0, concluding that you should follow the strategy of always guessing Tails.

And why did you happen to decide that it's P(Tails|Tails) = 1 and P(Heads|Tails) = 0 instead of

P(Heads|Heads) = 1 and P(Tails|Heads) = 0, which are "relevant" for your decision making?

You seem to just decide the "relevance" of probabilities post hoc, after you've already calculated the correct answer the proper way. I don't think you can formalize this line of thinking in a way that would let you systematically solve decision theory problems to which you do not yet know the answer. Otherwise, we wouldn't need utilities as a concept.

Another way to arrive at this strategy is to calculate expected utilities setting U(Heads)=0, as you would propose. But this is not the only reasonable solution. It's just a different route of reasoning to take into account the experimental condition that your decision counts only if the coin lands Tails.

This is not "another way". This is the right way. It has a proper formalization and actually allows us to arrive at the correct answer even if we do not yet know it.

If the optimal betting scheme requires you to rely on P(Heads/Red or Blue)=1/2 when receiving evidence Blue, then the betting scheme demands that you ignore your total evidence.

You do not "ignore your total evidence" - you are never supposed to do that. It's just that you didn't actually receive the evidence in the first place. You can observe the fact that the room is blue in the experiment only if you put your mind in a state where you distinguish blue in particular. Until then, your event space doesn't even include "Blue", only "Blue or Red".

But I suppose it's better to go to the comment section of Another Non-Anthropic Paradox for this particular crux.

„And why did you happen to decide that it's P(Tails|Tails) = 1 and P(Heads|Tails) = 0 instead of P(Heads|Heads) = 1 and P(Tails|Heads) = 0, which are "relevant" for your decision making? You seem to just decide the "relevance" of probabilities post hoc, after you've already calculated the correct answer the proper way. I don't think you can formalize this line of thinking in a way that would let you systematically solve decision theory problems to which you do not yet know the answer. Otherwise, we wouldn't need utilities as a concept."

No, it's not post hoc. The simple rule to follow is: if a certain value x of a random variable X is relevant to your decision, then base your decision on the probability of x conditional on all conditions that are known to be satisfied when your decision is actually linked to the consequences of interest. And this is P(x/Tails), and not P(x/Heads), in the case where guessing X is only rewarded if X=Tails.

Of course, the rule can't guarantee correct answers, since the correctness of your decision depends not only on the proper application of the rule but also on the quality of your probability model. However, notice that this feature could be used to test a probability model. For example, David Lewis's model of the original Sleeping Beauty experiment says P(Heads/Monday)=2/3, resulting in bad betting decisions if the bet only counts on Monday and the rule is applied. Thus, there must be something wrong either with the rule or with the model. Since the logic of the rule seems valid to me, it leads me to dismiss Lewis's model.
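A quick numeric check of that claim, under my illustrative assumption that a Beauty trusting P(Heads/Monday)=2/3 would be willing to stake 2 against a payout of 1 on Heads whenever the Monday bet counts:

import random

def lewis_monday_bet(n=100_000, payout=1.0, stake=2.0):
    # stake = 2 * payout is break-even pricing iff P(Heads|Monday) really were 2/3
    total = 0.0
    for _ in range(n):
        heads = random.random() < 0.5    # the actual frequency of Heads on Mondays
        total += payout if heads else -stake
    return total / n

print(lewis_monday_bet())   # ~ -0.5 per experiment: Lewis's pricing loses money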

„You do not "ignore your total evidence" - you are never supposed to do that. It's just that you didn't actually receive the evidence in the first place. You can observe the fact that the room is blue in the experiment only if you put your mind in a state where you distinguish blue in particular. Until then, your event space doesn't even include "Blue", only "Blue or Red". But I suppose it's better to go to the comment section of Another Non-Anthropic Paradox for this particular crux"

I've read your latest reply on this topic and I generally agree with it. As I already wrote, it is absolutely possible to create an event space that models a state of mind that is biased towards perceiving certain events (e.g. red) while neglecting others (e.g. blue). However, I find it difficult to understand how adopting such an event space, one that excludes an event that is relevant evidence according to your model, is not ignoring total evidence. This seems to me as if you were arguing that you don't ignore something because you are biased to ignore it. Or are you just saying that I was referring to the wrong mental concept, since we can only ignore what we actually do observe? Well, from my psychologist's point of view, I highly doubt that simply precommitting to red is a sufficient condition to reliably prevent the human brain from classifying the perception of blue as the event „blue room" instead of merely „a colored room (red or blue)". I guess most people would still subjectively experience themselves in a blue room.

Apart from that, is the concept of total evidence really limited to evidence that is actually observed, or does it rather refer to all evidence accessible to the agent, including evidence gained through further investigation, reflection, reasoning and inference beyond direct observation? Even if the evidence „blue room" was not initially observed by the agent due to some strong, biased mindset, the evidence would still be accessible to him and could therefore be considered part of his total evidence as long as the agent is able to break the mindset. After all, the experiment could be modified so that Beauty's memory of her precommitment on Sunday is erased while she sleeps and brought back to her mind by the experimenter only after she has awoken and seen the room. In this case, she has already observed a particular color before her Sunday mindset, which could have prevented this, is „reactivated".

mathematically sound

*ethically

Utility Instability under Thirdism

Works against Thirdism in the Fissure experiment too.

Technicolor Sleeping Beauty

I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? The whole question is how you decide to ignore that P(Heads|Blue) = 1/3 when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds" when you need to precommit to ignore it?

*ethically

No, I'm not making any claims about ethics here, just math.

Works against Thirdism in the Fissure experiment too.

Yep, because it's wrong in Fissure as well. But I'll be talking about it later.

I mean, if you are going to precommit to the right strategy anyway, why do you even need probability theory? 

To understand whether you should precommit to any strategy and, if you should, then to which one. The fact that

P(Heads|Blue) = P(Heads|Red) = 1/3

but

P(Heads|Blue or Red) = 1/2

means that you may precommit to either Blue or Red and it doesn't matter which, but if you don't precommit, you won't be able to guess Tails better than chance per experiment.

The whole question is how you decide to ignore that P(Heads|Blue) = 1/3 when you chose Red and see Blue. And how is it not "a probabilistic model produces incorrect betting odds" when you need to precommit to ignore it?

You do not ignore it. When you choose Red and see that the walls are blue, you do not observe the event "Blue". You observe the outcome "Blue", which corresponds to the event "Blue or Red", because the sigma-algebra of your probability space is affected by your precommitment.

You observe the outcome "Blue", which corresponds to the event "Blue or Red".

So you bet 1:1 on Red after observing this “Blue or Red”?

Yes! There is a 50% chance that the coin is Tails, and so the room is to be Red in this experiment.

No, I mean the Beauty awakes, sees Blue, gets a proposal to bet on Red with 1:1 odds, and you recommend accepting this bet?

Yes, if the bet is about whether the room takes the color Red in this experiment, which is what the event "Red" means in Technicolor Sleeping Beauty according to the correct model. The fact that you do not observe the event Red in this awakening doesn't mean that you don't observe it in the experiment as a whole.

The situation somewhat resembles learning that today is Monday and still being ready to bet at 1:1 that a Tuesday awakening will happen in this experiment. Though with colors there is actually an update: from 3/4 to 1/2.

What you probably meant to ask is whether you should agree to bet at 1:1 odds that the room is Red in this particular awakening, after you wake up and see that the room is Blue. And the answer is no, you shouldn't. But the probability space for Technicolor Sleeping Beauty is not talking about probabilities of events happening in this awakening, because most of them are ill-defined, for reasons explained in the previous post.

And the answer is no, you shouldn't. But the probability space for Technicolor Sleeping Beauty is not talking about probabilities of events happening in this awakening, because most of them are ill-defined, for reasons explained in the previous post.

So probability theory can't possibly answer whether I should take free money, got it.

And even if "Blue" means "Blue happens during the experiment", you wouldn't accept worse odds than 1:1 for Blue, even when you see Blue?

So probability theory can't possibly answer whether I should take free money, got it.

No, that's not what I said. You just need to use a different probability space with a different event - "observing Red on any particular day of the experiment".

You can do this because on every day the probability of observing each color is the same. Unlike, say, Tails in the initial coin toss, whose probability is 1/2 on Monday and 1 on Tuesday.

It's indeed a curious thing which I wasn't thinking about, because you can arrive at the correct betting odds on the color of the room for any day using the correct model for Technicolor Sleeping Beauty. As P(Red)=P(Blue) and the rewards are mutually exclusive, U(Red)=U(Blue), and therefore 1:1 odds. But this was sloppy of me, because to formally update when you observe the outcome you still need an appropriate separate probability space, even if the update is trivial.
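A sketch of the per-day frequencies under the same assumed setup as before (Red and Blue assigned to Monday and Tuesday in random order), showing why per-day color bets are 1:1 on any day while the same move is unavailable for Tails:

import random

def per_day_frequencies(n=100_000):
    stats = {"Monday": [0, 0, 0], "Tuesday": [0, 0, 0]}   # [awakenings, Red, Tails]
    for _ in range(n):
        tails = random.random() < 0.5
        colors = random.sample(["Red", "Blue"], 2)        # Monday color, then Tuesday color
        days = ["Monday", "Tuesday"] if tails else ["Monday"]
        for day, color in zip(days, colors):
            stats[day][0] += 1
            stats[day][1] += color == "Red"
            stats[day][2] += tails
    return {d: (red / total, t / total) for d, (total, red, t) in stats.items()}

print(per_day_frequencies())
# ~{'Monday': (0.5, 0.5), 'Tuesday': (0.5, 1.0)}: Red is at 1:1 odds on either day,
# while Tails has frequency 1/2 among Monday awakenings but 1 among Tuesday ones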

So thank you for bringing it to my attention; I'm going to talk more about it in a future post.

Sleeping Beauty is an edge case where different reward structures are intuitively possible, and so people imagine different game payout structures behind the definition of "probability". Once the payout structure is fixed, the confusion is gone. With a fixed payout structure and preference framework rewarding the number you output as "probability", people don't have a disagreement about what is the best number to output. Sleeping Beauty is about definitions.

And still, I see posts arguing that if a tree falls on a deaf Sleeping Beauty, in a forest with no one to hear it, it surely doesn’t produce a sound, because here’s how humans perceive sounds, which is the definition of a sound, and there are demonstrably no humans around the tree. (Or maybe that it surely produces the sound because here’s the physics of the sound waves, and the tree surely abides by the laws of physics, and there are demonstrably sound waves.)

This is arguing about definitions. You feel strongly that “probability” is that thing that triggers the “probability” concept neuron in your brain. If people have a different concept triggering “this is probability”, you feel like they must be wrong, because they’re pointing at something they say is a sound and you say isn’t.

Probability is something defined in math by necessity. There's only one way to do it without getting exploited in the natural betting schemes/reward structures that everyone accepts when no anthropics are involved. But if there are multiple copies of the agent, there's no longer a single possible betting scheme defining a single possible "probability", and people draw the boundary/generalise differently in this situation.

You all should just call these two probabilities two different words instead of arguing which one is the correct definition for "probability".

To be frank, it feels as if you didn't read any of my posts on Sleeping Beauty before writing this comment. As if you are simply annoyed by people arguing about substanceless semantics - and, believe me, I sympathise enormously! - assumed that I'm doing the same, based on the shallow pattern matching "talks about Sleeping Beauty -> semantic disagreement", and spilled your annoyance at me without validating whether your assumption is actually correct.

Which is a shame, because I've designed this whole series of posts with people like you in mind - someone who starts from the assumption that there are two valid answers, because that was the assumption I myself used to be quite sympathetic to, until I actually went forth and checked.

If that's indeed the case, please start here, and then I'd appreciate it if you actually engaged with the points I made, because that post addresses the kind of criticism you are making here.

If you had actually read all my Sleeping Beauty posts and seen me highlight the very specific mathematical disagreements between halfers and thirders, and how utterly ungrounded the idea of using probability theory with "centred possible worlds" is, I don't really understand how this kind of appeal to both sides still having a point can be a valid response.

Anyway, I'm going to address your comment step by step.

Sleeping Beauty is an edge case where different reward structures are intuitively possible

Different reward structures are possible in any probability theory problem. "Make a bet on a coin toss, but if the outcome is Tails the bet is repeated three times, and if it's Heads you get punched in the face" is a completely possible reward structure for a simple coin toss problem. Is it not very intuitive? Granted, but this is beside the point. Mathematical rules are supposed to always work, even in non-intuitive cases.
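For illustration, a sketch with made-up numbers (a 1:1 bet on Tails, a punch assumed to cost 5 units of utility, and each Tails repetition assumed to settle on the same outcome), showing how the reward structure moves the break-even point while P(Heads) stays 1/2:

p_heads = p_tails = 1/2                 # the coin itself is unaffected
win, stake, punch = 1.0, 1.0, 5.0       # assumed payoffs for the 1:1 bet on Tails

# on Tails the bet pays out three times; on Heads you lose the stake and get punched
e_bet_tails = p_tails * 3 * win - p_heads * (stake + punch)
print(e_bet_tails)                      # -1.5: decline, even though P(Tails) is still 1/2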

Once the payout structure is fixed, the confusion is gone.

People should agree on which bets to make - this is true and this is exactly what I show in the first part of this post. But the mathematical concept of "probability" is not just about bets - which I talk about in the middle part of this post. A huge part of the confusion is still very much present. Or so it was, until I actually resolved it in the previous post.

Sleeping Beauty is about definitions.

There definitely is a semantic component in the disagreement between halfers and thirders. But it's the least interesting one, and that's why I'm postponing talking about it until the next post.

The thing you seem to be missing is that there is also a real, objective disagreement which is obfuscated by the semantic one. People noticed that halfers and thirders use different definitions, came to the conclusion that semantics is all there is, and decided not to look further. But they totally should have.

My last two posts are talking about these objective disagreements. Is there an update on awakening or is there not? There is a disagreement about it even between thirders who apparently agree on the definition of "probability". Are the ways halfers and thirders define probability formally correct? It's a strictly defined mathematical concept, mind you, not some similarity-cluster category border like "sound". Are Tails&Monday and Tails&Tuesday mutually exclusive events? You can't just define mutual exclusivity however you like.

Probability is something defined in math by necessity.

Probability is a measure function over an event space. And if for some mathematical reason you can't construct an event space, your "probability" is ill-defined.

You all should just call these two probabilities two different words instead of arguing which one is the correct definition for "probability".

I'm doing both. I've shown that only one of these things formally is a probability, and in the next post I'm going to define the other thing and explore its properties.

I read the beginning and skimmed through the rest of the linked post. It is what I expected it to be.

We are talking about "probability" - a mathematical concept with a quite precise definition. How come we still have ambiguity about it?

Reading E. T. Jaynes might help.

Probability is what you get as a result of some natural desiderata related to payoff structures. When anthropics are involved, there are multiple ways to extend the desiderata that produce different numbers that you should say, depending on what you get paid for/what you care about, and accordingly different math. When there's only a single copy of you, there's only one kind of function, and everyone agrees on a function and then strictly defines it. When there are multiple copies of you, there are multiple possible ways you can be paid for having a number that represents something about the reality, and different generalisations of probability are possible.

This is surprising to me. Are you up for a more detailed discussion? What do you think about the statistical analysis and the debunking of centred possible worlds? I haven't seen these points raised or addressed before, and they are definitely not about semantics. The fact that sequential events are not mutually exclusive can be formally proven. It's not a matter of perspective at all! We could use the dialogues feature, if you'd like.

Probability is what you get as a result of some natural desiderata related to payoff structures. 

This is a vague gesture towards a similarity cluster, not an actual definition. Remove the fancy words and you end up with "probability has something to do with betting". Yes, it does. In this post I even specify exactly what it does. You don't need to read E. T. Jaynes to discover this revelation. The definition of expected utility is much more helpful.

When anthropics are involved, there are multiple ways to extend the desiderata that produce different numbers that you should say, depending on what you get paid for/what you care about, and accordingly different math.

There are always multiple ways to "extend the desiderata". But more importantly, you don't have to give different probability estimates depending on what you get paid for/what you care about. This is the exact kind of nonsense that I'm calling out in this post. Probabilities are about what evidence you have. Utilities are about what you care about. You don't need to use thirder probabilities for per-awakening betting. Do you disagree with me here?

When there’s only a single copy of you, there’s only one kind of function, and everyone agrees on a function and then strictly defines it. When there are multiple copies of you, there are multiple possible ways you can be paid for having a number that represents something about the reality, and different generalisations of probability are possible.

How is it different from talking about the probability of a specific person observing an event versus the probability of any person from a group observing an event? The fact that the people in the group are exact copies doesn't suddenly make anthropics a separate magisterium.

Moreover, there are no independent copies in Sleeping Beauty. On Tails, there are two sequential time states. The fact that people are trying to make a sample space out of them directly contradicts its definition.

When we are talking just about betting, one can always come up with one's own functions - one's own way to separate the expected utility of an event into "utility" and "probability". But then their "utilities" will be constantly shifting due to receiving new evidence, and their "probabilities" will occasionally ignore new evidence and shift for other reasons. Pointing at this kind of weird behavior is a completely reasonable reaction. Can a person still use such definitions consistently? Sure. But this is not a way to carve reality at its joints. And I'm not just talking about betting. I specifically wrote a whole post about the fundamental mathematical reasons before starting to talk about it.