This is the final part of my argument. Previous parts can be found here: I, II, III. To understand what follows, Part I at least should be read. Here I argue against SSA, explain why double-halving is correct, and touch on the implications of perspective disagreement for related topics such as the Doomsday Argument, the Presumptuous Philosopher and the Simulation Argument.


ARGUMENTS AGAINST SELF-SAMPLING ASSUMPTION

I think the most convincing argument against SSA was presented by Elga in 2000 (although he intended it as a counter to halving in general). He proposed that the coin toss could happen after the first awakening. Beauty's answer ought to remain the same regardless of the timing of the toss. SSA states that an observer should reason as if she were randomly selected from the set of all actual observers (past, present and future ones). If an imaginary selector randomly chooses a day among all waking day(s), he is guaranteed to pick Monday if the coin landed on H, but has only half the chance if it landed on T. From the selector's perspective, a Bayesian update should clearly be performed upon learning it is Monday. A simple calculation tells us his credence of H must be 2/3. As SSA dictates, this is also Beauty's answer once she knows it is Monday. However, the coin toss could potentially happen after this awakening. Beauty would then be predicting that a fair coin toss yet to happen will most likely land on H. This supernatural predicting power is conclusive evidence against SSA.
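For concreteness, here is a minimal Monte Carlo sketch of the selector's side of this calculation (my own illustration, not from Elga's paper; the setup and variable names are assumptions):

```python
import random

# The imaginary selector picks a waking day uniformly at random.
# We check his credence in Heads conditional on having picked Monday.
trials = 100_000
monday_picks = 0
monday_and_heads = 0
for _ in range(trials):
    heads = random.random() < 0.5
    waking_days = ["Mon"] if heads else ["Mon", "Tue"]
    if random.choice(waking_days) == "Mon":
        monday_picks += 1
        monday_and_heads += heads
print(monday_and_heads / monday_picks)  # ~0.667, i.e. the 2/3 update above
```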


However, if we recognize the importance of perspective disagreement, then Beauty is not bound to give the same answer as the selector. In fact, I would argue she should not perform a Bayesian update based on the new information. This can be explained in two ways.


One way is to put the new information into the frequentist approach mentioned in Part I. In Duplicating Beauties, when a beauty wakes up remembering 1000 repetitions, she shall reason that there are about 500 each of H and T among those 1000 tosses. The same conclusion would be reached by all beauties, without any of them knowing whether she is physically the original or was created somewhere along the way. Now suppose a beauty learns she is indeed the original. She would simply reason as the original beauty who wakes up remembering 1000 tosses. Those 1000 tosses would still contain about 500 each of H and T, meaning her answer shall remain at 1/2.
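A small sketch of this frequentist point (my own, under the setup above): follow only the physically original beauty through 1000 duplication tosses and count what her memory contains. Being the original does not bias the tosses she remembers.

```python
import random

# The original beauty's remembered tosses are just 1000 fair tosses;
# learning she is the original changes nothing about their frequencies.
remembered = [random.choice("HT") for _ in range(1000)]
print(remembered.count("H") / 1000)  # ~0.5, so her credence stays at 1/2
```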


Another way to see why Beauty should not perform a Bayesian update is to examine the agreement/disagreement pattern between her and the selector. It is worth noting that Beauty and the selector will be in agreement once it is known that she is the original. As stated in Part I, one way to understand the disagreement is that after T, seeing either beauty is the same observation for the selector, while they are different observations for the beauties. This in turn causes the selector to enter twice as many bets as Beauty. However, once we distinguish the two beauties by stating which one is the original, the selector's observation also differs depending on which beauty he sees. To put it differently, if a bet is only set between the selector and the original beauty, then the selector is no longer twice as likely to enter a bet in the case of T. He and Beauty would enter the bet with equal chances, meaning their betting odds ought to be the same, i.e. they must be in agreement regarding the credence of H.

To be specific, the disagreement/agreement pattern can be summarized as follows. Suppose the selector randomly chooses one of the two rooms, as described by SIA. Upon seeing a beauty in the room, the selector's probability for H changes from 1/2 to 1/3, while Beauty's probability remains at 1/2. The two of them are in disagreement. Once they learn the beauty is the original, the selector's probability of H increases from 1/3 back to 1/2 by a simple Bayesian update, while Beauty's probability still remains at 1/2. The two are now in agreement.

Alternatively, the selector can randomly choose one beauty from all existing beauty(ies), as described by SSA (here the total number of beauties must be shielded from the selector so as not to reveal the coin toss result). In this case, seeing a beauty gives the selector no new information, so his probability for H remains unchanged at 1/2. From Beauty's perspective, on the other hand, she is twice as likely to be chosen if there exists only one beauty instead of two, so upon seeing the selector her credence of H increases to 2/3. The two are again in disagreement. Once they learn the beauty is the original, the selector's credence of H increases to 2/3 by a Bayesian update, while Beauty does not update and her probability remains at 2/3. The two agree with each other once more.
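The selector's side of all four numbers can be checked with a quick simulation (a sketch of mine; the room layout is an assumption for illustration):

```python
import random
from collections import Counter

# "Room 1" always holds the original; after Tails a clone occupies room 2.
trials = 200_000
sia = Counter()  # selector opens one of the two rooms at random (SIA-like)
ssa = Counter()  # selector picks one existing beauty at random (SSA-like)
for _ in range(trials):
    heads = random.random() < 0.5
    beauties = ["original"] if heads else ["original", "clone"]

    room = random.randrange(2)
    if room < len(beauties):              # the opened room is occupied
        sia["seen"] += 1
        sia["seen&H"] += heads
        if beauties[room] == "original":
            sia["orig"] += 1
            sia["orig&H"] += heads

    picked = random.choice(beauties)      # always finds some beauty
    ssa["seen"] += 1
    ssa["seen&H"] += heads
    if picked == "original":
        ssa["orig"] += 1
        ssa["orig&H"] += heads

print(sia["seen&H"] / sia["seen"], sia["orig&H"] / sia["orig"])  # ~1/3, ~1/2
print(ssa["seen&H"] / ssa["seen"], ssa["orig&H"] / ssa["orig"])  # ~1/2, ~2/3
```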

As shown above, for the two to reach an agreement Beauty must not perform a Bayesian update on the new information. This holds true in both cases, regardless of how the selection is structured.

Beauty's anti-Bayesianism, just like the perspective disagreement, is quite unusual. I think this is because there is no random event determining which beauty is the original and which is the clone. While the coin toss may create a new copy of Beauty, nothing could ever turn her into the copy. The original beauty is always the original beauty; it is simply a tautology. There is no random soul jumping between the two bodies. Beauty's uncertainty arises from the structure of the experiment and is purely due to a lack of information. Compare this to the selector's situation. His choosing a room is a random event, so learning that the beauty in the chosen room is the original gives him new information about that random event. From his perspective, a Bayesian update should be performed. From Beauty's perspective, however, learning she is the original gives no new information about a random event, for the simple fact that there is no random event to begin with. It only gives information about her own perspective. So she should not perform a Bayesian update as the selector does.

DISCUSSIONS

With the above arguments in mind we can clearly see the importance of perspective disagreement in the Sleeping Beauty Problem. Whether the selector follows SSA or SIA, his answer will not always correctly reflect Beauty's. Once Beauty switches to the selector's perspective, her answers change. It is therefore important for us to consciously track our reasoning to make sure it contains no change of perspective. As shown above, learning that she exists does not confirm scenarios with more observers, just as learning that she is the first does not confirm scenarios with fewer observers. Applying the same logic means we should reject the Doomsday Argument while also disagreeing with the Presumptuous Philosopher. Not changing perspective also means an observer should never reason as if she were randomly selected (by an imaginary third party) from a certain reference class. This in turn means the Simulation Argument should be rejected (which I will discuss separately).

Another point worth mentioning is that disagreements among beauties are also reasonable. Imagine a beauty who has undergone a duplication coin toss and was told the result was T after waking up. She then undergoes another 9 iterations and wakes up remembering 10 tosses in total. At this point she is told that, of the two beauties resulting from the first toss, one experienced all Heads in the next 9 rounds, while the other and her clones experienced all Tails in those 9 rounds. Because there is no new information about which beauty she was after the first toss, each beauty would conclude with 1/2 confidence that she is the one who experienced the 9 Heads, even though there are 512 Tail beauties and only 1 Head beauty (see the count below). We can put all those beauties in the same room and let them communicate freely; since they have the same information, none of them would change her answer. Now we have them disagreeing with each other. This disagreement might seem alarming, but it is also valid. As discussed above, in problems involving duplication people with the same information can have different probabilities. The reason this disagreement seems more suspicious is that the resulting beauties appear to be in symmetrical positions and thus should not disagree with each other. However, this symmetry is only valid from a selector's perspective. From each beauty's own perspective, she is more likely to be the Head beauty than any one specific Tail beauty. A similar disagreement was discussed by John Pittard (2013).
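For the record, the 512-to-1 count works out as follows (a trivial sketch of my own):

```python
# After the first Tails there are 2 beauties. A branch that then sees
# 9 straight Heads stays a single beauty; a branch that sees 9 straight
# Tails doubles every round.
print(2 ** 9)  # 512 Tail beauties alongside the 1 Head beauty
```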

COMMENTS

I will just post the relationship between perspective reasoning and the simulation argument here.

In 2003 Nick Bostrom published his paper "Are You Living in a Computer Simulation?". In it he suggested that once a civilization reaches a highly developed state, it would have enough computing power to run "ancestor simulations". Such simulations would be indistinguishable from actual reality for their occupants. Furthermore, because the potential number and levels of such simulated realities is huge, almost all observers with experiences similar to ours would be living in such simulations. Therefore either civilizations such as ours will never run ancestor simulations, or we are almost certainly living in a simulation right now. Perhaps one of its most striking conclusions is that once we develop an ancestor simulation, or believe we eventually will develop one, we must conclude that we are simulated as well. This highly specific world-creation theory, while seeming very unlikely at first glance, is deemed almost certain if we apply the probabilistic reasoning described in the argument. I would argue that such probabilistic reasoning is in fact mistaken.

The argument states that if almost all observers with experiences similar to ours are simulated, we should conclude that we are almost certainly simulated. The core of this reasoning is the self-sampling assumption (SSA), which states that an observer should reason as if she were randomly selected from all observers. The top contender to SSA, used as a counterargument to one of its most (in)famous applications, the Doomsday Argument, is the self-indication assumption (SIA). SIA states that an observer should reason as if she were randomly selected from all potential observers. However, if we apply SIA to this idea, the result is even further confirmation that we are simulated: whether or not we would ever be able to run an ancestor simulation is no longer relevant; the very fact that we exist is evidence suggesting our reality is simulated.

However, if we apply the same perspective reasoning used in the Sleeping Beauty problem, this argument falls apart. Perspective reasoning states that, due to the existence of perspective disagreement between agents, an observer shouldn't reason as an imaginary third party who randomly selected her from a certain reference class. Picture a third party (a god) randomly choosing a person from all realities: it is obvious the selected person is most likely simulated if the majority of observers are. Without this logic, however, an observer can no longer draw that conclusion. Therefore, even after running an ancestor simulation, our credence of being simulated would not instantly jump to near certainty.

The immediate objection to this would be: in the duplicating beauty problem, upon learning the coin landed on T, Beauty's credence of being the clone rises from 1/4 to 1/2, so why does our credence of being simulated not rise accordingly once we run ancestor simulations? After all, the former case confirms the existence of a clone, while the latter confirms the existence of many simulated realities. The distinction is that the clone and the original are in symmetrical positions, whereas our reality and the realities simulated by us are not. In the case of duplicating beauty, although they can have different experiences after waking up, the original and the clone have identical information about the same coin toss. Due to this epistemic equivalence, Beauty cannot tell whether she is the clone or the original. Therefore, upon learning the coin landed on T, thus confirming the existence of a clone, both beauties must reason that they are equally likely to be the clone and the original. In other words, the rise in credence is due to the confirmed existence of a symmetrical counterpart, not due to the mere existence of someone in an imaginary reference class to choose from. Running an ancestor simulation only confirms the latter. Putting it bluntly: we know for sure we are not in the simulations we run, so no matter how many simulations we run, our credence of being in an ancestor simulation should not rise.
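To make the 1/4-to-1/2 shift concrete, here is the arithmetic (my own small check of the numbers quoted above):

```python
# Prior: P(T) = 1/2, and under T beauty is equally likely to be the
# clone or the original, so P(clone) = 1/2 * 1/2 = 1/4.
p_clone_prior = 0.5 * 0.5
# After learning the coin landed T, the clone certainly exists and the
# two are epistemically symmetric, so P(clone | T) = 1/2.
p_clone_given_T = 0.5
print(p_clone_prior, p_clone_given_T)  # 0.25 -> 0.5
```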

In fact, I would suggest that, following the logic of Bostrom's argument, we should reduce our credence of living in a simulated reality once we run an ancestor simulation. As stated in his paper, simulators might want to edit their simulations to conserve computational power. A simulated reality running its own subsequent levels of simulation would require an exponential amount of additional computational power. It is in the simulators' interest to edit their simulation so that it never reaches such an advanced state with high computational capabilities. This means a base-level reality is more likely to produce ancestor simulations than the simulated ones are. Therefore, once we run such ancestor simulations, or strongly believe we are going to do so, our credence of being simulated should decrease.

He proposed that the coin toss could happen after the first awakening. Beauty's answer ought to remain the same regardless of the timing of the toss. A simple calculation tells us his credence of H must be 2/3. As SSA dictates, this is also Beauty's answer. Now Beauty is predicting that a fair coin toss yet to happen will most likely land on H. This supernatural predicting power is conclusive evidence against SSA.

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.

Sometimes people absolutely will come to different conclusions. And I think you're part of the way there with the idea of letting people talk to see if they converge. But I think you'll get the right answer even more often if you set up specific thought-experiment processes, have the imaginary people in those thought experiments bet against each other, and say that the person (or group of people all with identical information) who made money on average (where "average" means over many re-runs of that specific thought experiment) had good probabilities, and the people who lost money had bad probabilities.

I don't think this is what probabilities mean, or that it's the most elegant way to find probabilities, but I think it's a pretty solid and non-confusing way. And there's a quite nice discussion article about it somewhere on this site that I can't find, sadly.

Thank you for the reply. I really appreciate it, since it reminds me that I have made a mistake in my argument. I didn't say SSA means reasoning as if an observer is randomly selected from all actually existent observers (past, present and future).

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.

I think Elga's argument is that Beauty's credence should not depend on the exact time of the coin toss. That seems reasonable to me, since the experiment can be carried out in exactly the same way whether the coin is tossed on Sunday or Monday night. According to SSA, Beauty should update her credence of H to 2/3 after learning it is Monday. If you think Beauty should give 1/2 when she finds out the coin is tossed on Monday night, then her answer would depend on the time of the coin toss, which to me seems a rather weak position.

Regarding a betting-odds argument: I have given a frequentist model in Part I which uses betting odds as part of the argument. In essence, Beauty's break-even odds are at 1/2 while the selector's are at 1/3, which agrees with their respective credences.

According to SSA, Beauty should update her credence of H to 2/3 after learning it is Monday.

I always forget what the acronyms are. But the probability of H is 1/2 after learning it's Monday, and any method that says otherwise is wrong, exactly by the argument that you can flip the coin on Monday right in front of SB, and if she knows it's Monday and thinks it's not a 50/50 flip, her probability assignment is bad.

Yes, that's why I think to this day Elga's counterargument is still the best.

exactly by the argument

I don't see any argument there.

To spell it out:

Beauty knows the limiting frequency (which, when known, is equal to the probability) of the coin flips that she sees right in front of her will be equal to one half. That is, if you repeat the experiment many times (plus a little noise to determine coin flips), then you get equal numbers of the event "Beauty sees a fair coin flip and it lands Heads" and the event "Beauty sees a fair coin flip and it lands Tails." Therefore Beauty assigns 50/50 odds to any coin flip she actually gets to see.
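A minimal simulation of this limiting-frequency claim (my sketch, assuming the Monday-night toss discussed in this thread, which Beauty is always awake to watch):

```python
import random

# Each run of the experiment, Beauty watches exactly one fair flip
# (Monday night). Count how often the flips she sees land Heads.
runs = 100_000
seen_heads = sum(random.random() < 0.5 for _ in range(runs))
print(seen_heads / runs)  # ~0.5: flips Beauty actually watches are 50/50
```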

You can make an analogous argument from symmetry of information rather than limiting frequency, but it's less accessible and I don't expect people to think of it on their own. Basically, the only reason to assign thirder probabilities is if you're treating states of the world given your information as the basic mutually-exclusive-and-exhaustive building blocks of probability assignment. And the states look like Mon+Heads, Mon+Tails, and Tues+Tails. If you eliminate one of the possibilities, then the remaining two are symmetrical.

If it seems paradoxical that, upon waking up, she thinks the Monday coin is more likely to have landed tails, just remember that half of the time that coin landed tails, it's Tuesday and she never gets to see the Monday coin being flipped - as soon as she actually expects to see it flipped, that's a new piece of information that causes her to update her probabilities.