This is the final part of my argument. Previous parts can be found here: I, II, III. To understand what follows, part I should be read at least. Here I will argue against SSA, explain why double-halving is correct, and touch on the implications of perspective disagreement for related topics such as the Doomsday Argument, the Presumptuous Philosopher and the Simulation Argument.
ARGUMENTS AGAINST SELF-SAMPLING ASSUMPTION
I think the most convincing argument against SSA was presented by Elga in 2000 (although he intended it as a counter to halving in general). He proposed that the coin toss could happen after the first awakening. Beauty's answer ought to remain the same regardless of the timing of the toss. As SSA states, an observer should reason as if she is randomly selected from the set of all actual observers (past, present and future ones). If an imaginary selector randomly chooses a day among all waking day(s), he is guaranteed to pick Monday if the coin landed on H but has only half the chance if T. From the selector's perspective, a Bayesian update should clearly be performed upon learning it is Monday. A simple calculation tells us his credence of H must be 2/3. As SSA dictates, this is also Beauty's answer once she knows it is Monday. However, the coin toss could potentially happen after this awakening. Now Beauty is predicting that a fair coin toss yet to happen will most likely land on H. This supernatural predictive power is conclusive evidence against SSA.
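(For concreteness, the selector's 2/3 follows from P(Monday | H) = 1 versus P(Monday | T) = 1/2. Below is a minimal Monte Carlo sketch of his reasoning, with function names of my own choosing.)

```python
import random

def selector_credence_given_monday(trials=200_000):
    # H: Beauty wakes only on Monday. T: she wakes on Monday and Tuesday.
    # The selector picks one waking day at random; we record how often the
    # coin was H among the runs where the picked day happens to be Monday.
    heads_count, monday_count = 0, 0
    for _ in range(trials):
        heads = random.random() < 0.5
        waking_days = ["Mon"] if heads else ["Mon", "Tue"]
        if random.choice(waking_days) == "Mon":
            monday_count += 1
            heads_count += heads
    return heads_count / monday_count

print(selector_credence_given_monday())  # ~0.667, i.e. the selector's 2/3
```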
However, if we recognize the importance of perspective disagreement, then Beauty is not bound to give the same answer as the selector. In fact I would argue she should not perform a Bayesian update based on the new information. This can be explained in two ways.
One way is to put the new information into the frequentist approach mentioned in part I. In Duplicating Beauties, when a Beauty wakes up remembering 1000 repetitions, she should reason that there are about 500 H and 500 T among those 1000 tosses. Every Beauty reaches the same conclusion without knowing whether she is physically the original or a copy created somewhere along the way. Now suppose a Beauty learns she is indeed the original. She would simply reason as the original Beauty who wakes up remembering 1000 tosses. Those 1000 tosses would still contain about 500 H and 500 T, meaning her answer should remain at 1/2.
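(A minimal sketch of this frequentist point, assuming we only need to track the original Beauty's own branch: since she is never turned into a copy, her remembered tosses are just the actual fair tosses along that branch.)

```python
import random

def original_memory_heads_fraction(repetitions=1000, trials=2000):
    # The original Beauty's remembered tosses are the fair tosses along her own
    # branch, whether or not she has learned that she is the original.
    total_fraction = 0.0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(repetitions))
        total_fraction += heads / repetitions
    return total_fraction / trials

print(original_memory_heads_fraction())  # ~0.5: about 500 H among her 1000 remembered tosses
```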
Another way to see why Beauty should not perform a Bayesian update is to examine the agreement/disagreement pattern between her and the selector. It is worth noting that Beauty and the selector will be in agreement once it is known that she is the original. As stated in part I, one way to understand the disagreement is this: after T, seeing either Beauty is the same observation for the selector, while being seen is a different observation for each Beauty. This in turn causes the selector to enter twice as many bets as Beauty. However, once we distinguish the two Beauties by stating which one is the original, the selector's observation also becomes different depending on which Beauty he sees. To put it differently, if a bet is only set between the selector and the original Beauty, then the selector is no longer twice as likely to enter a bet in the case of T. He and Beauty would enter the bet with equal chances. This means their betting odds ought to be the same, i.e. they must be in agreement regarding the credence of H.
To be specific, the disagreement/agreement pattern can be summarized as follows. Suppose the selector randomly chooses one of the two rooms, as described by SIA. Upon seeing a Beauty in the room, the selector's probability for H changes from 1/2 to 1/3, while Beauty's probability remains at 1/2. The two of them are in disagreement. Once they learn the Beauty is the original, the selector's probability of H increases back from 1/3 to 1/2 by simple Bayesian updating, while Beauty's probability still remains at 1/2. Now the two are in agreement. Alternatively, the selector can randomly choose one Beauty from all existing Beauty(ies), as described by SSA (here the total number of Beauties must be shielded from the selector so as not to reveal the coin toss result). In this case seeing a Beauty gives the selector no new information, so his probability for H remains unchanged at 1/2. From Beauty's perspective, on the other hand, she is twice as likely to be chosen if there exists only one Beauty instead of two. Therefore upon seeing the selector her credence of H increases to 2/3. The two of them are again in disagreement. Once they learn the Beauty is the original, the selector's credence for H increases to 2/3 by Bayesian updating, while Beauty once again does not update her probability and it remains at 2/3. This way the two agree with each other again.
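(These numbers can be checked with a rough simulation. It is only a sketch under my own assumptions about the setup: under H only the original's room is occupied, and under T the clone occupies the second room.)

```python
import random

def selection_patterns(trials=200_000):
    counts = {"SIA selector sees a Beauty": [0, 0],
              "SIA selector sees the original": [0, 0],
              "SSA selector meets a Beauty": [0, 0],
              "SSA selector meets the original": [0, 0]}

    def record(key, heads):
        counts[key][0] += heads
        counts[key][1] += 1

    for _ in range(trials):
        heads = random.random() < 0.5
        beauties = ["original"] if heads else ["original", "clone"]

        # SIA-style selection: open one of the two rooms at random
        # (the second room is empty under H).
        room = random.randint(0, 1)
        if room < len(beauties):
            record("SIA selector sees a Beauty", heads)
            if beauties[room] == "original":
                record("SIA selector sees the original", heads)

        # SSA-style selection: pick one of the existing Beauties at random.
        record("SSA selector meets a Beauty", heads)
        if random.choice(beauties) == "original":
            record("SSA selector meets the original", heads)

    for key, (h, n) in counts.items():
        print(f"P(H | {key}) ~ {h / n:.3f}")

selection_patterns()
# Approximate output: 1/3, 1/2, 1/2 and 2/3 respectively, matching the pattern above.
```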
As shown above, for the two to reach an agreement, Beauty must not perform a Bayesian update on the new information. This holds true in both cases, regardless of how the selection is structured.
Beauty's anti-Bayesianism, just like the perspective disagreement, is quite unusual. I think this is due to the fact that there is no random event determining which Beauty is the original and which is the clone. While the coin toss may create a new copy of Beauty, nothing could ever turn her into the copy. The original Beauty will always be the original Beauty; that is simply a tautology. There is no random soul jumping between the two bodies. Beauty's uncertainty comes from the structure of the experiment and is purely a lack of information. Compare this to the selector's situation. The event of him choosing a room is random. Therefore learning that the Beauty in the chosen room is the original gives new information about that random event, and from his perspective a Bayesian update should be performed. Whereas from Beauty's perspective, learning she is the original gives no new information about a random event, for the simple fact that there is no random event to begin with. It only gives information about her own perspective. So she should not perform a Bayesian update as the selector does.
So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then now Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.
Sometimes people absolutely will come to different conclusions. And I think you're part of the way there with the idea of letting people talk to see if they converge. But I think you'll get the right answer even more often if you set up specific thought-experiment processes, have the imaginary people in those thought experiments bet against each other, and say that the person (or group of people all with identical information) who made money on average (where "average" means over many re-runs of this specific thought experiment) had good probabilities, and the people who lost money had bad probabilities.
I don't think this is what probabilities mean, or that it's the most elegant way to find probabilities, but I think it's a pretty solid and non-confusing way. And there's a quite nice discussion article about it somewhere on this site that I can't find, sadly.
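(For what it's worth, here is a sketch of that kind of evaluation applied to the basic Sleeping Beauty setup. The betting scheme itself is an assumption, and the halfer/thirder dispute is largely about which scheme is the right operationalization, so both are shown.)

```python
import random

def average_profit_per_run(credence, per_awakening=True, runs=200_000):
    # The agent pays `credence` for a ticket that pays 1 if the coin landed H.
    # per_awakening=True settles one such bet at every awakening;
    # per_awakening=False settles a single bet per experiment.
    total = 0.0
    for _ in range(runs):
        heads = random.random() < 0.5
        bets = (1 if heads else 2) if per_awakening else 1
        total += bets * ((1.0 if heads else 0.0) - credence)
    return total / runs

for credence in (1 / 3, 1 / 2):
    print(credence,
          round(average_profit_per_run(credence, per_awakening=True), 3),
          round(average_profit_per_run(credence, per_awakening=False), 3))
# A credence of 1/3 roughly breaks even on per-awakening bets;
# a credence of 1/2 roughly breaks even on per-experiment bets.
```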
Thank you for the reply. I really appreciate it since it reminds me that I have made a mistake in my argument. I didn't say SSA means reasoning as if an observer is randomly selected from all actually existent observers (past, present and future).