
Comment author: Xianda_GAO 30 July 2017 04:05:15PM 0 points [-]

I will just post the relationship between perspective reasoning and the simulation argument here.

In 2003 Nick Bostrom published his paper "Are you living in a computer simulation?". In that paper he suggested that once a civilization reaches a highly developed state it would have enough computing power to run "ancestral simulations". Such simulations would be indistinguishable from actual reality for their occupants. Furthermore, because the potential number and levels of such simulated realities are huge, almost all observers with experiences similar to ours would be living in such simulations. Therefore either civilizations such as ours would never run ancestral simulations, or we are almost certainly living in a simulation right now. Perhaps one of the argument's most striking conclusions is that once we develop an ancestral simulation, or believe we eventually will develop one, we must conclude that we are simulated as well. This highly specific world-creation theory, while seeming very unlikely at first glance, must be deemed almost certain if we apply the probability reasoning described in the argument. I would argue that such probability reasoning is in fact mistaken.

The argument states that if almost all observers with experiences similar to ours are simulated, we should conclude we are almost certainly simulated. The core of this reasoning is the self-sampling assumption (SSA), which states that an observer should reason as if she is randomly selected from all observers. The main rival to SSA, used as a counterargument to one of its most (in)famous applications, the Doomsday Argument, is the self-indication assumption (SIA). SIA states that an observer should reason as if she is randomly selected from all potential observers. However, applying SIA to this idea yields even stronger confirmation that we are simulated. Whether or not we would be able to run an ancestral simulation is no longer relevant; the mere fact that we exist is evidence that our reality is simulated.

However, if we apply the same perspective reasoning used in the sleeping beauty problem, this argument falls apart. Perspective reasoning states that, because of the perspective disagreement between agents, an observer shouldn't reason as an imaginary third party who randomly selected the observer from a certain reference class. Picture a third party (a god) who randomly chooses a person from all realities: it is obvious the selected person is most likely simulated if the majority of observers are. Without this logic, however, an observer can no longer draw that conclusion. Therefore even after running an ancestral simulation our credence of being simulated would not instantly jump to near certainty.

The immediate objection to this would be: in the duplicating beauty problem, upon learning the coin landed on T, beauty's credence of being the clone rises from 1/4 to 1/2; why then does our credence of being simulated not rise accordingly once we run ancestral simulations? After all, the former case confirms the existence of a clone while the latter confirms the existence of many simulated realities. The distinction is that the clone and the original are in symmetrical positions, whereas our reality and the realities simulated by us are not. In the case of duplicating beauty, although they can have different experiences after waking up, the original and the clone have identical information about the same coin toss. Due to this epistemic equivalence beauty cannot tell whether she is the clone or the original. Therefore upon learning the coin landed on T, thus confirming the existence of a clone, both beauties must reason they are equally likely to be the clone and the original. In other words, the rise in credence is due to the confirmed existence of a symmetrical counterpart, not to the mere existence of someone in an imaginary reference class to choose from. Running an ancestral simulation only confirms the latter. Putting it bluntly: we know for sure we are not in the simulations we run, so no matter how many simulations we run our credence of being in an ancestral simulation should not rise.

In fact I would suggest that, following the logic of Bostrom's argument, we should reduce our credence of living in a simulated reality once we run an ancestral simulation. As stated in his paper, simulators might want to edit their simulations to conserve computational power. A simulated reality running its own subsequent levels of simulations would require exponentially more computational power. It is in the simulators' interest to edit their simulation so it never reaches such an advanced state with high computational capabilities. This means a base-level reality is more likely to produce ancestral simulations than the simulated ones are. Therefore once we run such ancestral simulations, or strongly believe we are going to do so, our credence of being simulated should decrease.

Comment author: Manfred 30 July 2017 04:09:10AM *  0 points [-]

According to SSA beauty should update credence of H to 2/3 after learning it is Monday.

I always forget what the acronyms are. But the probability of H is 1/2 after learning it's Monday, and any method that says otherwise is wrong, exactly by the argument that you can flip the coin on Monday right in front of SB: if she knows it's Monday and thinks it's not a 50/50 flip, her probability assignment is bad.

Comment author: Xianda_GAO 30 July 2017 04:04:40PM 0 points [-]

Yes, that's why I think to this day Elga's counterargument is still the best.

Comment author: Manfred 28 July 2017 07:46:47PM *  0 points [-]

Sorry for the slow reply.

The 8 rooms are definitely an unbiased sample (of your rooms with the one red room subtracted).

I think you are making two mistakes:

First, I think you're too focused on the nice properties of an unbiased sample. You can take an unbiased sample all you want, but if we know information in addition to the sample, our best estimate might not be the average of the sample! Suppose we have two urns, urn A has 10 red balls and 10 blue balls, while urn B has 5 red balls and 15 blue balls. We choose an urn by rolling a die, such that we have a 5/6 chance of choosing urn A and a 1/6 chance of choosing urn B. Then we take a fair, unbiased sample of 4 balls from whatever urn we chose. Suppose we draw out 1 red ball and 3 blue balls. Since this is an unbiased sample, does the process that you are calling "statistical analysis" have to estimate that we were drawing from urn B?
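Here is a minimal sketch of that calculation (all the numbers are the ones given above; the code just applies Bayes' theorem with hypergeometric likelihoods):

```python
from math import comb

urns = {"A": (10, 10), "B": (5, 15)}   # (red, blue) balls in each urn
prior = {"A": 5/6, "B": 1/6}           # die roll: 5/6 chance of urn A

def likelihood(red, blue):
    # probability of drawing exactly 1 red and 3 blue in a fair 4-ball sample
    return comb(red, 1) * comb(blue, 3) / comb(red + blue, 4)

unnorm = {u: prior[u] * likelihood(*urns[u]) for u in urns}
z = sum(unnorm.values())
print({u: round(w / z, 3) for u, w in unnorm.items()})
# ~{'A': 0.725, 'B': 0.275}: urn A stays more likely despite the B-like sample
```

So even though the sample is unbiased and its proportions match urn B, the posterior still favors urn A.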

Second, you are trying too hard to make everything about the rooms. It's like someone was doing the problem with two urns from the previous paragraph, but tried to mathematically arrive at the answer only as a function of the number of red balls drawn, without making any reference to the process that causes them to draw from urn A vs. urn B. And they come up with several different ideas about what the function could be, and they call those functions "the Two-Thirds-B-er method" and "the Four-Tenths-B-er method." When really, both methods are incomplete because they fail to take into account what we know about how we picked the urn to draw from.

To answer the last part of your statement: if beauty randomly opens 8 doors and finds them all red, then she has a sample of pure red. By simple statistics she should give R=81 as the estimate. Halfers and thirders would both agree on that. If they do a Bayesian analysis, R=81 would also be the case with the highest probability. I'm not sure where 75 comes from; I'm assuming it is from summing the products of the probabilities and the R values in the Bayesian analysis? But that value does not correspond to the estimate in statistics. Imagine you randomly draw 20 beans from a bag and they are all red; using statistics you are obviously not going to estimate that the bag contains 90% red beans.

Think of it like this: if Beauty opens 8 doors and they're all red, and then she goes to open a ninth door, how likely should she think it is to be red? 100%, or something smaller than 100%? For predictions, we use the average of a probability distribution, not just its highest point.
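For what it's worth, here is a sketch of where a number like 75 can come from, assuming the setup discussed elsewhere in the thread (81 rooms, R of them red, a uniform prior over R, beauty wakes in a red room so that thirder-style weighting makes P(R) proportional to R, and then 8 of the other 80 rooms all turn out red):

```python
from fractions import Fraction
from math import comb

posterior = {}
for r in range(9, 82):                            # need at least 9 red rooms
    prior = Fraction(r, 81)                       # thirder weighting: P(R) ∝ R
    like = Fraction(comb(r - 1, 8), comb(80, 8))  # 8 more reds out of 80
    posterior[r] = prior * like

z = sum(posterior.values())
mode = max(posterior, key=posterior.get)
mean = float(sum(r * w for r, w in posterior.items()) / z)
print(mode, mean)   # mode is 81, mean is about 74.5
```

The mode of this posterior is indeed 81, but its mean, which is what you'd use for prediction, sits near 75.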

Comment author: Xianda_GAO 29 July 2017 03:30:04PM *  0 points [-]

No problem, always good to have a discussion with someone serious about the subject matter.

First of all, you are right: statistical estimation and the expected value in a Bayesian analysis are different. But that is not what I'm saying. What I'm saying is that in a Bayesian analysis with an uninformative (uniform) prior, the case with the highest probability should match the unbiased statistical estimate (it is not always exactly so because of round-offs etc.).

In the two-urn example, I think what you meant is that, judging from the sample of 4 balls, a fair estimate would be 5 reds and 15 blues as in urn B, but a Bayesian analysis would give A as more likely? However, this disagreement is due to the use of an informed prior: you already know from the beginning that we are more likely to be drawing from A. Without knowing this, a Bayesian would give B as the most likely case, the same as the statistical estimate.

Think of it like this: if Beauty opens 8 doors and they're all red, and then she goes to open a ninth door, how likely should she think it is to be red? 100%, or something smaller than 100%? For predictions, we use the average of a probability distribution, not just its highest point.

Definitely something smaller than 100%. Just because beauty thinks R=81 is the most likely case doesn't mean she thinks it is the only case. But that is not what the estimation is about. Maybe this question would be more relevant: if after opening 8 doors they are all red and beauty has to guess R, what number should she guess (to be most likely correct)?

Comment author: Manfred 28 July 2017 10:19:23PM *  0 points [-]

He proposes the coin toss could happen after the first awakening. Beauty's answer ought to remain the same regardless of the timing of the toss. A simple calculation tells us his credence of H must be 2/3. As SSA dictates this is also beauty's answer. Now beauty is predicting a fair coin toss yet to happen would most likely land on H. This supernatural predicting power is conclusive evidence against SSA.

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then now Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.

Sometimes people absolutely will come to different conclusions. And I think you're part of the way there with the idea of letting people talk to see if they converge. But I think you'll get the right answer even more often if you set up specific thought-experiment processes, and then have the imaginary people in those thought experiments bet against each other, and say that the person (or group of people all with identical information) who made money on average (where "average" means over many re-runs of this specific thought experiment) had good probabilities, and the people who lost money had bad probabilities.
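As a minimal sketch of that betting test applied to the standard Sleeping Beauty setup (one awakening on Heads, two on Tails; at every awakening Beauty pays some price for a ticket worth 1 if the coin was Heads):

```python
import random

def average_profit(price, runs=10**6):
    """Beauty's mean profit per run when she bets at every awakening."""
    total = 0.0
    for _ in range(runs):
        heads = random.random() < 0.5
        awakenings = 1 if heads else 2          # Tails means two awakenings
        payoff = 1.0 if heads else 0.0          # ticket pays out only on Heads
        total += awakenings * (payoff - price)  # one ticket per awakening
    return total / runs

print(average_profit(1/3))   # ~0: per-awakening bets break even at price 1/3
print(average_profit(1/2))   # ~-0.25: at price 1/2 she loses on average
```

Of course, which betting scheme (per awakening, per run, etc.) corresponds to "her credence" is exactly the point halfers and thirders dispute.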

I don't think this is what probabilities mean, or that it's the most elegant way to find probabilities, but I think it's a pretty solid and non-confusing way. And there's a quite nice discussion article about it somewhere on this site that I can't find, sadly.

Comment author: Xianda_GAO 29 July 2017 02:39:05PM 0 points [-]

Thank you for the reply. I really appreciate it since it reminds me that I have made a mistake in my argument. I didn't say SSA means reasoning as if an observer is randomly selected from all actually existent observers (past, present and *future*).

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then now Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.

I think Elga's argument is that beauty's credence should not depend on the exact time of the coin toss. That seems reasonable to me, since the experiment can be carried out in exactly the same way whether the coin is tossed on Sunday or Monday night. According to SSA beauty should update credence of H to 2/3 after learning it is Monday. If you think beauty should give 1/2 when she finds out the coin is tossed on Monday night, then her answer would depend on the time of the coin toss, which to me seems a rather weak position.

Regarding a betting-odds argument: I have given a frequentist model in part I which uses betting odds as part of the argument. In essence, beauty's break-even odds are at 1/2 while the selector's are at 1/3, which agrees with their respective credences.

Comment author: simon 27 July 2017 09:48:55AM *  0 points [-]

Thanks for the kind words.

However, I don't agree. The additional 8 rooms are an unbiased sample of the remaining 80 rooms for beauty. The additional 8 rooms are only an unbiased sample of the full set of 81 rooms for beauty if the first room is also an unbiased sample (but I would not consider it a sample but part of the prior).

Actually I found a better argument against your original anti-thirder argument, regardless of where the prior/posterior line is drawn:

Imagine that the selector happened to encounter a red room first, before checking out the other 8 rooms. At this point in time, the selector's state of knowledge about the rooms, regardless of what you consider prior and what posterior, is in the same position as beauty's after she wakes up (from the thirder perspective, which I generally agree with in this case). Then they both sample 8 more rooms. The selector considers this an unbiased sample of the remaining 80 rooms. After both have taken this additional sample of 8, they again agree. Since they still agree, beauty must also consider the 8 rooms to be an unbiased sample of the remaining 80 rooms. Beauty's reasoning and the selector's are the same regarding the additional 8 rooms, and Beauty has no more "supernatural predicting power" than the selector.

About only thirding getting the attention: my apologies for contributing to this asymmetry. For me, the issue is that I found the perspectivism posts at least initially hard to understand, and since subjectively I feel I already know the correct way to handle this sort of problem, that reduces my motivation to persevere and figure out what you are saying. I'll try to get around to carefully reading them and providing some response eventually (no time right now).

Comment author: Xianda_GAO 27 July 2017 04:58:15PM *  0 points [-]

Ok, I should have used my words more carefully. We meant the same thing. When I say beauty thinks the 8 rooms are an unbiased sample I meant what I listed as C: it is an unbiased sample for the other 80 rooms. So yes to what you said, sorry for the confusion. It is obvious because it is a simple random sample chosen from the 80 rooms. So on that part there is no disagreement. The disagreement between the two is about whether or not the 9 rooms are an unbiased sample. Beauty as a thirder should not think they are unbiased, but she bases her estimate on them anyway to answer the question from the selector's perspective. If she does not answer from the selector's perspective she would use the 8 rooms to estimate the reds in the other 80 rooms and then add her own room in, as halfers do.

Regarding the selector choosing a room and finding out it is red: again, they agree on whether or not the 8 rooms are unbiased; however, because the first room is always red for beauty but not for the selector, they see the 9 rooms differently. From beauty's perspective, dividing the 9 rooms into two parts gives her an unbiased sample (8 rooms) plus a red room. It is not so for the selector. We can list the three points from the selector's perspective and it poses no problem at all.

A: the 9 rooms are an unbiased sample for the 81 rooms

B: the first room is randomly selected from all rooms

C: the other 8 rooms are an unbiased sample for the other 80 rooms.

Alternatively we can divide the 9 rooms as follows:

A: the 9 rooms are an unbiased sample for the 81 rooms

B: the first red room he saw (if he saw one) is always red

C: the other 8 rooms in the sample are biased towards blue

Either way there is no problem. In terms of the predicting power, think of it this way: once the selector sees a red room he knows that if he ignores it and only considers the other 8 rooms, then that sample is biased towards blue; nothing supernatural. However, for beauty, if she thinks the 9 rooms are unbiased then the 8 rooms she chooses must be biased even though they are selected at random, hence the "supernatural". It is meant to point out that for beauty the 9 and the 8 rooms cannot both be unbiased at the same time. Since you already acknowledged the 9 rooms are biased (from her perspective at least), then yes, she does not have supernatural predicting power of course.

I guess the bottom line is that because they acquire their information differently, the selector and thirder beauty must disagree somewhere: either on the numerical value of the estimate, or on whether a sample is biased.

About the perspectivism posts. The concept is actually quite simple: each beauty only counts what she experienced/remembered. But I feel maybe I'm not doing a good job explaining it. Anyway, thank you for promising to check it out.

Comment author: simon 26 July 2017 01:36:34AM *  0 points [-]

Well argued; you've convinced me that most people would probably define what's prior and what's posterior the way you say. Nonetheless, I don't agree that what's prior and what's posterior should be defined that way. I see this sort of info as better thought of as a prior (precisely because waking up shouldn't be thought of as new info) [edit: clarification below]. The mere fact that the brain instantiating the mind having this info is physically continuous with an earlier-in-time brain instantiating a mind with different info is not, to me, sufficient reason to stop treating it as a prior.

Some clarification on my actual beliefs here: I'm not a conventional thirder believing in the conventional SIA. I prefer, let's call it, "instrumental epistemic rationality". I weight observers, not necessarily equally, but according to how much I care about the accuracy of the relevant beliefs of that potential observer. If I care equally about the beliefs of the different potential observers, then this reduces to SIA. But there are many circumstances where one would not care equally, e.g. one is in a simulation and another is not, or one is a Boltzmann brain and another is not.

Now, I generally think that thirdism is correct, because I think that, given the problem definition, for most purposes it's more reasonable to value the correctness of the observers equally in a sleeping beauty type problem. E.g. if Omega is going to bet with each observer, and beauty's future self collects the sum of the earnings of both observers in the case there are two of them, then 1/3 is correct. But if e.g. the first instance of the two observer case is valued at zero, or if for some bizarre reason you care equally about the average of the correctness of the observers in each universe regardless of differences in numbers, then 1/2 is correct.

Now, I'll deal with your last paragraph from my perspective. The first room isn't a sample; it's guaranteed red. If you do regard it as a sample, it's biased in the red direction (maximally) and so should have zero weight. The prior is that the probability of R is proportional to R. The other 8 rooms are an unbiased sample of the remaining rooms. The set of 9 rooms is a biased sample (biased in the red direction) such that it provides the same information as the set of 8 rooms. So use the red-biased prior and the unbiased (out of the remaining rooms after the first room is removed) 8-room sample to get the posterior estimate. This will result in the same answer the selector gets, because you can imagine the selector found a red room first and then break down the selector's information into that first sample and a second unbiased sample of 8 of the remaining rooms.
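A quick sketch of that equivalence, under the assumptions above (prior proportional to R, all 8 sampled rooms turning out red):

```python
from fractions import Fraction
from math import comb

beauty, selector = {}, {}
for r in range(9, 82):
    # beauty: prior ∝ R, then 8 reds sampled from the remaining 80 rooms
    beauty[r] = Fraction(r) * Fraction(comb(r - 1, 8), comb(80, 8))
    # selector: 9 random rooms out of 81, all of them red
    selector[r] = Fraction(comb(r, 9), comb(81, 9))

def normalize(d):
    z = sum(d.values())
    return {r: w / z for r, w in d.items()}

print(normalize(beauty) == normalize(selector))   # True: identical posteriors
```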

Edit: I didn't explain my concept of prior v. posterior clearly. To me, it's conceptual, not time-based, in nature. For a set problem like this, what someone knows from the problem definition, from the point of view of their position in the problem, is the prior. What they then observe leads to the posterior. Here, waking sleeping beauty learns nothing on waking up that she does not know from the problem definition, given that she is waking up in the problem. So her beliefs at this point are the prior. Of course, her beliefs are different from sleeping beauty's before she went to sleep, due to the new info. That new info told her she is within the problem, when she wasn't before, so she updated her beliefs to new beliefs, which would be a posterior belief outside the context of the problem but within the context of the problem constitute her prior.

Comment author: Xianda_GAO 26 July 2017 06:49:18PM 1 point [-]

Very clear argument and many good points. Appreciate the effort.

Regarding your position on thirders vs halfers, I think it is a completely reasonable position and I agree with the analysis about when halfers are correct and when thirders are correct. However, to me it seems to treat Sleeping Beauty more as a decision-making problem than a probability problem. Maybe one's credence, considered apart from any consequences, is not defined. But that seems counterintuitive to me. Naturally one should have a belief about the situation, and her decisions should depend on it as well as on her objective (how much beauty cares about other copies) and the payoff structure (whether the money reward depends only on her own answer, or on all correct answers, or on the accuracy rate, etc.). If that's the case, there should exist a unique correct answer to the problem.

About how beauty should estimate R and treat the samples, I would say that's the best position for a thirder to take. In fact that's the same position I would take too. If I may reword it slightly, see if you agree with this version: the 8 rooms are an unbiased sample for beauty; that is too obvious to argue otherwise. Her own room is always red, so the 9 rooms are obviously biased for her. However, from (an imaginary) selector's perspective, if he finds the same 9 rooms they are an unbiased sample. Thirders think she should answer from the selector's perspective (I think the most likely reason being that the repeated memory wipes make her perspective somewhat "compromised"), therefore she would estimate R to be 27. Is this version something you would agree with?

In this version I highlighted the disagreement between the selector and beauty: the disagreement is not over some numerical value; they disagree on whether a sample is biased. In my 4 posts all I'm trying to do is argue for the validity and importance of perspective disagreement. If we recognize the existence of this disagreement and let each agent answer from her own perspective, we get another system of reasoning different from SIA or SSA. It provides an argument for double halving, gives a framework where frequentists and Bayesians agree with each other, rejects the Doomsday Argument, disagrees with the Presumptuous Philosopher, and rejects the Simulation Argument. I genuinely think this is the explanation of the Sleeping Beauty Problem as well as of many problems related to anthropic reasoning. Sadly only the part arguing against thirding gets some attention.

Anyway, I digressed. The bottom line is: though I do not think it is the best position, I feel your argument is reasonable and well thought out. I can understand it if people want to take it as their position.

Comment author: cousin_it 26 July 2017 08:42:42AM 0 points [-]

Interesting. I guess the right question is, if you insist on a frequentist argument, how simple can you make it? Like I said, I don't expect things like unbiased estimates to behave intuitively. Can you make the argument about long run frequencies only? That would go a long way in convincing me that you found a genuine contradiction.

Comment author: Xianda_GAO 26 July 2017 04:52:26PM 0 points [-]

Yes, I have given a long-run frequency argument for halving in part I. Sadly that part has not gotten any attention. My entire argument is about the importance of perspective disagreement in SBP. This counterargument is actually the less important part.

Comment author: cousin_it 25 July 2017 02:31:28PM *  0 points [-]

The 0 isn't a prediction of the next coin toss, it's an unbiased estimate of the coin parameter which is guaranteed to lie between 1/3 and 2/3. That's the problem! Depending on the randomness in the sample, an unbiased estimate of unknown parameter X could be smaller or larger than literally all possible values of X. Since in the post you use unbiased estimates and expect them to behave reasonably, I thought this example would be relevant.

Hopefully that makes it clearer why Bayesians wouldn't agree that frequentism+halfism is coherent. They think frequentism is incoherent enough on its own :-)

Comment author: Xianda_GAO 26 July 2017 01:25:43AM 0 points [-]

OK, I misunderstood. I interpreted it as: the coin is biased, with parameter either 1/3 or 2/3, but we don't know which side it favours. If we start from a uniform prior (1/2 on each possibility), then after seeing a tails the maximum-likelihood hypothesis is the Tails-favouring coin.

Unless I misunderstood again, you mean there is a coin whose natural chance (forgive me if I'm misusing terms here) we want to guess, and we do know its chance is bounded between 1/3 and 2/3. In this case, yes, the statistical estimate is 0 while the maximum likelihood is 1/3. However, that is obviously due to the use of an informed prior (we know the parameter is between 1/3 and 2/3). Hardly a surprise.

Also, I want to point out that in your previous example you said frequentism+SIA never had any strong defenders. That is not true. To this day, in the literature, thirding is generally considered a better fit for frequentism than halving, because the long-run frequency of Tail awakenings is twice that of Head awakenings. Such arguments are used by published academics, including Elga. Therefore I would consider my attack from the frequentist angle to have some value.

Sleeping Beauty Problem Can Be Explained by Perspective Disagreement (IV)

1 Xianda_GAO 26 July 2017 01:01AM

This is the final part of my argument. Previous parts can be found here: I, II, III. To understand the following, at least part I should be read. Here I will argue against SSA, argue why double-halving is correct, and touch on the implications of perspective disagreement for related topics such as the Doomsday Argument, the Presumptuous Philosopher and the Simulation Argument.

 

ARGUMENTS AGAINST SELF-SAMPLING ASSUMPTION

I think the most convincing argument against SSA was presented by Elga in 2000 (although he intended it as a counter to halving in general). He proposes that the coin toss could happen after the first awakening. Beauty's answer ought to remain the same regardless of the timing of the toss. As SSA states, an observer should reason as if she is randomly selected from the set of all actual observers (past, present and future ones). If an imaginary selector randomly chooses a day among all waking day(s), he is guaranteed to pick Monday if the coin landed on H but only has half the chance if T. From the selector's perspective, clearly a Bayesian update should be performed upon learning it is Monday. A simple calculation tells us his credence of H must be 2/3. As SSA dictates, this is also beauty's answer once she knows it is Monday. However, the coin toss could potentially happen after this awakening. Now beauty is predicting that a fair coin toss yet to happen would most likely land on H. This supernatural predicting power is conclusive evidence against SSA.
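The "simple calculation" is just Bayes' theorem; a one-line sketch (assuming the halfer prior P(H) = 1/2 at awakening that SSA prescribes):

```python
# P(Monday | H) = 1 (the only awakening); P(Monday | T) = 1/2
p_h, p_monday_h, p_monday_t = 0.5, 1.0, 0.5
print(p_h * p_monday_h / (p_h * p_monday_h + (1 - p_h) * p_monday_t))  # 2/3
```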


However, if we recognize the importance of perspective disagreement, then beauty is not bound to give the same answer as the selector. In fact I would argue she should not perform a Bayesian update based on the new information. This can be explained in two ways.

 

One way is to put the new information into the frequentist approach mentioned in part I. In Duplicating Beauties, when a beauty wakes up remembering 1000 repetitions she should reason that there are about 500 each of H and T among those 1000 tosses. The same conclusion would be reached by every beauty, without knowing whether she is physically the original or was created somewhere along the way. Now suppose a beauty learns she is indeed the original. She would simply reason as the original beauty who wakes up remembering 1000 tosses. Those 1000 tosses would still contain about 500 each of H and T, meaning her answer should remain at 1/2.

 

Another way to see why beauty should not perform a Bayesian update is to look at the agreement/disagreement pattern between her and the selector. It is worth noting that beauty and the selector will be in agreement once it is known that she is the original. As stated in part I, one way to understand the disagreement is that after T, seeing either beauty is the same observation for the selector while the two are different observations for the beauties. This in turn causes the selector to enter twice as many bets as beauty. However, once we distinguish the two beauties by stating which one is the original, the selector's observation also differs depending on which beauty he sees. To put it differently, if a bet is only set between the selector and the original beauty, then the selector would no longer be twice as likely to enter a bet in case of T. He and beauty would enter the bet with equal chances, meaning their betting odds ought to be the same, i.e. they must be in agreement regarding the credence of H.

To be specific, the disagreement/agreement pattern can be summarized as follows. Suppose the selector randomly chooses one of the two rooms, as described by SIA. Upon seeing a beauty in the room, the selector's probability for H changes from 1/2 to 1/3; beauty's probability remains at 1/2. The two of them are in disagreement. Once they learn the beauty is the original, the selector's probability of H increases back from 1/3 to 1/2 by simple Bayesian updating, while beauty's probability still remains at 1/2. Now the two are in agreement. Alternatively, the selector can randomly choose one beauty from all existing beauty(ies), as described by SSA (here the total number of beauties should be shielded from the selector so as not to reveal the coin toss result). In this case seeing a beauty gives the selector no new information, so his probability for H remains unchanged at 1/2. On the other hand, from beauty's perspective she is twice as likely to be chosen if there exists only one beauty instead of two. Therefore upon seeing the selector her credence of H increases to 2/3. The two of them are again in disagreement. Once they learn the beauty is the original, the selector's credence for H increases to 2/3 by Bayesian updating; again beauty does not update her probability and it remains at 2/3. This way the two agree with each other once more.
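The selector's side of this pattern is easy to check by simulation. A sketch (only the selector's frequencies are simulated; beauty's non-update is the perspective claim argued in the text, not something a third-party count can settle):

```python
import random

def selector_frequencies(mode, trials=10**6):
    # Room 1 always holds the original; room 2 holds the clone only on Tails.
    seen = seen_h = orig = orig_h = 0
    for _ in range(trials):
        heads = random.random() < 0.5
        if mode == "SIA":                       # pick one of the two rooms
            room = random.choice([1, 2])
            found = (room == 1) or (not heads)  # room 2 is empty on Heads
        else:                                   # "SSA": pick an existing beauty
            room = 1 if heads else random.choice([1, 2])
            found = True
        if found:
            seen += 1; seen_h += heads
            if room == 1:                       # the found beauty is the original
                orig += 1; orig_h += heads
    return seen_h / seen, orig_h / orig

print(selector_frequencies("SIA"))   # ~(0.333, 0.5): 1/3, then back to 1/2
print(selector_frequencies("SSA"))   # ~(0.5, 0.667): 1/2, then up to 2/3
```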

As shown above, for the two to reach an agreement beauty must not perform a Bayesian update upon the new information. This holds true in both cases, regardless of how the selection is structured.

Beauty's anti-Bayesianism, just like the perspective disagreement, is quite unusual. I think this is due to the fact that there is no random event determining which beauty is the original/clone. While the coin toss may create a new copy of beauty, nothing could ever turn her into the copy. The original beauty will always be the original beauty; it is simply a tautology. There is no random soul jumping between the two bodies. Beauty's uncertainty arises from the structure of the experiment and is purely due to lack of information. Compare this to the selector's situation. The event of him choosing a room is random. Therefore learning that the beauty in the chosen room is the original gives new information about a random event. From his perspective a Bayesian update should be performed. Whereas from beauty's perspective, learning she is the original gives no new information about a random event, for the simple fact that there is no random event to begin with. It only gives information about her own perspective. So she should not perform a Bayesian update as the selector does.

DISCUSSIONS

With the above arguments in mind we can clearly see the importance of perspective disagreement in SBP. It does not matter whether the selector follows SSA or SIA; his answer won't always correctly reflect beauty's. Once beauty switches to the selector's perspective her answers change. Therefore it is important for us to consciously track our reasoning process to make sure it contains no change of perspective. As shown above, learning she exists does not confirm scenarios with more observers, just as learning she is the first does not confirm scenarios with fewer observers. Applying the same logic means we should reject the Doomsday Argument and disagree with the Presumptuous Philosopher. Not changing perspective also means an observer should never reason as if she is randomly selected (by an imaginary third party) from a certain reference class. This also means the Simulation Argument should be rejected (which I will discuss separately).

Another point worth mentioning is that disagreements among beauties are also reasonable. Imagine a beauty has undergone a duplication coin toss and was told the result was T after waking up. She then goes through another 9 iterations and wakes up remembering 10 tosses in total. At this point she is told that, of the two beauties resulting from the first toss, one experienced all Heads in the next 9 rounds while the other and her clones experienced all Tails in those 9 rounds. Because there is no new information regarding which beauty she was after the first toss, beauty would conclude with 1/2 confidence that she is the one who experienced 9 Heads, even though there are 512 Tail beauties and only 1 Head beauty. We can put all those beauties in the same room and let them communicate freely; since they have the same information, none of them would change her answer. Now we have them disagreeing with each other. This disagreement might seem alarming; however, it is also valid. As discussed above, in problems involving duplications people with the same information can have different probabilities. The reason this disagreement seems more suspicious is that the resulting beauties appear to be in symmetrical positions and thus should not disagree with each other. However, this symmetry is only valid from a selector's perspective. From each beauty's own perspective she is more likely to be the Head beauty than any particular Tail beauty. A similar problem related to this disagreement was discussed by John Pittard (2013).

Comment author: cousin_it 25 July 2017 08:52:29AM *  0 points [-]

Mathematically, maximum likelihood and unbiased estimates are well defined, but Bayesians don't expect them to always agree with intuition.

For example, imagine you have a coin whose parameter is known to be between 1/3 and 2/3. After seeing one tails, an unbiased estimate of the coin's parameter is 0 (lower than all possible parameter values) and the maximum likelihood estimate is 1/3 (jumping to extremes after seeing a tiny bit of information). Bayesian expected values don't have such problems.
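A small sketch of this point (with a single flip, the only estimator satisfying E[estimate] = p for every p is the one that reports 1 on heads and 0 on tails; anything clipped into [1/3, 2/3] is biased):

```python
import random

def mean_estimate(estimator, p, trials=10**6):
    # average value of the estimator over many single-flip experiments
    return sum(estimator(random.random() < p) for _ in range(trials)) / trials

unbiased = lambda heads: 1.0 if heads else 0.0   # the only unbiased choice
clipped = lambda heads: 2/3 if heads else 1/3    # restricted to [1/3, 2/3]

for p in (1/3, 1/2, 2/3):
    print(p, mean_estimate(unbiased, p), mean_estimate(clipped, p))
# the unbiased column tracks p exactly; the clipped column is pulled toward 1/2
```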

You can stop kicking the sand castle of frequentism+SIA, it never had strong defenders anyway. Bayes+SIA is the strong inconvenient position you should engage with.

Comment author: Xianda_GAO 25 July 2017 02:03:24PM *  0 points [-]

Maximum likelihood indeed gives 0, i.e. Tails, assuming we start from a uniform prior; 1/3 is the expected value. Ask yourself this: after seeing a tails, what should you guess for the next toss result to have the maximum likelihood of being correct?

If halfer reasoning works in both Bayesian and frequentist frameworks while SIA is only good in the Bayesian one, isn't that quite alarming, to say the least?
