Comment author: 18 September 2017 12:42:59PM *  1 point [-]

I think the opposite: the Doomsday argument (in one form of it) is an effective predictor in many common situations, and thus it could also be applied to the duration of human civilization. DA is not absurd: our expectations about the human future are absurd.

For example, I could predict the median human life expectancy based on my supposedly random age. My age is several decades, and human life expectancy is 2 × (several decades) with 50 percent probability (and this is true).
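This doubling rule can be checked with a quick simulation (my own illustration, not part of the comment). Assuming the moment of observation is uniformly random over a lifespan, the estimate 2 × age overshoots the true lifespan exactly half the time, whatever the lifespan distribution is:

```python
import random

random.seed(42)

trials = 100_000
overestimates = 0
for _ in range(trials):
    lifespan = random.uniform(40, 120)  # true lifespan; the distribution is arbitrary
    age = random.uniform(0, lifespan)   # a "randomly sampled" moment of that life
    if 2 * age >= lifespan:             # does doubling the age overshoot the truth?
        overestimates += 1

print(overestimates / trials)  # ≈ 0.5, as the comment claims
```

The point is that P(2·age ≥ lifespan) = P(age ≥ lifespan/2) = 1/2 whenever the age is uniform over the lifespan, independently of how lifespans themselves are distributed.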

Comment author: 18 September 2017 06:37:32PM 1 point [-]

The doomsday argument is controversial not because its conclusion is bleak but because it has some pretty hard-to-explain implications. For example, the choice of reference class is arbitrary yet affects the conclusion, and the argument yields some unreasonable predictive power and backward causation. Anyone trying to understand it will eventually have to reject the argument or find some way to reconcile with these implications. To me neither position is biased as long as it is sufficiently argued.

Comment author: 16 September 2017 08:03:28PM 1 point [-]

I think that most discussions of the Doomsday argument are biased in that the author tries to disprove it.

Also, it looks like all possible observers exist in the multiverse, so the mere fact of my existence is uninformative. However, I could ask whether some of my properties are random or not, and whether they could be used for some predictions.

For example, my birth month seems to be random. And if I know my birth month, but don't know how many months are in the year, I could estimate that the year has approximately 2 times as many months as the rank of my birth month. It works.

The problem appears when I apply the same logic to the future of human civilization, as I don't like the result.

Comment author: 17 September 2017 02:12:31AM 0 points [-]

The post specifically explained why your properties cannot be used for predictions in the context of the doomsday argument and the sleeping beauty problem. I would like to know your thoughts on that.

## Perspective Reasoning’s Counter to The Doomsday Argument

3 16 September 2017 07:39PM

To be honest I feel a bit frustrated that this is not getting much attention. I am obviously biased, but I think this article is quite important. It points out that the controversies surrounding the doomsday argument, the simulation argument, Boltzmann brains, the presumptuous philosopher, the sleeping beauty problem and many other aspects of anthropic reasoning are caused by the same thing: perspective inconsistency. If we keep the same perspective, the paradoxes and weird implications just go away. I am not an academic, so I have no easy channel for publication. That's why I am hoping this community can give some feedback. If you have half an hour to spare anyway, why not give it a read? There's no harm in it.

Abstract:

From a first-person perspective, a self-aware observer can inherently distinguish herself from other individuals. However, from a third-person perspective this identification through introspection does not apply. On the other hand, because an observer's own existence is a prerequisite for her reasoning, she will always conclude that she exists from a first-person perspective. This means an observer has to take a third-person perspective to meaningfully contemplate her chance of not coming into existence. Combining the above points suggests that arguments which utilize both identification through introspection and information about one's chance of existence fail by not keeping a consistent perspective. This helps explain questions such as the doomsday argument and the sleeping beauty problem. Furthermore, it highlights the problems with anthropic reasoning such as the self-sampling assumption and the self-indication assumption.

Any observer capable of introspection is able to recognize herself as a separate entity from the rest of the world. Therefore a person can inherently distinguish herself from other people. However, due to the first-person nature of introspection, it cannot be used to identify anybody else. This means that from a third-person perspective each individual has to be identified by other means. For ordinary problems this difference between first- and third-person reasoning bears no significance, so we can arbitrarily switch perspectives without affecting the conclusion. However, this is not always the case.

One notable difference between the perspectives concerns the possibility of not existing. Because one's existence is a prerequisite for her thinking, from a first-person perspective an observer will always conclude that she exists (cogito ergo sum). It is impossible to imagine what your experiences would be like if you didn't exist, because the idea is self-contradictory. Therefore, to envisage scenarios in which she does not come into existence, an observer must take a third-person perspective. Consequently, any information about her chances of coming into existence is only relevant from a third-person perspective.

Now with the above points in mind let’s consider the following problem as a model for the doomsday argument (taken from Katja Grace’s Anthropic Reasoning in the Great Filter):

God’s Coin Toss

Suppose God tosses a fair coin. If it lands on heads, he creates 10 people, each in their own room. If it lands on tails he creates 1000 people, each in their own room. The people cannot see or communicate with the other rooms. Now suppose you wake up in a room and are told of the setup. How should you reason about how the coin fell? Should your reasoning change if you discover that you are in one of the first ten rooms?

The correct answer to this question is still disputed to this day. One position is that upon waking up you have learned nothing. Therefore you can only be 50% sure the coin landed on heads. After learning you are one of the first ten people, you ought to update to being 99% sure the coin landed on heads, because you would certainly be one of the first ten people if the coin landed on heads but would only have a 1% chance of being among them if it landed on tails. This approach follows the self-sampling assumption (SSA).

This answer initially reasons from a first-person perspective. Since from a first-person perspective finding yourself exist is a guaranteed observation it offers no information. You can only say the coin landed with an even chance at awakening. The mistake happens when it updates the probability after learning you are one of the first ten persons. Belonging to a group which would always be created means your chance of existence is one. As discussed above this new information is only relevant to third-person reasoning. It cannot be used to update the probability from first-person perspective. From a first person perspective since you are in one of the first ten rooms and know nothing outside this room you have no evidence about the total number of people. This means you still have to reason the coin landed with even chances.

Another approach to the question is that you should be 99% sure the coin landed on tails upon waking up, since you have a much higher chance of being created if more people were created. And once you learn you are in one of the first ten rooms, you should only be 50% sure the coin landed on heads. This approach follows the self-indication assumption (SIA).
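The two updates above are a small Bayesian calculation. The sketch below is only the standard SSA and SIA arithmetic for the problem as stated (fair coin, 10 vs. 1000 people), written out explicitly:

```python
from fractions import Fraction

half = Fraction(1, 2)

# SSA: waking up carries no information, so start at 1/2.
# Learning "I am in the first ten rooms":
#   P(first ten | heads) = 10/10 = 1,  P(first ten | tails) = 10/1000
ssa_heads = half * 1 / (half * 1 + half * Fraction(10, 1000))
print(ssa_heads)  # 100/101 ≈ 0.99

# SIA: weight each branch by its number of observers at wake-up.
sia_tails = half * 1000 / (half * 1000 + half * 10)
print(sia_tails)  # 100/101 ≈ 0.99

# SIA after learning "first ten": both branches contain ten such people,
# so the update washes the observer-count advantage back out.
sia_heads_first_ten = half * 10 / (half * 10 + half * 10)
print(sia_heads_first_ten)  # 1/2
```

Note the symmetry: SSA's 99% for heads after the update equals SIA's 99% for tails before it, which is exactly the mirror-image disagreement the article goes on to diagnose.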

This answer treats your creation as new information, which implies your existence is not guaranteed but a matter of chance. That means it is reasoning from a third-person perspective. However, your own identity is not inherent from this perspective. Therefore it is incorrect to say a particular individual or "I" was created; it is only possible to say an unidentified individual or "someone" was created. Again, after learning you are one of the first ten people, it is only possible to say "someone" from the first ten rooms was created. Since neither of these is new information, the probability of heads should remain at 50%.

It doesn’t matter whether one chooses to think from the first- or third-person perspective; if done correctly, the conclusions are the same: the probability of the coin toss remains at 50% after waking up and after learning you are in one of the first ten rooms. This is summarized in Figure 1.

Figure 1. Summary of Perspective Reasonings for God’s Coin Toss

The two traditional views wrongfully used both inherent self-identification and information about chances of existence. This means they switched perspective somewhere while answering the question. For the self-sampling assumption (SSA) view, the switch happened upon learning you are one of the first ten people. For the self-indication assumption (SIA) view, the switch happened after your self-identification immediately following the wake-up. Because of these changes of perspective, both methods require defining oneself from a third-person perspective. Since your identity is in fact undefined from the third-person perspective, both assumptions had to make up a generic process. As a result, SSA states an observer shall reason as if she is randomly selected among all existent observers, while SIA states an observer shall reason as if she is randomly selected from all potential observers. These methods are arbitrary and unimaginative. Neither selection is real, and even if one actually took place it seems incredibly egocentric to assume you would be the chosen one. However, they are necessary compromises for the traditional views.

One related question worth mentioning: after waking up, one might ask "what is the probability that I am one of the first ten people?". As before, the answer is still up for debate, since SIA and SSA give different numbers. However, based on perspective reasoning, this probability is actually undefined. In that question, "I" – an inherently self-identified observer – is defined from the first-person perspective, whereas "one of the first ten people" – a group based on people's chance of existence – is only relevant from the third-person perspective. Due to this switch of perspective within the question, it is unanswerable. To make the question meaningful, either change the group to something relevant from the first-person perspective or change the individual to someone identifiable from the third-person perspective. Traditional approaches such as SSA and SIA did the latter by defining "I" in the third person. As mentioned before, this definition is entirely arbitrary. Effectively SSA and SIA are trying to solve two different modified versions of the question. While both calculations are correct under their assumptions, neither gives the answer to the original question.

A counterargument would be that an observer can identify herself in the third person by using some details irrelevant to the coin toss. For example, after waking up in the room you might find you have brown eyes, the room is a bit cold, the dust in the air has a certain pattern, etc. You can define yourself by these characteristics. Then it can be said, from a third-person perspective, that it is more likely for a person with such characteristics to exist if more people are created. This approach follows full non-indexical conditioning (FNC), first formulated by Professor Radford M. Neal in 2006. In my opinion the most perspicuous use of the idea is Michael Titelbaum's technicolor beauty example, which he used to argue for a thirder position in the sleeping beauty problem. Therefore I will present my counterargument while discussing the sleeping beauty problem.

The Sleeping Beauty Problem

You are going to take part in the following experiment. A scientist is going to put you to sleep. During the experiment you are going to be briefly woken up either once or twice depending on the result of a random coin toss. If the coin landed on heads you will be woken up once; if tails, twice. After each awakening your memory of the awakening will be erased. Now suppose you are awakened in the experiment. How confident should you be that the coin landed on heads? How should you change your mind after learning this is the first awakening?

The sleeping beauty problem has been a vigorously debated topic since 2000, when Adam Elga brought it to attention. Following the self-indication assumption (SIA), one camp thinks the probability of heads should be 1/3 at wake-up and 1/2 after learning it is the first awakening. On the other hand, supporters of the self-sampling assumption (SSA) think the probability of heads should be 1/2 at wake-up and 2/3 after learning it is the first awakening.
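The two camps' numbers at wake-up correspond to two different long-run frequencies, which a quick simulation makes concrete (my own illustration, not from the article): heads as a fraction of awakenings comes out near 1/3, while heads as a fraction of experiments comes out near 1/2.

```python
import random

random.seed(0)

runs = 100_000
heads_runs = 0
total_awakenings = 0
heads_awakenings = 0

for _ in range(runs):
    heads = random.random() < 0.5
    awakenings = 1 if heads else 2   # heads: woken once; tails: woken twice
    total_awakenings += awakenings
    if heads:
        heads_runs += 1
        heads_awakenings += 1        # the single heads-branch awakening

print(heads_runs / runs)                    # ≈ 1/2: per-experiment frequency
print(heads_awakenings / total_awakenings)  # ≈ 1/3: per-awakening frequency
```

Both frequencies are facts about the setup; the dispute is over which one beauty's credence at an awakening should track.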

Astute readers might already see the parallel between the sleeping beauty problem and the God's coin toss problem. Indeed the cause of the debate is exactly the same. If we apply perspective reasoning we get the same result – your probability should be 1/2 after waking up and remain at 1/2 after learning it is the first awakening. From the first-person perspective you can inherently identify the current awakening from the (possible) other one, but cannot contemplate what happens if this awakening doesn't exist. Whereas from the third-person perspective you can imagine what happens if you are not awake, but cannot justifiably identify this awakening. Therefore, no matter which perspective you choose to reason from, the results are the same, i.e. the double halfers are correct.

However, Titelbaum (2008) used the technicolor beauty example to argue for the thirder position. Suppose there are two pieces of paper, one blue and the other red. Before your first awakening the researcher randomly chooses one of them and sticks it on the wall. You will be able to see the paper's color when awake. After you fall back to sleep he switches the paper, so if you wake up again you will see the opposite color. Now suppose after waking up you see a piece of blue paper on the wall. You shall reason "there exists a blue awakening", which is more likely to happen if the coin landed on tails. A Bayesian update based on this information gives a probability of heads of 1/3. If after waking up you see a piece of red paper you reach the same conclusion by symmetry. Since it is absurd to propose that technicolor beauty is fundamentally different from the sleeping beauty problem, they must have the same answer, i.e. thirders are correct.
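The thirder update this argument relies on can be spelled out numerically. The sketch below assumes that under heads there is one awakening showing a randomly chosen color, while under tails both colors appear across the two awakenings:

```python
from fractions import Fraction

half = Fraction(1, 2)

# Evidence as the thirder reads it: "a blue awakening exists".
p_blue_given_heads = half   # one awakening, paper color chosen at random
p_blue_given_tails = 1      # two awakenings, both colors get shown

p_heads = half * p_blue_given_heads / (
    half * p_blue_given_heads + half * p_blue_given_tails
)
print(p_heads)  # 1/3, the thirder conclusion
```

By the symmetry noted above, conditioning on "a red awakening exists" gives the same 1/3; the dispute in what follows is whether this conditioning is legitimate at all, not whether the arithmetic is right.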

Technicolor beauty effectively identifies your current awakening from a third-person perspective by using a piece of information irrelevant to the coin toss. I propose that the use of irrelevant information is only justified if it affects the learning of relevant information. In most cases this means the identification must be done before an observation is made. The color of the paper, or any detail you experience after waking up, does not satisfy this requirement and thus cannot be used. This is best illustrated by an example.

Imagine you are visiting an island with a strange custom. Every family writes its number of children on the door. All children stay at home after sunset. Furthermore, only boys are allowed to answer the door after dark. One night you knock on the door of a family with two children. Suppose a boy answers. What is the probability that both children of the family are boys? After talking to the boy you learn he was born on a Thursday. Should you change the probability?

A family with two children is equally likely to have two boys, two girls, a boy and a girl, or a girl and a boy. Seeing a boy eliminates the possibility of two girls. Therefore, among the remaining cases, two boys has a probability of 1/3. If you knock on the doors of 1000 families with two children, about 750 would have a boy answering, out of which about 250 families would have two boys, consistent with the 1/3 answer. Applying the same logic as technicolor beauty, after talking to the boy you would identify him specifically as "a boy born on Thursday" and reason "the family has a boy born on Thursday". This statement is more likely to be true if both children are boys. Without getting into the details of the calculation, a Bayesian update on this information would give the probability of two boys as 13/27. Furthermore, it doesn't matter which day he is actually born on. If the boy is born on, say, a Monday, we get the same answer by symmetry.

This reasoning is obviously wrong, and the answer should remain at 1/3. This can be checked by repeating the experiment, visiting many families with two children. Due to their length the calculations are omitted here; interested readers are encouraged to check. 13/27 would be correct if the island's custom were "only boys born on Thursday can answer the door". In that case being born on a Thursday is a characteristic specified before your observation. It actually affects your chance of learning the relevant information about whether a boy exists. Only then can you justifiably identify whoever answers the door as "a boy born on Thursday" and reason "the family has a boy born on Thursday". Since seeing the blue piece of paper happens after you wake up and does not affect your chance of awakening, it cannot be used to identify you from a third-person perspective, just as being born on Thursday cannot be used to identify the boy in the initial case.
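For readers who do want to check, the two customs can be compared by exact enumeration over each child's (sex, weekday) type. The sketch below assumes each of the 14 types is equally likely and that, when both children are eligible to answer, each is equally likely to do so:

```python
from fractions import Fraction
from itertools import product

THURSDAY = 3
# Each child is a (sex, weekday) pair; all 14 types equally likely.
types = [(s, d) for s in "BG" for d in range(7)]

def posterior_two_boys(custom):
    """P(both children are boys | the observed event) under a door custom."""
    p_event = Fraction(0)     # P(event)
    p_event_bb = Fraction(0)  # P(event and two boys)
    for c1, c2 in product(types, repeat=2):
        weight = Fraction(1, 14 * 14)
        if custom == "boys answer":
            # Original custom: a random boy answers; the event is
            # "a boy answered AND he turned out to be born on Thursday".
            boys = [c for c in (c1, c2) if c[0] == "B"]
            if not boys:
                continue
            p = Fraction(sum(1 for c in boys if c[1] == THURSDAY), len(boys))
        else:
            # Alternate custom: only Thursday-born boys may answer;
            # the event is simply "someone answered".
            p = 1 if ("B", THURSDAY) in (c1, c2) else 0
        p_event += weight * p
        if c1[0] == "B" and c2[0] == "B":
            p_event_bb += weight * p
    return p_event_bb / p_event

print(posterior_two_boys("boys answer"))     # 1/3: learning Thursday changes nothing
print(posterior_two_boys("thursday boys"))   # 13/27: Thursday was part of the setup
```

Under the original custom the Thursday detail is learned after the observation and drops out; under the alternate custom it shapes who could have answered in the first place, which is exactly the distinction argued for above.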

On a related note, for the same reason, using irrelevant information to identify you from the third-person perspective is justified in conventional probability problems, because the identification happens before the observation, and the information learned varies depending on which person is specified. That's why in general we can arbitrarily switch perspectives without changing the answer.

Comment author: 30 July 2017 04:05:15PM 0 points [-]

I will just post the relationship between perspective reasoning and simulation argument here.

In 2003 Nick Bostrom published his paper “Are you living in a computer simulation?”. In that paper he suggested that once a civilization reaches a highly developed state it would have enough computing power to run “ancestor simulations”. Such simulations would be indistinguishable from actual reality for their occupants. Furthermore, because the potential number and levels of such simulated realities is huge, almost all observers with experiences similar to ours would be living in such simulations. Therefore either civilizations such as ours would never run such ancestor simulations, or we are almost certainly living in a simulation right now. Perhaps one of its most striking conclusions is that once we develop an ancestor simulation, or believe we eventually will develop one, we must conclude we are simulated as well. This highly specific world-creation theory, while seeming very unlikely at first glance, must be deemed almost certain if we apply the probability reasoning described in the argument. I would argue that such probability reasoning is in fact mistaken.

The argument states that if almost all observers with experiences similar to ours are simulated, we shall conclude we are almost certainly simulated. The core of this reasoning is the self-sampling assumption (SSA), which states an observer shall reason as if she is randomly selected from all observers. The top contender to SSA, used as a counterargument to one of its most (in)famous applications, the doomsday argument, is the self-indication assumption (SIA). SIA states an observer shall reason as if she is randomly selected from all potential observers. However, if we apply SIA to this idea, the result is even further confirmation that we are simulated. Whether or not we would be able to run an ancestor simulation is no longer relevant; the fact that we exist is evidence suggesting our reality is simulated.

However, if we apply the same perspective reasoning used in the sleeping beauty problem, this argument falls apart. Perspective reasoning states that, due to the existence of perspective disagreement between agents, an observer shouldn't reason as an imaginary third party who randomly selected the observer from a certain reference class. Picture a third party (God) randomly choosing a person from all realities: the selected person is most likely simulated if the majority of observers are. Without this logic, however, an observer can no longer draw that conclusion. Therefore, even after running an ancestor simulation, our credence of being simulated would not instantly jump to near certainty.

The immediate objection would be: in the duplicating beauty problem, upon learning the coin landed on T, beauty's credence of being the clone rises from 1/4 to 1/2; why then does our credence of being simulated not rise accordingly once we run ancestor simulations? After all, the former case confirms the existence of a clone while the latter confirms the existence of many simulated realities. The distinction is that the clone and the original are in symmetrical positions, whereas our reality and the realities simulated by us are not. In the case of duplicating beauty, although they can have different experiences after waking up, the original and the clone have identical information about the same coin toss. Due to this epistemic equivalence, beauty cannot tell if she is the clone or the original. Therefore, upon learning the coin landed on T, thus confirming the existence of a clone, both beauties must reason they are equally likely to be the clone and the original. In other words, the rise in credence is due to the confirmed existence of a symmetrical counterpart, not due to the mere existence of someone in an imaginary reference class to choose from. But running an ancestor simulation only confirms the latter. Putting it bluntly: we know for sure we are not in the simulations we run, so no matter how many simulations we run, our credence of being in an ancestor simulation should not rise.

In fact, I would suggest that, following the logic of Bostrom's argument, we should reduce our credence of living in a simulated reality once we run an ancestor simulation. As stated in his paper, simulators might want to edit their simulations to conserve computational power. A simulated reality running its own subsequent levels of simulations would require an exponential amount of additional computational power. It is in the simulators' interest to edit their simulation so it never reaches such an advanced state with high computational capabilities. This means a base-level reality is more likely to produce ancestor simulations than the simulated ones are. Therefore, once we run such ancestor simulations, or strongly believe we are going to do so, our credence of being simulated should decrease.

Comment author: 30 July 2017 04:09:10AM *  0 points [-]

According to SSA beauty should update credence of H to 2/3 after learning it is Monday.

I always forget what the acronyms are. But the probability of H is 1/2 after learning it's Monday, and any method that says otherwise is wrong, exactly by the argument that you can flip the coin on Monday right in front of SB, and if she knows it's Monday and thinks it's not a 50/50 flip, her probability assignment is bad.

Comment author: 30 July 2017 04:04:40PM 0 points [-]

Yes, that's why I think to this day Elga's counter argument is still the best.

Comment author: 28 July 2017 07:46:47PM *  0 points [-]

The 8 rooms are definitely the unbiased sample (of your rooms with one red room subtracted).

I think you are making two mistakes:

First, I think you're too focused on the nice properties of an unbiased sample. You can take an unbiased sample all you want, but if we know information in addition to the sample, our best estimate might not be the average of the sample! Suppose we have two urns, urn A has 10 red balls and 10 blue balls, while urn B has 5 red balls and 15 blue balls. We choose an urn by rolling a die, such that we have a 5/6 chance of choosing urn A and a 1/6 chance of choosing urn B. Then we take a fair, unbiased sample of 4 balls from whatever urn we chose. Suppose we draw out 1 red ball and 3 blue balls. Since this is an unbiased sample, does the process that you are calling "statistical analysis" have to estimate that we were drawing from urn B?

Second, you are trying too hard to make everything about the rooms. It's like someone was doing the problem with two urns from the previous paragraph, but tried to mathematically arrive at the answer only as a function of the number of red balls drawn, without making any reference to the process that causes them to draw from urn A vs. urn B. And they come up with several different ideas about what the function could be, and they call those functions "the Two-Thirds-B-er method" and "the Four-Tenths-B-er method." When really, both methods are incomplete because they fail to take into account what we know about how we picked the urn to draw from.

To answer the last part of your statement: if beauty randomly opens 8 doors and finds them all red, then she has a sample of pure red. By simple statistics she should give R=81 as the estimate. Halfers and thirders would both agree on that. If they do a Bayesian analysis, R=81 would also be the case with the highest probability. I'm not sure where 75 comes from; I'm assuming by summing the products of probabilities and Rs in the Bayesian analysis? But that value does not correspond to the estimate in statistics. Imagine you randomly draw 20 beans from a bag and they are all red; using statistics, obviously you are not going to estimate that the bag contains 90% red beans.

Think of it like this: if Beauty opens 8 doors and they're all red, and then she goes to open a ninth door, how likely should she think it is to be red? 100%, or something smaller than 100%? For predictions, we use the average of a probability distribution, not just its highest point.
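One model that reproduces the 75 mentioned above (my reconstruction of the thread's setup, not necessarily the commenter's exact calculation): put a uniform prior on the number of red rooms R out of 81, and treat the 9 red rooms seen so far as a simple random sample. The posterior mode is then 81, but the posterior mean is about 74.5, and the predictive probability that the next door is red is 10/11, not 100%:

```python
from fractions import Fraction
from math import comb

N, seen = 81, 9   # 81 rooms; 9 rooms observed, all red

# Uniform prior over R; the likelihood of an all-red sample of 9 drawn
# without replacement is C(R,9)/C(81,9), i.e. proportional to C(R,9).
weights = {r: Fraction(comb(r, seen)) for r in range(seen, N + 1)}
total = sum(weights.values())

mode = max(weights, key=weights.get)
mean = sum(r * w for r, w in weights.items()) / total
# Predictive: an unseen room is red with chance (R - 9)/(81 - 9) given R.
p_next_red = sum(Fraction(r - seen, N - seen) * w
                 for r, w in weights.items()) / total

print(mode)         # 81: the single most likely red count
print(float(mean))  # ≈ 74.45, which rounds to the "75" in the thread
print(p_next_red)   # 10/11: the prediction tracks the mean, not the mode
```

This is the mode-versus-mean distinction in miniature: the all-red sample makes R=81 the likeliest single value, yet averaging over the whole posterior pulls both the point estimate and the next-door prediction below certainty.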

Comment author: 29 July 2017 03:30:04PM *  0 points [-]

No problem, always good to have a discussion with someone serious about the subject matter.

First of all, you are right: statistical estimation and expected value in Bayesian analysis are different. But that is not what I'm saying. What I'm saying is that in a Bayesian analysis with an uninformative (uniform) prior, the case with the highest probability should be the unbiased statistical estimate (not always exactly, because of round-offs etc.).

In the two-urns example, I think what you meant is that from the sample of 4 balls a fair estimate would be 5 reds and 15 blues, as in the case of B, but Bayesian analysis would give A as more likely? However, this disagreement is due to the use of an informed prior: you already know we are more likely to draw from A right from the beginning. Without knowing this, a Bayesian would give B as the most likely case, the same as the statistical estimate.

Think of it like this: if Beauty opens 8 doors and they're all red, and then she goes to open a ninth door, how likely should she think it is to be red? 100%, or something smaller than 100%? For predictions, we use the average of a probability distribution, not just its highest point.

Definitely something smaller than 100%. Just because beauty thinks R=81 is the most likely case doesn't mean she thinks it is the only case. But that is not what the estimation is about. Maybe this question would be more relevant: if after opening 8 doors they are all red and beauty has to guess R, what number should she guess (to be most likely correct)?

Comment author: 28 July 2017 10:19:23PM *  0 points [-]

He proposes the coin toss could happen after the first awakening. Beauty’s answer ought to remain the same regardless of the timing of the toss. A simple calculation tells us his credence of H must be 1/3. As SSA dictates, this is also beauty’s answer. Now beauty is predicting that a fair coin toss yet to happen would most likely land on T. This supernatural predicting power is conclusive evidence against SSA.

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then now Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.

Sometimes people absolutely will come to different conclusions. And I think you're part of the way there with the idea of letting people talk to see if they converge. But I think you'll get the right answer even more often if you set up specific thought-experiment processes, and then had the imaginary people in those thought experiments bet against each other, and say the person (or group of people all with identical information) who made money on average (where "average" means over many re-runs of this specific thought experiment) had good probabilities, and the people who lost money had bad probabilities.

I don't think this is what probabilities mean, or that it's the most elegant way to find probabilities, but I think it's a pretty solid and non-confusing way. And there's a quite nice discussion article about it somewhere on this site that I can't find, sadly.

Comment author: 29 July 2017 02:39:05PM 0 points [-]

Thank you for the reply. I really appreciate it since it reminds me that I have made a mistake in my argument. I didn't say SSA means reasoning as if an observer is randomly selected from all actually existent observers (past, present and **future**).

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then now Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.

I think Elga's argument is that beauty's credence should not depend on the exact time of the coin toss. It seems reasonable to me, since the experiment can be carried out in exactly the same way whether the coin is tossed on Sunday or Monday night. According to SSA, beauty should update her credence of H to 2/3 after learning it is Monday. If you think beauty should give 1/2 when she finds out the coin is tossed on Monday night, then her answer would depend on the time of the coin toss, which to me seems a rather weak position.

Regarding a betting-odds argument: I have given a frequentist model in part I which uses betting odds as part of the argument. In essence, beauty's break-even odds are at 1/2 while the selector's are at 1/3, which agrees with their credences.

Comment author: 27 July 2017 09:48:55AM *  0 points [-]

Thanks for the kind words.

However, I don't agree. The additional 8 rooms are an unbiased sample of the remaining 80 rooms for beauty. The additional 8 rooms are only an unbiased sample of the full set of 81 rooms for beauty if the first room is also an unbiased sample (but I would not consider it a sample but part of the prior).

Actually I found a better argument against your original anti-thirder argument, regardless of where the prior/posterior line is drawn:

Imagine that the selector happened to encounter a red room first, before checking out the other 8 rooms. At this point in time, the selector's state of knowledge about the rooms, regardless of what you consider prior and what posterior, is in the same position as beauty's after she wakes up (from the thirder perspective, which I generally agree with in this case). Then they both sample 8 more rooms. The selector considers this an unbiased sample of the remaining 80 rooms. After both have taken this additional sample of 8, they again agree. Since they still agree, beauty must also consider the 8 rooms to be an unbiased sample of the remaining 80 rooms. Beauty's reasoning and the selector's are the same regarding the additional 8 rooms, and Beauty has no more "supernatural predicting power" than the selector.

About only thirding getting the attention: my apologies for contributing to this asymmetry. For me, the issue is that I found the perspectivism posts at least initially hard to understand, and since subjectively I feel I already know the correct way to handle this sort of problem, that reduces my motivation to persevere and figure out what you are saying. I'll try to get around to carefully reading them and providing some response eventually (no time right now).

Comment author: 27 July 2017 04:58:15PM *  0 points [-]

OK, I should have used my words more carefully. We meant the same thing. When I said beauty thinks the 8 rooms are an unbiased sample, I meant what I listed as C: it is an unbiased sample for the other 80 rooms. So yes to what you said, sorry for the confusion. It is obviously unbiased because it is a simple random sample chosen from those 80 rooms, so on that part there is no disagreement. The disagreement between the two is about whether the 9 rooms are an unbiased sample. Beauty as a thirder should not think they are unbiased, but she bases her estimate on them anyway in order to answer the question from the selector's perspective. If she did not answer from the selector's perspective, she would use the 8 rooms to estimate the reds among the other 80 rooms and then add her own room in, as halfers do.

Regarding the case where the selector chooses a room and finds out it is red: again, they agree on whether the 8 rooms are unbiased. However, because the first room is always red for beauty but not for the selector, they see the 9 rooms differently. From beauty's perspective, dividing the 9 rooms into two parts gives an unbiased sample (the 8 rooms) plus a guaranteed red room. It is not so for the selector. We can list the three points from the selector's perspective, and they pose no problem at all:

A: the 9 rooms are an unbiased sample of all 81 rooms

B: the first room is randomly selected from all rooms

C: the other 8 rooms are an unbiased sample of the other 80 rooms

Alternatively, we can divide the 9 rooms as follows:

A: the 9 rooms are an unbiased sample of all 81 rooms

B: the first red room he saw (if he saw one) is always red

C: the other 8 rooms in the sample are biased towards blue

Either way there is no problem. In terms of the predicting power, think of it this way: once the selector sees a red room, he knows that if he ignores it and considers only the other 8 rooms, then that sample is biased towards blue; nothing supernatural. However, for beauty, if she thinks the 9 rooms are unbiased, then the 8 rooms she chooses must be biased even though they are selected at random, hence the "supernatural". The point is that, for beauty, the 9 rooms and the 8 rooms cannot both be unbiased at the same time. Since you already acknowledged the 9 rooms are biased (from her perspective at least), then of course she has no supernatural predicting power.
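This bias claim is easy to check numerically. Below is a minimal Monte Carlo sketch; it assumes, purely for illustration, 81 rooms of which R = 40 are red (the actual R is unknown in the problem). Conditioning on the first sampled room being red, the other 8 rooms average to (R-1)/80, i.e. they are unbiased for the remaining 80 rooms but look biased towards blue relative to all 81:

```python
import random

random.seed(0)
TOTAL, R, TRIALS = 81, 40, 100_000  # 81 rooms as in the thread; R = 40 is an arbitrary illustration
rooms = [True] * R + [False] * (TOTAL - R)  # True = red

frac_sum, accepted = 0.0, 0
for _ in range(TRIALS):
    sample = random.sample(rooms, 9)     # 9 distinct rooms, chosen uniformly at random
    if sample[0]:                        # condition on the first room seen being red
        frac_sum += sum(sample[1:]) / 8  # red fraction among the other 8
        accepted += 1

est = frac_sum / accepted
print(f"other-8 red fraction: {est:.4f}")
print(f"(R-1)/80 = {(R - 1) / 80:.4f}  -> unbiased for the remaining 80 rooms")
print(f"R/81     = {R / 81:.4f}  -> so the other 8 are biased towards blue for all 81")
```

The gap between (R-1)/80 and R/81 is small but systematic, which is exactly the "biased towards blue" effect described above.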

I guess the bottom line is that, because they acquire their information differently, the selector and thirder beauty must disagree somewhere: either on the numerical value of the estimate, or on whether a sample is biased.

About the perspectivism posts: the concept is actually quite simple: each beauty counts only what she has experienced/remembered. But maybe I'm not doing a good job of explaining it. Anyway, thank you for promising to check it out.

Comment author: 26 July 2017 01:36:34AM *  0 points [-]

Well argued; you've convinced me that most people would probably define what's prior and what's posterior the way you say. Nonetheless, I don't agree that they should be defined that way. I see this sort of info as better thought of as a prior (precisely because waking up shouldn't be thought of as new info) [edit: clarification below]. The mere fact that the brain instantiating the mind holding this info is physically continuous with an earlier brain instantiating a mind with different info is not, to me, sufficient reason to treat it as anything other than a prior.

Some clarification on my actual beliefs here: I'm not a conventional thirder believing in the conventional SIA. I prefer, let's call it, "instrumental epistemic rationality". I weight observers, not necessarily equally, but according to how much I care about the accuracy of the relevant beliefs of that potential observer. If I care equally about the beliefs of the different potential observers, then this reduces to SIA. But there are many circumstances where one would not care equally, e.g. one is in a simulation and another is not, or one is a Boltzmann brain and another is not.

Now, I generally think that thirdism is correct, because I think that, given the problem definition, for most purposes it's more reasonable to value the correctness of the observers equally in a sleeping beauty type problem. E.g. if Omega is going to bet with each observer, and beauty's future self collects the sum of the earnings of both observers in the case there are two of them, then 1/3 is correct. But if e.g. the first instance of the two observer case is valued at zero, or if for some bizarre reason you care equally about the average of the correctness of the observers in each universe regardless of differences in numbers, then 1/2 is correct.

Now, I'll deal with your last paragraph from my perspective. The first room isn't a sample; it's guaranteed red. If you do regard it as a sample, it's (maximally) biased in the red direction and so should have zero weight. The prior is that the probability of R is proportional to R. The other 8 rooms are an unbiased sample of the remaining rooms. The set of 9 rooms is a sample biased in the red direction, such that it provides the same information as the set of 8 rooms. So use the red-biased prior and the unbiased 8-room sample (of the remaining rooms after the first is removed) to get the posterior estimate. This will give the same answer the selector gets, because you can imagine the selector found a red room first, and then break the selector's information down into that first observation plus an unbiased sample of 8 of the remaining rooms.
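The equivalence claimed here can be verified exactly with Bayes' theorem. The sketch below assumes a uniform prior on R over 1..81 before anything is observed (the exact prior in the original posts may differ) and an illustrative red count k = 2 among the 8 sampled rooms. Beauty's red-weighted prior (proportional to R) combined with the 8-room likelihood gives the same posterior as the selector's uniform prior combined with the 9-room likelihood:

```python
from math import comb

N = 81  # total rooms (from the thread)
k = 2   # illustrative red count among the 8 sampled rooms (an assumption)

def normalize(weights):
    total = sum(weights.values())
    return {r: w / total for r, w in weights.items()}

# Selector: uniform prior on R over 1..N, then a random 9-room sample
# containing k+1 red rooms (imagining the first room found is red).
selector = normalize({R: comb(R, k + 1) * comb(N - R, 8 - k) for R in range(1, N + 1)})

# Thirder beauty: prior proportional to R (her own room is guaranteed red),
# then an unbiased 8-room sample of the remaining 80 rooms with k red.
beauty = normalize({R: R * comb(R - 1, k) * comb(N - R, 8 - k) for R in range(1, N + 1)})

match = all(abs(selector[R] - beauty[R]) < 1e-12 for R in range(1, N + 1))
print("posteriors agree:", match)  # R * C(R-1, k) is proportional to C(R, k+1)
```

The agreement is an algebraic identity, R·C(R-1, k) = (k+1)·C(R, k+1), so it holds for any k, not just the illustrative value.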

Edit: I didn't explain my concept of prior vs. posterior clearly. To me, it's conceptual, not time-based, in nature. For a set problem like this, what someone knows from the problem definition, from the point of view of their position in the problem, is the prior; what they then observe leads to the posterior. Here, waking sleeping beauty learns nothing on waking up that she does not already know from the problem definition, given that she is waking up within the problem, so her beliefs at that point are the prior. Of course, her beliefs differ from those of sleeping beauty before she went to sleep, due to the new info: that new info told her she is now within the problem, when she wasn't before. So she updated to new beliefs, which would be a posterior outside the context of the problem but which, within the context of the problem, constitute her prior.

Comment author: 26 July 2017 06:49:18PM 1 point [-]

Very clear argument and many good points. Appreciate the effort.

Regarding your position on thirders vs. halfers: I think it is a completely reasonable position, and I agree with the analysis of when halfers are correct and when thirders are correct. However, to me it seems to treat Sleeping Beauty as a decision-making problem rather than a probability problem. Maybe one's credence is simply undefined without reference to consequences, but that seems counterintuitive to me. Naturally one should have a belief about the situation, and her decisions should depend on it, as well as on her objective (how much beauty cares about other copies) and the payoff structure (does the money reward depend only on her own answer, on all correct answers, on the accuracy rate, etc.). If that's the case, there should exist a unique correct answer to the problem.

About how beauty should estimate R and treat the samples: I would say that's the best position for a thirder to take; in fact, it's the position I would take too. If I may reword it slightly, see if you agree with this version: the 8 rooms are an unbiased sample for beauty; that is too obvious to argue otherwise. Her own room is always red, so the 9 rooms are obviously biased for her. However, from an (imaginary) selector's perspective, if he found the same 9 rooms, they would be an unbiased sample. Thirders think she should answer from the selector's perspective (the most likely reason, I think, being that the repeated memory wipes make her own perspective somewhat "compromised"), and therefore she would estimate R to be 27. Is this a version you would agree with?
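For concreteness, here is the arithmetic behind the two estimates. The red count of the sample isn't stated in the thread; 2 red among the 8 sampled rooms is the value that makes the thirder (selector's-perspective) estimate come out to 27, so it is assumed below:

```python
# Point estimates of R, the number of red rooms among the 81.
sampled_red = 2  # assumed red count among the 8 sampled rooms (not stated in the thread)

# Halfer: scale the 8-room sample up to the other 80 rooms, then add her own red room.
halfer = 80 * sampled_red / 8 + 1

# Thirder (selector's perspective): treat all 9 rooms as one unbiased sample of the 81.
thirder = 81 * (sampled_red + 1) / (8 + 1)

print(halfer, thirder)  # 21.0 27.0
```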

In this version I highlighted the disagreement between the selector and beauty: the disagreement is not over some numerical value; they disagree on whether a sample is biased. In my four posts, all I'm trying to do is argue for the validity and importance of perspective disagreement. If we recognize the existence of this disagreement and let each agent answer from her own perspective, we get a system of reasoning different from both SIA and SSA. It provides an argument for double halving, gives a framework in which frequentists and Bayesians agree with each other, rejects the Doomsday Argument, disagrees with the Presumptuous Philosopher, and rejects the Simulation Argument. I genuinely think this is the explanation of the Sleeping Beauty problem, as well as of many other problems related to anthropic reasoning. Sadly, only the part arguing against thirding gets any attention.

Anyway, I digress. The bottom line is: though I do not think it is the best position, I feel your argument is reasonable and well thought out. I can understand it if people want to take it as their position.

Comment author: 26 July 2017 08:42:42AM 0 points [-]

Interesting. I guess the right question is, if you insist on a frequentist argument, how simple can you make it? Like I said, I don't expect things like unbiased estimates to behave intuitively. Can you make the argument about long run frequencies only? That would go a long way in convincing me that you found a genuine contradiction.

Comment author: 26 July 2017 04:52:26PM 0 points [-]

Yes, I have given a long-run frequency argument for halving in Part I. Sadly, that part has not gotten any attention. My entire argument is about the importance of perspective disagreement in the Sleeping Beauty problem; this counter-argument is actually the less important part.
