The doomsday argument is controversial not because its conclusion is bleak but because it has some hard-to-explain implications: the choice of reference class is arbitrary yet affects the conclusion, and the argument seems to grant unreasonable predictive power and imply backward causation. Anyone trying to understand it eventually has to either reject the argument or find some way to reconcile with these implications. To me neither position is biased as long as it is sufficiently argued.

The post specifically explains why your properties cannot be used for predictions in the context of the doomsday argument and the sleeping beauty problem. I would like to know your thoughts on that.

I will just post the relationship between perspective reasoning and the simulation argument here.

In 2003 Nick Bostrom published his paper "Are You Living in a Computer Simulation?". In it he suggested that once a civilization reaches a highly developed state it would have enough computing power to run "ancestral simulations". Such simulations would be indistinguishable from actual reality for their occupants. Furthermore, because the potential number and levels of such simulated realities is huge, almost all observers with experiences similar to ours would be living in such simulations. Therefore either civilizations such as ours never run ancestral simulations, or we are almost certainly living in a simulation right now. Perhaps one of the most striking conclusions is that once we develop an ancestral simulation, or come to believe we eventually will, we should conclude we are simulated as well. This highly specific world-creation theory, while seeming very unlikely at first glance, must be deemed almost certain if we apply the probabilistic reasoning described in the argument. I would argue that this probabilistic reasoning is in fact mistaken.

The argument states that if almost all observers with experiences similar to ours are simulated, we should conclude we are almost certainly simulated. The core of this reasoning is the self-sampling assumption (SSA), which states that an observer should reason as if she were randomly selected from all observers. The top contender to SSA, used as a counterargument to one of its most (in)famous applications, the doomsday argument, is the self-indication assumption (SIA). SIA states that an observer should reason as if she were randomly selected from all potential observers. However, applying SIA here only confirms the conclusion further: whether or not we ever become able to run an ancestral simulation is no longer relevant; the mere fact that we exist counts as evidence that our reality is simulated.

However, if we apply the same perspective reasoning used in the sleeping beauty problem, this argument falls apart. Perspective reasoning states that, due to the existence of perspective disagreement between agents, an observer should not reason as an imaginary third party who randomly selected her from a certain reference class. Picture a third party (a god) randomly choosing a person from all realities: the selected person is indeed most likely simulated if the majority of observers are. Without this selection logic, however, an observer can no longer draw that conclusion. Therefore even after running an ancestral simulation our credence of being simulated would not instantly jump to near certainty.

The immediate objection would be: in the duplicating beauty problem, upon learning the coin landed on T, beauty's credence of being the clone rises from 1/4 to 1/2; why then does our credence of being simulated not rise accordingly once we run ancestral simulations? After all, the former case confirms the existence of a clone while the latter confirms the existence of many simulated realities. The distinction is that the clone and the original are in symmetrical positions, whereas our reality and the realities simulated by us are not. In the case of duplicating beauty, although the two can have different experiences after waking up, the original and the clone have identical information about the same coin toss. Due to this epistemic equivalence, beauty cannot tell whether she is the clone or the original. Therefore, upon learning the coin landed on T, thus confirming the existence of a clone, each beauty must reason she is equally likely to be the clone and the original. In other words, the rise in credence is due to the confirmed existence of a symmetrical counterpart, not to the mere existence of someone in an imaginary reference class to choose from. Running an ancestral simulation only confirms the latter. Putting it bluntly: we know for sure we are not in the simulations we run, so no matter how many simulations we run, our credence of being in an ancestral simulation should not rise.
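
For reference, the duplicating-beauty numbers above can be checked by brute enumeration. A minimal sketch, assuming the halfer assignment where each coin result gets 1/2 and, on Tails, the original and the clone split that half evenly:

```python
from fractions import Fraction

half = Fraction(1, 2)
# Assumed halfer assignment: P(H) = P(T) = 1/2; under T the original
# and the clone are epistemically symmetric, so they split that half.
outcomes = [
    ("H", "original", half),
    ("T", "original", half * half),
    ("T", "clone", half * half),
]

p_clone = sum(p for coin, who, p in outcomes if who == "clone")
p_T = sum(p for coin, who, p in outcomes if coin == "T")

print(p_clone)        # 1/4: credence of being the clone before learning the toss
print(p_clone / p_T)  # 1/2: credence of being the clone after learning it landed T
```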

In fact I would suggest that, following the logic of Bostrom's argument, we should reduce our credence of living in a simulated reality once we run an ancestral simulation. As stated in his paper, simulators might want to edit their simulations to conserve computational power. A simulated reality running its own subsequent levels of simulations would require an exponential amount of additional computational power. It is in the simulators' interest to edit their simulation so that it never reaches such an advanced state with high computational capabilities. This means a base-level reality is more likely to produce ancestral simulations than the simulated ones are. Therefore once we run such ancestral simulations, or strongly believe we are going to do so, our credence of being simulated should decrease.

Yes, that's why I think to this day Elga's counterargument is still the best.

No problem, always good to have a discussion with someone serious about the subject matter.

First of all, you are right: statistical estimation and expected value in Bayesian analysis are different. But that is not what I'm saying. What I'm saying is that in a Bayesian analysis with an uninformed (uniform) prior, the case with the highest posterior probability should coincide with the unbiased statistical estimate (it is not always exactly so, because of round-offs etc.).
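
A minimal sketch of that claim, with illustrative numbers of my own (an urn of 20 balls, r of them red; we draw 4 and see 1 red): with a flat prior the posterior is proportional to the likelihood, so the posterior mode lands on the same value as the scaled-up sample proportion.

```python
from math import comb

# Illustrative setup (assumed): 20 balls, r red, r unknown; draw 4, see 1 red.
N, n, k = 20, 4, 1

def likelihood(r):
    # Hypergeometric probability of k reds in n draws from an urn with r reds
    return comb(r, k) * comb(N - r, n - k) / comb(N, n)

# With a uniform prior over r = 0..20 the prior cancels out of the posterior
posterior = {r: likelihood(r) for r in range(N + 1)}

map_estimate = max(posterior, key=posterior.get)  # posterior mode
freq_estimate = N * k / n                         # scale up the sample proportion

print(map_estimate, freq_estimate)  # 5 and 5.0: the two estimates coincide
```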

In the two-urn example, I think what you mean is that from the sample of 4 balls a fair estimate of the composition would be 5 reds and 15 blues, matching urn B, yet Bayesian analysis gives A as more likely? That disagreement is due to the use of an informed prior: you already know from the start that we are more likely to be drawing from A. Without knowing this, the Bayesian analysis would give B as the most likely case, same as the statistical estimate.
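
To make this concrete, here is a hedged sketch with made-up compositions (the thread does not spell them out; I take B to be the 5-red/15-blue urn and assume A is 10/10, with 1 red seen in the 4 draws): a flat prior makes the Bayesian verdict agree with the statistical estimate of B, and only a prior favouring A flips it.

```python
from math import comb

# Assumed compositions (illustrative): 20 balls each, listed as number of reds.
URNS = {"A": 10, "B": 5}
N, n, k = 20, 4, 1  # draw 4 balls, observe 1 red, matching B's 5/20 red rate

def likelihood(reds):
    return comb(reds, k) * comb(N - reds, n - k) / comb(N, n)

def posterior(prior):
    joint = {u: prior[u] * likelihood(r) for u, r in URNS.items()}
    z = sum(joint.values())
    return {u: round(p / z, 3) for u, p in joint.items()}

print(posterior({"A": 0.5, "B": 0.5}))  # uniform prior: B more likely, as the estimate says
print(posterior({"A": 0.7, "B": 0.3}))  # informed prior favouring A: now A is more likely
```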

Think of it like this: if Beauty opens 8 doors and they're all red, and then she goes to open a ninth door, how likely should she think it is to be red? 100%, or something smaller than 100%? For predictions, we use the average of a probability distribution, not just its highest point.

Definitely something smaller than 100%. Just because beauty thinks R=81 is the most likely case doesn't mean she thinks it is the only case. But that is not what the estimation is about. Maybe this question is more relevant: if, after opening 8 doors that are all red, beauty has to guess R, what number should she guess so as to be most likely correct?
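
The two questions can be put side by side in a toy model (my own sketch, assuming the 8 opened doors are a simple random sample of the other 80 rooms and a flat prior on the number of reds among them): the single most likely count says "all red", yet the predictive probability for the ninth door is only 9/10, the rule-of-succession answer.

```python
from math import comb

# Toy model (assumed): K of the other 80 rooms are red, flat prior on K.
# Beauty opens 8 of those rooms at random and all of them are red.
weights = {K: comb(K, 8) for K in range(81)}  # likelihood; the flat prior cancels
z = sum(weights.values())

# Best single guess if she must name the most likely count of reds
mode = max(weights, key=weights.get)

# Chance the ninth door (one of the remaining 72 rooms) is also red:
# this averages over the whole posterior, not just its peak.
p_red = sum(w * (K - 8) / 72 for K, w in weights.items()) / z

print(mode)             # 80: "every other room is red" is the single most likely case
print(round(p_red, 4))  # 0.9, i.e. (8+1)/(8+2), well below 100%
```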

Thank you for the reply. I really appreciate it, since it reminds me that I made a mistake in my argument: I failed to say that SSA means reasoning as if an observer is randomly selected from all actually existent observers (past, present and future).

So how do you get Beauty's prediction? If at the end of the first day you ask for a prediction on the coin, but you don't ask on the second day, then now Beauty knows that the coin flip is, as you say, yet to happen, and so she goes back to predicting 50/50. She only deviates from 50/50 when she thinks there's some chance that the coin flip has already happened.

I think Elga's argument is that beauty's credence should not depend on the exact time of the coin toss. That seems reasonable to me, since the experiment can be carried out exactly the same way whether the coin is tossed on Sunday or on Monday night. According to SSA, beauty should update her credence of H to 2/3 after learning it is Monday. If you think beauty should give 1/2 when she finds out the coin is tossed on Monday night, then her answer would depend on the time of the coin toss, which seems to me a rather weak position.

Regarding a betting-odds argument: I gave a frequentist model in part I which uses betting odds as part of the argument. In essence, beauty's break-even odds are at 1/2 while the selector's are at 1/3, which agrees with their respective credences.
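
I won't reproduce the part I model here, but a rough toy simulation (my own sketch, not the original model) shows where the two break-even points come from: a bet on Heads settled once per coin toss breaks even at a stake of 1/2, while the same bet settled once per awakening, which is how an outsider encountering awakenings is scored, breaks even at 1/3.

```python
import random

def avg_profit(stake, per_awakening, trials=100_000, seed=1):
    # Toy model (assumed): pay `stake` for a ticket worth 1 if the coin is Heads.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        heads = rng.random() < 0.5
        # Heads -> one awakening, Tails -> two awakenings
        settlements = 1 if not per_awakening else (1 if heads else 2)
        total += settlements * ((1.0 if heads else 0.0) - stake)
    return total / trials

print(round(avg_profit(1 / 2, per_awakening=False), 3))  # ~0: per-toss break-even at 1/2
print(round(avg_profit(1 / 3, per_awakening=True), 3))   # ~0: per-awakening break-even at 1/3
```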

Ok, I should have used my words more carefully. We meant the same thing. When I say beauty thinks the 8 rooms are an unbiased sample, I mean what I listed as C: it is an unbiased sample for the other 80 rooms. So yes to what you said, and sorry for the confusion. It is obvious because it is a simple random sample chosen from the 80 rooms, so on that part there is no disagreement. The disagreement is about whether the 9 rooms are an unbiased sample. Beauty as a thirder should not think they are unbiased, but she bases her estimation on them anyway in order to answer the question from the selector's perspective. If she did not answer from the selector's perspective, she would use the 8 rooms to estimate the reds in the other 80 rooms and then add her own room in, as halfers do.

Regarding the selector who chooses a room and finds out it is red: again, the two agree on whether the 8 rooms are unbiased; however, because the first room is always red for beauty but not for the selector, they see the 9 rooms differently. From beauty's perspective, dividing the 9 rooms into two parts gives her an unbiased sample (the 8 rooms) plus a red room. It is not so for the selector. We can list the three points from the selector's perspective and they pose no problem at all:

A: the 9 rooms are an unbiased sample for the 81 rooms

B: the first room is randomly selected from all rooms

C: the other 8 rooms are an unbiased sample for the other 80 rooms.

Alternatively, we can divide the 9 rooms as follows:

A: the 9 rooms are an unbiased sample for the 81 rooms

B: the first red room he saw (if he saw one) is always red

C: the other 8 rooms in the sample are biased towards blue.

Either way there is no problem. In terms of predictive power, think of it this way: once the selector sees a red room, he knows that if he sets it aside and considers only the other 8 rooms, that sample is biased towards blue; nothing supernatural. For beauty, however, if she thinks the 9 rooms are unbiased, then the 8 rooms she chose must be biased even though they were selected at random; hence the "supernatural". The point is that, for beauty, the 9 rooms and the 8 rooms cannot both be unbiased at the same time. Since you already acknowledge the 9 rooms are biased (from her perspective at least), then yes, of course she has no supernatural predictive power.
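
The "biased towards blue" part is easy to check by simulation. A sketch with an assumed split of 27 red rooms out of 81 (any split works): once a red room is singled out of a random 9-room sample, the remaining 8 under-represent red on average.

```python
import random

R, N = 27, 81  # assumed split: 27 red rooms out of 81
rooms = ["red"] * R + ["blue"] * (N - R)
rng = random.Random(0)

rest_reds = samples = 0
for _ in range(200_000):
    sample = rng.sample(rooms, 9)  # the selector's 9 randomly chosen rooms
    if "red" in sample:            # he notices a red room among them...
        sample.remove("red")       # ...and sets that one aside
        rest_reds += sample.count("red")
        samples += 1

# Average red fraction of the other 8 rooms, vs the population rate 27/81
print(round(rest_reds / (samples * 8), 3), round(R / N, 3))  # ~0.26 < 0.333
```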

I guess the bottom line is that because they acquire their information differently, the selector and thirder beauty must disagree somewhere: either on the numerical value of the estimate, or on whether a sample is biased.

About the perspectivism posts: the concept is actually quite simple, namely that each beauty only counts what she has experienced/remembered. But I feel maybe I'm not doing a good job of explaining it. Anyway, thank you for promising to check them out.

Very clear argument and many good points. Appreciate the effort.

Regarding your position on thirders vs halfers: I think it is a completely reasonable position, and I agree with the analysis of when halfers are correct and when thirders are correct. However, to me it seems to treat Sleeping Beauty more as a decision-making problem than as a probability problem. Maybe one's credence is not defined without the related consequences, but that seems counterintuitive to me. Naturally one should have a belief about the situation, and her decisions should depend on it as well as on her objective (how much beauty cares about the other copies) and the payoff structure (does the reward depend only on her own answer, on all correct answers, on the accuracy rate, etc.). If that's the case, there should exist a unique correct answer to the problem.

About how beauty should estimate R and treat the samples: I would say that's the best position for a thirder to take. In fact it's the same position I would take too. If I may reword it slightly, see if you agree with this version: the 8 rooms are an unbiased sample for beauty; that is too obvious to argue otherwise. Her own room is always red, so the 9 rooms are obviously biased for her. However, from an (imaginary) selector's perspective, if he found the same 9 rooms they would be an unbiased sample. Thirders think she should answer from the selector's perspective (the most likely reason, I think, being that the repeated memory wipes make her own perspective somewhat "compromised"), and therefore she would estimate R to be 27. Is this a version you would agree with?

In this version I have highlighted the disagreement between the selector and beauty: it is not over some numerical value; they disagree on whether a sample is biased. In my four posts, all I am trying to do is argue for the validity and importance of perspective disagreement. If we recognize the existence of this disagreement and let each agent answer from her own perspective, we get another system of reasoning, distinct from SIA and SSA. It provides an argument for double halving, gives a framework in which frequentists and Bayesians agree with each other, rejects the Doomsday Argument, disagrees with the Presumptuous Philosopher, and rejects the Simulation Argument. I genuinely think this is the explanation of the sleeping beauty problem as well as of many problems related to anthropic reasoning. Sadly, only the part arguing against thirding has gotten any attention.

Anyway, I digress. The bottom line is: though I do not think it is the best position, I feel your argument is reasonable and well thought out, and I can understand people taking it as their position.

Yes, I gave a long-run frequency argument for halving in part I. Sadly that part has not gotten any attention. My entire argument is about the importance of perspective disagreement in the sleeping beauty problem; this counterargument is actually the less important part.

OK, I misunderstood. I interpreted it as the coin being biased 1/3 to 2/3 but with us not knowing which side it favours. If we start from a uniform prior (1/2 to H and 1/2 to T), then the maximum likelihood is Tails.

Unless I misunderstood again, you mean there is a coin whose natural chance we want to guess (forgive me if I'm misusing terms here), and we know its chance is bounded between 1/3 and 2/3. In that case, yes, the statistical estimate is 0 while the maximum likelihood is 1/3. But that is obviously due to the use of an informed prior (that we know the chance is between 1/3 and 2/3). Hardly a surprise.
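
A tiny sketch of that boundary effect, with assumed data (say, zero heads in 5 flips, which is what would drive the unconstrained estimate to 0): restricting the chance to [1/3, 2/3] simply pushes the likelihood maximum onto the boundary.

```python
# Assumed data: 0 heads out of 5 flips.
heads, flips = 0, 5

def likelihood(p):
    return p**heads * (1 - p)**(flips - heads)

grid = [i / 1000 for i in range(1001)]
best_unconstrained = max(grid, key=likelihood)                                # 0.0
best_constrained = max((p for p in grid if 1/3 <= p <= 2/3), key=likelihood)  # ~1/3

print(best_unconstrained, best_constrained)
```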

Also, I want to point out that in your previous example you said SIA+frequentism never had any strong defenders. That is not true. To date, the literature has generally considered thirding a better fit for frequentism than halving, because the long-run frequency of Tails awakenings is twice that of Heads awakenings. Such arguments have been used by published academics, including Elga. So I consider my attack from the frequentist angle to have some value.
