Can anyone tell me what's wrong with the following "refutation" of the simulation argument? (I know this is a bit long -- my apologies! I also posted an earlier draft several months ago and got some excellent feedback. I don't see a flaw, but perhaps I'm missing something!)
Consider the following three scenarios:
Scenario 1: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you guess that you’re in room X, and consequently you almost certainly win 1 million dollars. After all, since betting odds are a guide to rationality, if everyone in rooms X and Y were to bet that they’re in room X, just about everyone would win.
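Here is a minimal sketch of the betting arithmetic in Python (the counts are just the scenario's stipulated ones):

```python
# Scenario 1: everyone currently in rooms X and Y bets "I'm in room X".
in_x, in_y = 1000, 1                    # occupants at the moment of the bet
p_win = in_x / (in_x + in_y)            # fraction of bettors who guess right
print(f"P(win | bet X) = {p_win:.4f}")  # ~0.999
```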
Scenario 2: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then escorted into one of two rooms, either X or Y, but you don’t know which one. While in the unknown room, you are told that there are exactly 1,000 people in room X and only a single person in room Y. You are also told that over the past year, a total of 1 billion people have been in room Y at one time or another whereas only 10,000 people have been in room X. There is no way of communicating with anyone else, so you must use the information given to guess which room you’re in. If you guess correctly, you win 1 million dollars. The question here is: Does the extra information about the past histories of rooms X and Y change your mind about which room you’re in? It shouldn’t. After all, if everyone currently in rooms X and Y were to bet that they’re in room X, just about everyone would win.
Scenario 3: Imagine that you’re standing in a hallway, which we’ll label Location A. You are blindfolded and then told that you’ll be escorted into room Z through one of two rooms, either X or Y, but you won’t know which one. At any given moment, or timeslice, there will always be exactly 1,000 people in room X and only a single person in room Y. (Thus, as one person enters each room another one exits into room Z.) Once you arrive in room Z at time T2, you are told that between T1 and T2 a total of 1 billion people passed through room Y whereas only 10,000 people in total passed through room X, where all of these people are now in room Z with you. There is no way of communicating with anyone else, so you must use the information given to guess which room, X or Y, you passed through on your way from Location A to room Z. If you guess correctly, you win 1 million dollars. Using the principle of indifference as your guide, you now guess that you passed through room Y—and consequently, you almost certainly win 1 million dollars. After all, if everyone in room Z at T2 were to bet that they passed through room Y rather than room X, the large majority would win.
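The same betting arithmetic, sketched for Scenario 3 (note that the synchronic occupancy of X and Y never enters the calculation):

```python
# Scenario 3: everyone now in room Z bets "I passed through room Y".
through_y = 1_000_000_000     # total who passed through Y between T1 and T2
through_x = 10_000            # total who passed through X between T1 and T2
p_win = through_y / (through_y + through_x)
print(f"P(win | bet Y) = {p_win:.6f}")  # ~0.999990
```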
Let’s analyze these scenarios. In the first two, the only relevant information is synchronic information about the current distribution of people when you answer the question, “Which room am I in, X or Y?” (Thus, the historical knowledge offered in Scenario 2 doesn’t change your answer.) In contrast, the only relevant information in the third scenario is diachronic information about which of the two rooms had more people pass through it from T1 to T2. If these claims are correct, then the simulation argument proposed by Nick Bostrom (2003) is flawed. The remainder of this paper will (a) outline this argument, and (b) show how the ideas above falsify the argument’s conclusion.
According to the simulation argument, one or more of the following disjuncts must be true: (i) humanity goes extinct before reaching a stage of technological development that would enable us to run a large number of ancestral simulations; (ii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations but we decide not to; and (iii) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do, in fact, run a large number of ancestral simulations. The third disjunct entails that we would almost certainly live in a computer simulation because (a) a sufficiently high-resolution simulation would be sensorily and phenomenologically indistinguishable from the “real” world, and (b) the indifference principle tells us to distribute our probabilities evenly among all the possibilities if we have no special reason to favor one over another. Since, ex hypothesi, the population of sims would far outnumber the population of non-sims in scenario (iii), we would almost certainly be sims. This is the simulation hypothesis.
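To make the arithmetic explicit (the notation is mine, not Bostrom's): if there are N_sim simulated people and N_real non-simulated people, the indifference principle assigns P(I am a sim) = N_sim / (N_sim + N_real), which approaches 1 as N_sim comes to dwarf N_real.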
But consider the following possible Posthuman Future: instead of running a huge number of ancestral simulations in parallel, as Bostrom seems to assume we would, future humans run a huge number of simulations sequentially, one after another. This could be done such that at any given moment the total number of extant non-sims far exceeds the total number of extant sims, yet over time the total number of sims who have existed far exceeds the total number of non-sims who also have existed. (This could be accomplished by running simulations at speeds much faster than realtime.) If the question is, “Where am I right now, in a simulation or not?” then the principle of indifference instructs you to answer, “I am not a sim.” After all, if everyone were to bet at some timeslice Tx that they are not a sim, nearly everyone would win.
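A toy model of this Posthuman Future (every figure below is an illustrative assumption of mine, not anything Bostrom commits to):

```python
# Toy model of the sequential Posthuman Future (all figures are
# illustrative assumptions, not anyone's actual estimates).
non_sims_now = 10_000_000_000   # non-sims alive at any given timeslice
sims_now     = 10_000           # sims running at any given timeslice
runs         = 100_000_000      # sequential simulation runs over the era
sims_ever    = sims_now * runs  # cumulative sims across all runs

# Synchronic bet ("Am I a sim right now?") over this timeslice's people:
print(sims_now / (sims_now + non_sims_now))    # ~1e-6, so bet "non-sim"

# Diachronic tally over everyone who has ever existed (treating the
# non-sim population as roughly constant over the era):
print(sims_ever / (sims_ever + non_sims_now))  # ~0.99, a different answer
```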
In this Posthuman Future, the only information that matters is synchronic information; diachronic information about how many sims, non-sims, or “observer-moments” there have been has no bearing on one’s credence about one’s present ontological status (sim or non-sim?), any more than the historical knowledge about rooms X and Y in Scenario 2 has any bearing on one’s response to the question, “Which room am I currently in?” This is problematic for the simulation argument because the Posthuman Future outlined above satisfies the condition of disjunct (iii) yet does not entail that one is almost certainly living in a simulation. Thus, Bostrom’s assertion that “at least one of the following propositions is true” is false.
One might wonder: What if we run a huge number of simulations sequentially and then stop? Wouldn’t this be analogous to Scenario 3, in which we would have reason to believe that we passed through room Y rather than room X, i.e., that we were (and thus still are) in a simulation rather than the “real” world? The answer is no, it’s not analogous to Scenario 3, because in our case we would have additional relevant information about our actual history: we would know that we were in “room X,” which held more people at any given moment, since we would have control over the ratio of sims to non-sims (always making sure that the latter far outnumber the former). Even more, if we were to stop all simulations, then the ratio of sims to non-sims would be zero to whatever the human population is at the time, making a bet that we are non-sims virtually certain. So far as I can tell, these conclusions follow whether one accepts the self-sampling assumption (SSA), the strong self-sampling assumption (SSSA), or the self-indication assumption (SIA) (Bostrom 2002).
In sum, the simulation argument is missing a fourth disjunct: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do run a large number of ancestral simulations, yet the principle of indifference leads us to believe that we are not in a simulation. It will, of course, be up to future generations to decide whether to run a large number of ancestral simulations, and if so whether to run these sequentially or in parallel, given the ontological-epistemic implications of each.
There is a flaw in your argument. I'm going to try to be very precise here and spell out exactly what I agree with and disagree with in the hope that this leads to more fruitful discussion.
Your conclusions about scenarios 1, 2 and 3 are correct.
You state that Bostrom's disjunction is missing a fourth case. The way you state (iv) is problematic because you phrase it in terms of a logical conclusion, that "the principle of indifference leads us to believe that we are not in a simulation," which, as I'll argue below, is incorrect. Your disjunct should properly be stated as something like: (iv) humanity reaches a stage of technological development that enables us to run a large number of ancestral simulations and we do run a large number of ancestral simulations, but we do this in a way that keeps the number of simulated people well below the number of real people at any given moment. Stated that way, it is clear that Bostrom's (iii) is meant to include that outcome. Bostrom's argument is predicated only on the number of ancestral simulations, not on whether they are run in parallel or sequentially, or over how much time they are run. The reason Bostrom includes your (iv) in (iii) is that it doesn't change the logic of the argument. Let me now explain why.
For the sake of argument, let's split (iii) into two cases, (iii.a) and (iii.b). Let (iii.a) be all the futures in (iii) not covered by your (iv). For convenience, I'll refer to this as "parallel," even though there are cases in (iv) where some simulations could be run in parallel. Then (iii.b) is equivalent to your (iv). For convenience, I'll refer to this as "serial," even though, again, it might not be strictly serial. I think we agree that if the future were guaranteed to be (iii.a), then we should bet that we are in a simulation.
First, even if you were right about (iii.b), I don't think it invalidates the argument. Essentially, you have just added another case similar to (ii), and it would still be the case that there are many more simulated people than real people because of (iii.a), so we should still bet that we are in a simulation.
Second, if the future is actually (iii.b), we should still bet that we are in a simulation, just as with (iii.a). At several points, you appeal to the principle of indifference, but you are vague about how it should be applied. Let me give a framework for thinking about this. What is happening here is that we are reasoning under indexical uncertainty: in each of your three scenarios, and in the simulation argument, there is uncertainty about which observer we are. Your statement that by the principle of indifference we should conclude something is actually an application of the SSA, which says that we should reason as if we were a randomly chosen observer. In Bostrom's terms, you are uncertain which observer in your reference class you are. To make sure we are on the same page, let me go through your scenarios using this approach.
Scenario 1: You are not sure whether you are in room X or room Y, so the set of all people currently in rooms X and Y is your reference class. You reason as if you could be a randomly selected member of it, giving you 1,000-to-1 odds of being in room X.
Scenario 2: You are told about the many people who have been in room Y in the past. However, they are in your past: you have no uncertainty about your temporal index relative to them, so you do not add them to your reference class, and you reason the same way as in Scenario 1. Bostrom's book is weak here in that he doesn't give you very good rules for selecting your reference class. I'm arguing that one of the criteria is that you have to be uncertain whether you could be that person or not. So, for example, you know you are not one of the many people currently outside rooms X and Y, so you don't include them in your reference class. Your reference class is the set of people relative to whom you are unsure of your index.
Scenario 3: This one is trickier to reason about correctly. I think you are wrong when you say that the only relevant information here is diachronic information. You know you are now in room Z, which contains 1 billion people who passed through room Y and 10,000 people who passed through room X. Your reference class is the people in room Z. You don't have to reason about the temporal information, or about the fact that at any given moment there was only one person in room Y but 1,000 people in room X; having passed through room X or Y is now simply a property of the people in room Z. This is equivalent to my telling you that you are blindfolded in a room with 1 billion people wearing red hats and 10,000 people wearing blue hats: which hat color should you bet you are wearing? Reasoning with the people in room Z as your reference class, you correctly give yourself 1-billion-to-10,000 odds of having passed through room Y.
In (iii.b), you are uncertain whether you are in a simulation or in reality. But if you are in a simulation, you are also uncertain where you are chronologically relative to reality. Thus, if a pair of simulations were run in sequence, you would be unsure whether you were in the first or the second. You have both spatial and temporal uncertainty; you aren't sure what the proper "now" is. Your reference class therefore includes everyone in the historical reality as well as everyone in all the simulations. Given that reference class, you should reason that you are in a simulation (assuming many simulations are run). It doesn't matter that those simulations are run serially, only that many of them are run. Your reference class isn't limited to the current simulation and the current reality, because you aren't sure where you are chronologically relative to reality.
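To make the disagreement concrete, here's a quick sketch contrasting the two reference classes (the population figures are illustrative assumptions of mine, in the spirit of your sequential setup):

```python
# Case (iii.b): few sims per timeslice, many sims across sequential runs.
sims_per_slice = 10_000           # assumed sims running at any timeslice
runs           = 100_000_000      # assumed number of sequential runs
non_sims       = 10_000_000_000   # assumed people in historical reality

# Your reference class: only the observers at the "current" timeslice.
p_synchronic = sims_per_slice / (sims_per_slice + non_sims)

# My reference class: all observers across reality and every run, since a
# sim cannot locate itself chronologically relative to reality.
sims_ever = sims_per_slice * runs
p_diachronic = sims_ever / (sims_ever + non_sims)

print(f"synchronic P(sim) = {p_synchronic:.2e}")  # ~1e-6: bet "non-sim"
print(f"diachronic P(sim) = {p_diachronic:.4f}")  # ~0.99: bet "sim"
```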
With regard to SIA versus SSA: I can't say that they make any difference to your position, because the problem is that you have chosen the wrong reference class. In the original simulation argument, SIA vs. SSA makes little or no difference because, presumably, the number of people living in historical reality is roughly equal to the number of people living in any given simulation. SIA only changes the conclusions when one outcome contains many more observers than another. Here we treat each simulation as a different possible outcome, and so the two assumptions agree.