The simulation argument, as I understand it:
1. Subjectively, existing as a human in the real, physical universe is indistinguishable from existing as a simulated human in a simulated universe
2. Anthropically, there is no reason to privilege one over the other: if there exist k real humans and l simulated humans undergoing one's subjective experience, one's odds of being a real human are k/(k+l) (see the toy calculation after this list)
3. Any civilization capable of simulating a universe is quite likely to simulate an enormous number of them. (Even if most capable civilizations simulate only a few universes for e.g. ethical reasons, civilizations that have no such concerns could simulate such enormous numbers of universes that the expected number of universes simulated by any simulation-capable civilization is still huge)
4. Our present civilization is likely to reach the point where it can simulate a universe reasonably soon
5. By 3 and 4, there exist (at some point in history) huge numbers of simulated universes, and therefore huge numbers of simulated humans living in simulated universes
6. By 2 and 5, our odds of being real humans are tiny (unless we reject 4, by assuming that humanity will never reach the stage of running such simulations)
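To make step 2 concrete, here is a toy calculation; the values of k and l are pure inventions for illustration:

```haskell
-- Toy anthropic calculation for step 2, with invented numbers:
-- k real humans, l simulated humans, odds of being real = k / (k + l).
oddsReal :: Double
oddsReal = k / (k + l)
  where
    k = 1e10  -- assumed number of real humans
    l = 1e20  -- assumed number of simulated humans
-- oddsReal ≈ 1e-10: on these numbers, we are almost certainly simulated
```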
When we talk about a simulation we're usually thinking of a computer; crudely, we'd represent the universe as a giant array of bytes in RAM, and have some enormously complicated program that could compute the next state of the simulated universe from the previous one[1]. Fundamentally, we're just storing one big number, then performing a calculation and storing another number, and so on. In fact our program is itself just another number (witness the DeCSS "illegal prime"). This is effectively the GLUT concept applied to the whole universe.
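As a toy sketch of this picture (the state type and the transition rule below are placeholders, not real physics):

```haskell
-- The universe as one big value, advanced by a pure step function.
-- 'step' stands in for the (enormously complicated) laws of physics;
-- the rule used here is a placeholder, not real physics.
type UniverseState = Integer            -- crudely, "one big number"

step :: UniverseState -> UniverseState
step s = s * s + 1                      -- placeholder transition rule

-- The entire history is just iterated application of 'step':
history :: UniverseState -> [UniverseState]
history = iterate step                  -- state at tick n: history s0 !! n
```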
But numbers are just... numbers. If we have a computer calculating the Fibonacci sequence, it's hard to see how running the program makes the sequence any more real than if we had merely conceptualized the rule[2] - or even, to a mathematical Platonist, if we'd never thought of it at all. And we do know the rule for our universe (modulo having a theory of quantum gravity), and the initial state of the universe is (to the best of our knowledge) small and simple enough that we could describe it, or another similar but subtly different universe, in terms small enough to write down. At that point, what we have seems in some sense to be a simulated universe, just as real as if we'd run a computer to calculate it all.
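For concreteness, the Fibonacci case in Haskell - the definition below simply *is* the rule, and it's hard to say what evaluating it adds:

```haskell
-- The Fibonacci rule, stated as a definition. Does the sequence become
-- any "more real" when GHC evaluates it than when we merely wrote it down?
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)   -- [0,1,1,2,3,5,8,13,21,34]
```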
Possible ways out that I can see:
- Bite the bullet: we are most likely not even a computer simulation, just a mathematical construct[3]
- Accept the other conclusion: either simulations are impractical even for posthuman civilizations, or posthuman civilization is unlikely. But if all that's required for a simulation is a mathematical form for the true laws of physics, and knowledge of some early state of the universe, this means humanity is unlikely to ever learn these two things, which is... disturbing, to say the least. This stance also seems to require rejecting mathematical Platonism and adopting some form of finitist/constructivist position, in which a mathematical notion does not exist until we have constructed it
- Argue that something important to the anthropic argument is lost in the move from a computer calculation to a mathematical expression. This seems to require rejecting the Church-Turing thesis, and would mean most established programming theory is useless for programming a simulation[4]
- Some other counter to the simulation argument. To me the anthropic part (i.e. step 2) seems the least certain; it appears to be false under e.g. UDASSA, though I don't know enough about anthropics to say more
Thoughts?
[1] As I understand it there is no contradiction with relativity; we perform the simulation in some particular frame, but obtain the same events whichever frame we choose
[2] This equivalence is not just speculative. Going back to thinking about computer programs, Haskell (probably the language most likely to be used for a universe simulation, at least at present technology levels) uses lazy evaluation: a value is not computed unless it is used. Thus if our simulation contained some regions that had no causal effect on subsequent steps (e.g. some people on a spaceship falling into a black hole), the simulation wouldn't bother to evaluate them[5]
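A minimal demonstration of that behaviour (the "regions" here are just named values, purely illustrative):

```haskell
import Debug.Trace (trace)

-- Lazy evaluation in action: a value is computed only when it is demanded.
region :: String -> Int -> Int
region name x = trace ("evaluating " ++ name) (x + 1)

main :: IO ()
main = do
  let observed   = region "observed region" 1
      -- never demanded, so never evaluated - not even the 'error' inside:
      unobserved = region "black-hole region" (error "never forced")
  print observed   -- prints "evaluating observed region", then 2
```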
If we upload people who then make phone calls to their relatives to convince them to upload, clearly those people must have been calculated - or at least, enough of them to talk on the phone. But what about a loner who chooses to talk to no-one? Such a person could be stored more efficiently as their initial state plus a counter of how many times the step function would need to be run to evaluate them, should anyone ever talk to them. If no-one has their contact details any more, we wouldn't even need to store that much. What about when all humans have uploaded? Sure, you could calculate the world-state for each step explicitly, but that would be wasteful. Our simulated world would still produce the correct outputs if all it did was increment a tick counter
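A sketch of the "initial state plus a counter" representation (all names hypothetical):

```haskell
-- A "loner" stored as an initial state plus a tick count. The chain of
-- states is a thunk: it is only unfolded if someone actually demands the
-- value, i.e. "contacts" them.
newtype Person = Person { mood :: Int } deriving Show

stepPerson :: Person -> Person
stepPerson (Person m) = Person (m + 1)

-- Nothing here is computed until 'lonerAt n' is forced:
lonerAt :: Int -> Person
lonerAt ticks = iterate stepPerson (Person 0) !! ticks
```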
Practically every programming runtime performs some (more limited) form of this, using dataflow analysis, instruction reordering and dead code elimination - usually without the programmer having to explicitly request it. Thus if your theory of anthropics counts an "optimized" simulation differently from a "full" one, there is little hope of running a genuinely full simulation without developing a significant number of new tools and programming techniques[4]
[3] Indeed, with an appropriate anthropic argument this might explain why the rules of physics are mathematically simple. I am planning another post on this line of thought
[4] This is worrying if one is in favour of uploading, particularly forcible uploading - it would be extremely problematic morally if uploads were in some sense "less real" than biological people
[5] One possible way out is that the laws of physics appear to be information-preserving; to simulate the state of the universe at time t=100 you can't discard any part of the state of the universe at time t=50, and must in some sense have calculated all the intermediate steps (though not necessarily explicitly - the state at t=20 could be spread out between several calculations, never appearing in memory as a single number). I don't think this affects the wider argument though
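One way to picture "information-preserving" is an invertible step function, as in this toy sketch (not real physics): every earlier state is recoverable from a later one, so nothing may be discarded:

```haskell
-- An information-preserving (invertible) step function: every earlier
-- state is recoverable from a later one, so no part may be discarded.
step :: (Integer, Integer) -> (Integer, Integer)
step (a, b) = (b, a + b)      -- Fibonacci-like shift

unstep :: (Integer, Integer) -> (Integer, Integer)
unstep (a, b) = (b - a, a)    -- exact inverse: unstep (step s) == s
```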
Empirical: based on, concerned with, or verifiable by observation or experience rather than theory or pure logic.
A universe that is totally disconnected is unverifiable by observation and experience. It lies in the realm of pure logic. It leaves no empirical traces.
Granted, there are also some possible universes that are causally connected to ours and yet leave no empirical traces. (One example of this is the "Heaven" hypothesis, which postulates a place that is totally unobservable at the present time. So our universe has an effect on the Heaven-verse, creating a unidirectional causal link... but Heaven has no effect on us. It's the same with your example - the past has a unidirectional causal link with various possible futures.)
So yes, I bite the thing you regard as a bullet: there are not necessarily any empirical differences. I still think that when the common person says "Reality", they mean something closer to my definition - something with a causal interaction with you. That's why people might say "heaven is real, despite the lack of evidence" or "Russell's teapot might be real, though it's unlikely", but they never say "Harry Potter is real, despite the lack of evidence" or "Set theory is real, despite the lack of evidence".
All of these things can be represented as totally unobservable logical structures, but only the Heaven structure is proposed to interact with our universe - so only the Heaven structure is a hypothesis about reality. The rest are fantasy and mathematics.
(If you want empiricism, I will say that the most parsimonious hypothesis is strictly limited to choosing the smallest logical structure which explains all observable things.)
Edit:
Oh cool - you've made me realize that my definition of reality implies random events create a universe for each option (so a stochastic coin flip creates a "heads" universe and a "tails" universe, both "real", where real = "causal interaction in either direction"). I hadn't explicitly recognized that yet. Thanks!
I think I'm actually fairly comfortable with that. However, it does seem to run slightly contrary to the layman's use of "reality", and I like to keep my rigorized definitions of words as close as possible to the unrigorous layman's usage. I might be returning with a slightly revised definition which tackles some of the weirdness surrounding unidirectional relationships. If I can't find one, I bite the bullet and accept the divergence of my "reality" from the layman's "reality" via "universes with randomness have many real worlds, splitting for each random event". That doesn't seem like too harsh a bullet, though - laymen's definitions aren't always internally consistent and do sometimes collapse under rigorization. If I find that I can't wiggle out of this, it does mean that I might have to think more about anthropics and slightly alter the way I conceptualize the relationship between my "utility" function and what I've been calling reality.
(I still think your ontology of "all tautologies are real" is even farther from laymen's ontology and possibly makes morality go all funny for the reasons described in my top post on the topic. Not sure whether you think distance from laymen's definitions is something worth minimizing, but figuring out how utility/morality works in your ontology is important)