brazil84 comments on Does the simulation argument even need simulations? - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (102)
My thought is that your hypothesis is pretty similar to the Dust Theory.
http://sciencefiction.com/2011/05/23/science-feature-dust-theory/
And Greg Egan's counter-argument to the Dust Theory is pretty decent:
I think the same counter-argument applies to your hypothesis.
A steelmanned version of Egan's counterargument can be found in what Tegmark calls the (cosmological) measure problem. Egan's original counterargument is too weak because we can simply postulate that there is an appropriate measure over the worlds of interest; we already do that for the many-worlds interpretation!
In Tegmark (2008) (see my other comment):
Tegmark makes a few remarks on using algorithmic complexity as the measure:
Each of the analogous problems in eternal inflation and the string theory landscape is also called the measure problem (in eternal inflation: how to assign measure over the potentially infinite number of inflationary bubbles; in the string theory landscape: how to assign measure over the astronomical number of false vacua).
In the many-worlds interpretation, the analogous measure problem is resolved by the Born probabilities.
I don't understand this at all. Can you give an example of such an appropriate measure?
An example of a measure in this context would be the complexity measure that Tegmark mentioned, as long as we agree on a way to encode mathematical structures (the nonuniqueness of representation is one of the issues that Tegmark brought up).
Whether this is an appropriate measure (i.e., whether it correctly "predicts conditional probabilities for what an observer should perceive given past observations") is unknown; if we knew how to find out, then we could directly resolve the measure problem!
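To make the complexity-measure idea concrete, here is a toy sketch (my illustration, not from the thread, and only a crude proxy): Kolmogorov complexity is uncomputable, so compressed length is used as a stand-in, and each "world" is weighted by 2^-K. Under such a measure, an orderly world vastly outweighs a chaotic one.

```python
import hashlib
import zlib

def complexity_proxy(s: bytes) -> int:
    """Compressed length as a crude upper bound on Kolmogorov complexity."""
    return len(zlib.compress(s, 9))

def toy_measure(s: bytes) -> float:
    """Toy algorithmic-complexity measure: weight 2^-K(s), K approximated."""
    return 2.0 ** (-complexity_proxy(s))

# An orderly "world": a highly regular 1000-byte string.
orderly = b"ab" * 500

# A chaotic "world": 1024 effectively incompressible pseudorandom bytes
# (chained SHA-256 digests, deterministic for reproducibility).
chaotic = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(32))

# The regular world compresses far better, so it gets far more measure.
print(complexity_proxy(orderly), complexity_proxy(chaotic))
```

Under this (hypothetical) weighting, orderly experiences dominate chaotic dust arrangements, which is the gist of the anti-Egan direction.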
An example of a context where we can give the explicit measure is the many-worlds interpretation, where, as I mentioned, the Born probabilities resolve the analogous measure problem.
So you are saying that the "Born probabilities" are an example of an "appropriate measure" which, if "postulated," rebuts Egan's argument?
Is that correct?
The Born probabilities apply to a different context - the multiple Everett branches of MWI, rather than the interpretative universes available under dust theory. If we had an equivalent of the Born probabilities - a measure - for dust theory, then we'd be able to resolve Egan's argument one way or another (depending on which way the numbers came out under this measure).
Since we don't yet know what the measure is, it's not clear whether Egan's argument holds - under the "Tegmark computational complexity measure" Egan would be wrong, under the "naive measure" Egan is right. But we need some external evidence to know which measure to use. (By contrast, in the QM case we know the Born probabilities are the correct ones to use, because they correspond to experimental results and because, e.g., they're preserved under a QM system's unitary evolution.)
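For anyone unfamiliar with the Born rule being discussed here, a minimal sketch (my illustration, with made-up amplitudes): each Everett branch carries a complex amplitude, and the measure over branches is the squared magnitude of that amplitude.

```python
# A normalized two-branch state: |0.6|^2 + |0.8i|^2 = 0.36 + 0.64 = 1.
amplitudes = [0.6, 0.8j]

# Born rule: probability of each branch = squared magnitude of its amplitude.
born_probabilities = [abs(a) ** 2 for a in amplitudes]

# The probabilities form a proper measure: they sum to 1 (up to float error).
print(born_probabilities)
```

Dust theory currently has no analogous rule telling us how much measure each interpretative universe gets, which is the whole difficulty.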
I would guess you are probably correct that Egan's argument hinges on this point. In essence, Egan seems to be making an informal claim about the relative likelihood of an orderly dust universe versus a chaotic one.
Boiled down to its essentials, VincentYu's argument seems to be that if Egan's informal claim is incorrect, then Egan's argument fails. Well duh.
Here's a visual representation of the dust theory by Randall Munroe: http://xkcd.com/505/
Glad to see this has been thought of; that argument was where I was headed in [3] (and this whole line of thought greatly annoyed me when reading Permutation City, so I'm glad Egan's at least looked at it a bit).
This gets us a contradiction, not a refutation, and one man's modus ponens is another man's modus tollens. Can we use this to argue for a flaw in the original simulation argument? I think it again comes down to anthropics: why are our subjective experiences reverse-anthropically more likely than those of dust arrangements? And into which class would simulated people fall?
I don't think so, since it's reasonable to hypothesize that man-made simulations would, generally speaking, be more on the orderly side as opposed to being full of random nonsense.
But it's still an interesting question. One can imagine a room with two large computers. The first computer has been carefully programmed to simulate 1950s Los Angeles. There are people in the simulation who are completely convinced that they live in Los Angeles in the 1950s.
The second computer is just doing random computations. But arguably there is some cryptographic interpretation of those computations which also yields a simulation of 1950s Los Angeles.
I'd like to see that argument. If you can find a mapping that doesn't end up encoding the simulation in the mapping, I'd be surprised.
Well why should it matter if the simulation is encoded in the mapping?
If it is, that screens off any features of what it's mapping; you can no longer be surprised that 'random noise' produces such output.
Again, so what?
Let me adjust the original thought experiment:
The operation of the first computer is encrypted using a very large one-time pad.
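This thought experiment can be made concrete with a short sketch (my illustration, not from the thread). For any fixed stream of random output, there always exists a one-time pad that "decrypts" it into any target simulation you like; but the pad must be as long as the target, so all the content lives in the mapping, not in the noise.

```python
import os

target = b"a simulation of 1950s Los Angeles"  # the "simulation" we want to find
noise = os.urandom(len(target))                # the second computer's random output

# Choose the pad so that noise XOR pad == target. Note the pad is exactly as
# long as the target: the mapping encodes the simulation; the noise adds nothing.
pad = bytes(n ^ t for n, t in zip(noise, target))

decoded = bytes(n ^ p for n, p in zip(noise, pad))
print(decoded)
```

This is the force of the "encoded in the mapping" objection: once the mapping carries all the information, interpreting the noise as a simulation tells us nothing about the noise.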
I'm not sure I agree with that argument. The fact that quantum mechanics exists, with specific allowed states, is exactly the type of thing I'd expect from a universe driven by a computer simulation. Discrete values are much easier to handle than continuous sets.
On the other hand, superposition and entanglement seem suboptimal.
I'm not sure I understand your point. Are you saying that a simulation which is just a mathematical construct would probably not result in a quantized universe?
I was intending to say the opposite: that a quantized world seems like it would take less computational power than a continuous one, so the fact that we live in a quantized world is evidence that we are in a simulation.
That's not an unreasonable point, but I think it goes more to the issue of simulation versus non-simulation than the issue of computer-based simulation versus mathematical construct simulation.
Well, I suppose we could postulate something like a continuous version of quantum mechanics for a host universe if we'd like.