Basically I think you're confused. You correctly begin to identify the core of the problem and then don't engage with it here:
It seems like what counts as a perfect simulation hides most of the complexity of this problem. For now, I'm going to hand wave a bit and say a perfect simulation is anything that can assess the truth value of an arbitrary proposition about the universe.
Interpreting this definition sufficiently strongly, the following straightforward diagonalization argument can be applied: suppose you had such a simulation, and that it had some sort of output channel that it used to display answers to questions about the universe. Ask it: "Is the answer to this question that's about to be displayed in the output channel 'no'?"
In the positive direction, there is the following lovely theorem (Kleene's recursion theorem): without loss of generality, you can always assume that a program has access to a copy of its own source code. Given this fact, one might try to apply the above diagonalization argument as follows: a program attempts to run itself, outputting true if it outputs false, and outputting false if it outputs true. What happens?
Straightforward: the program doesn't halt, outputting nothing; you can think of this as being the result of the program repeatedly calling itself over and over again. A slight modification of this argument produces the usual proof of the undecidability of the halting problem.
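To make the regress concrete, here is a minimal Python sketch (my own hypothetical illustration; the names simulate and diagonal are not from the original argument). A simulator living inside the same universe can do no better here than actually run the program it is asked about:

```python
# Hypothetical sketch of the diagonal program described above.
# "simulate" stands in for the perfect simulator: asked what a
# program outputs, it can do no better here than run that program.

def simulate(program):
    return program()

def diagonal():
    # Ask the simulator what diagonal() itself outputs, then invert it.
    return not simulate(diagonal)

# diagonal() never settles on an answer: simulate(diagonal) calls
# diagonal(), which calls simulate(diagonal) again, and so on forever.
# (In Python the infinite regress surfaces as a RecursionError.)
```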
Thanks for the feedback!
Basically I think you're confused. You correctly begin to identify the core of the problem and then don't engage with it here
Agreed, good catch.
I think that when I proposed that definition, I quickly thought "Well, nothing like that could exist," then swapped the definition out for something very vague like "a perfectly accurate simulation is anything that feels reasonable to call a perfect simulation, yet isn't as powerful as what I just described," and carried on reasoning from there.
Giving it more thought, I find the original definition of perfect simulation to be a good fit, and the original question is now resolved.
Though I no longer see the sub-string problem I proposed as relating to the original problem, I'm curious about what you think would be a meaningful definition of a "perfect encoding".
I'm curious about what you think would be a meaningful definition of a "perfect encoding".
Part of the point of the excerpt you quoted from Aaronson is that in any notion of an encoding, some of the computational work is being done by the decoding procedure, whatever that is. So e.g. you can specify a programming language, and build a compiler that will compile and execute programs in that programming language, and then talk about a program perfectly encoding something if it outputs that thing when run. Some of the computational work is being done by the program, but some of it is being done by the compiler.
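As a toy illustration of that split (my own hypothetical example, not Aaronson's): the two "encodings" below decode to the same output, but the second pushes all of the computational work into the decoder.

```python
# Two "encodings" of the first 1000 squares. Where the work happens
# depends entirely on which decoder each encoding is paired with.

honest_encoding = "[n * n for n in range(1000)]"

def generic_decoder(program):
    # A generic decoder (a stand-in for a compiler): it just runs
    # whatever program it is handed.
    return eval(program)

lazy_encoding = ""  # carries no information at all

def lazy_decoder(program):
    # This decoder ignores its input and does all the work itself.
    return [n * n for n in range(1000)]

assert generic_decoder(honest_encoding) == lazy_decoder(lazy_encoding)
```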
I don't have a technical answer, but for what it's worth I've thought about this idea and related concerns before, and believe it is probably impossible. My reasoning depends on some combination of thinking about MWI, the uncomputability of rationality (cf. Hutter's related work on AIXI), and the observed evidence that probably P != NP.
I was recently arguing with a friend over whether or not our universe could contain a perfect simulation of itself. I've got a feeling that this question has already been discussed on LW, possibly to the point of being answered and agreed upon. This has been a fun problem to try to think through, and I'd appreciate feedback/comments on the general form of my reasoning.
Can our universe contain a perfect simulation of itself?
It seems like what counts as a perfect simulation hides most of the complexity of this problem. For now, I'm going to hand wave a bit and say a perfect simulation is anything that can assess the truth value of an arbitrary proposition about the universe.
Simulating the universe in "hard mode" (when you are within the universe) seems to be, or at least resemble, the problem of naturalized induction. From reading only that paper, I don't get the sense that those working on naturalized induction are trying to do anything as crazy as perfectly simulating the universe.
Creating a perfect simulation has a strong feeling of mathematical impossibility to it. Here's a chain of logic to try to find an easier proposition to disprove.
1. If you cannot create an accurate "snapshot" of the entire universe, you cannot have a system that accurately models the evolution of the universe's state over time.
2. If it is impossible for a snapshot of the universe to exist within the universe, then you can't create a snapshot.
3. If a binary string of length n, called the universe, cannot be completely encoded into a sub-string of length m, where m < n, then in general a universe cannot contain a snapshot of itself.
The jump from 2 to 3 is one I think is valid, yet would have a hard time defending. It comes from a sense of, "If we can't solve the simplified binary-string version, we won't be able to solve the harder version."
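For what it's worth, the "in general" in step 3 can be made precise with a counting (pigeonhole) argument, assuming an encoding must be an injective map from length-n strings to length-m strings. A toy sketch:

```python
# There are 2**n possible universe strings but only 2**m < 2**n
# possible sub-strings, so some two distinct universes would have to
# share a code: no lossless encoding can exist in general.

from itertools import product

n, m = 4, 3  # toy sizes; the counting works for any m < n

universes = [''.join(bits) for bits in product('01', repeat=n)]
codes = [''.join(bits) for bits in product('01', repeat=m)]

assert len(universes) == 2 ** n
assert len(codes) == 2 ** m
assert len(universes) > len(codes)  # pigeonhole: no injection exists
```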
Here's a "solution" to the sub-string encoding problem that feels like cheating:
The part that feels screwy is the "and the sub-string bit pattern is described by the sub-string bit pattern itself." Yes, if the above scheme were used, then an "outside observer" who was aware of the encoding scheme could look at only the sub-string and be able to recreate the entire universe string.
But if we are relying on an observer outside the universe to just know what encoding pattern is being used, that's just offloading the work to outside the universe! My sense is that if we have to specify any part of this encoding outside of the universe, it can't be meaningfully considered an encoding.
Another "solution" that feels like cheating:
This can be handled with an argument from Scott Aaronson's "Why Philosophers Should Care About Computational Complexity" (which was mostly over my head). From the part about Computationalism and Waterfalls:
So it would only be meaningful to say that a sub-string encodes the universe string if the Turing Machine that did the decoding was less complex than the universe string itself.
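One hypothetical way to formalize that criterion (my notation, not from the post or the paper): let u be the universe string, s a sub-string of u, and ⟨M⟩ a description of the decoding Turing machine M. Then s meaningfully encodes u only if

```latex
% Hypothetical formalization: the decoder plus the code must together
% be shorter than the thing they supposedly encode.
\exists M:\quad M(s) = u
\quad\text{and}\quad
|\langle M\rangle| + |s| < |u|
```

This is essentially a Kolmogorov-complexity-flavored condition: without the second clause, the decoder could smuggle in the whole universe string, as in the cheating "solutions" above.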
Putting that all together, here's my shot at what's needed for a Minimum Meaningful Encoding:
I still believe that it's impossible to have such a universe string, but that belief seems to come mostly from fuzzy intuitions I'm having trouble pinning down. This is also the point where I wish I'd already taken a class in computability and logic.
For Feedback:
1. Was the reduction to the sub-string problem reasonable?
2. What do you think of the proposed Minimum Meaningful Encoding? (Shannon's source coding theorem seemed like it might have been useful, but I couldn't parse it)
3. Does this entire chain feel misguided, or valid but not rigorous enough + unfinished?
4. How might you prove or disprove the original claim?