JoshuaZ comments on [Link] A superintelligent solution to the Fermi paradox - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Existing matter seems highly redundant, and building a full-scale 1:1 replica, as it were, means that by definition you cannot opt for any amount of approximation or possible optimization.
I would draw an analogy to NP problems: yes, the best way to solve the pathologically hardest instances of any NP problem is brute force, just as there are probably arrangements of matter which cannot be calculated more efficiently by computronium than by the actual arrangement of matter. But nevertheless, SAT solvers run remarkably fast on many real-world problems, far faster than anyone focused on the general asymptotic behavior would expect, and we have no reason to believe our world is a pathological instance of worlds.
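To make the analogy concrete, here is a minimal sketch (my own toy example, not from any real SAT solver) contrasting the 2^n brute-force worst case with unit propagation, the cheap inference rule that lets solvers dispatch structured instances quickly:

```python
from itertools import product

def brute_force_sat(clauses, n):
    """Try all 2^n assignments -- the worst-case cost on pathological instances."""
    for bits in product([False, True], repeat=n):
        if all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses):
            return bits
    return None

def unit_propagate_solve(clauses):
    """Exploit structure: repeatedly satisfy forced (unit) clauses.
    Not a complete solver (no branching), but enough to handle
    chain-like instances in roughly linear time."""
    assignment = {}
    clauses = [list(c) for c in clauses]
    while clauses:
        units = [c[0] for c in clauses if len(c) == 1]
        if not units:
            break  # nothing forced; a real solver would branch here
        lit = units[0]
        assignment[abs(lit)] = lit > 0
        new_clauses = []
        for c in clauses:
            if lit in c:
                continue  # clause already satisfied
            new_clauses.append([l for l in c if l != -lit])
        clauses = new_clauses
    return assignment if not clauses else None

# A "chain" instance: x1, and (xi implies xi+1) for each i,
# which forces every variable true without any search.
n = 20
chain = [[1]] + [[-i, i + 1] for i in range(1, n)]
print(unit_propagate_solve(chain))
```

Brute force on the same instance would, in the worst case, inspect 2^20 assignments; propagation settles it in one pass per variable. Real-world instances tend to be full of exactly this kind of exploitable structure.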
I don't find this argument persuasive or even strong. n qubits can't simulate n+1 qubits in general. In fact, n qubits can't even in general simulate n+1 bits. This suggests that if our understanding of the laws of physics is close to correct for our universe and the larger universe (whether holographic planetarium or simulationist), simulation should be tough.
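One concrete face of this: a full classical description of an n-qubit state requires 2^n complex amplitudes, so each extra qubit doubles the storage cost. A quick back-of-envelope sketch (assuming 16-byte double-precision complex amplitudes, the usual convention):

```python
# Memory needed to store a full n-qubit state vector: 2^n amplitudes.
BYTES_PER_AMPLITUDE = 16  # one complex number at double precision

def statevector_bytes(n_qubits: int) -> int:
    """Bytes required for a dense n-qubit state vector."""
    return BYTES_PER_AMPLITUDE * 2 ** n_qubits

for n in (10, 20, 30, 40):
    print(f"{n} qubits: {statevector_bytes(n) / 2**30:,.3g} GiB")
```

By 40 qubits this is already in the tens of terabytes, which is why classically simulating even modest quantum systems exactly gets out of hand so fast.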
That may be, but such a general point would be about arbitrary qubits or bits, when a simulation doesn't have to work over all or even most arrangements.
Hmm, so thinking about this more, I think that Holevo's theorem can probably be interpreted in a way that much more substantially restricts what one would need to know about the other n bits in order to simulate them, especially since one is apparently simulating not just bits but qubits. But I don't really have a good understanding of this sort of thing at all. Maybe someone who knows more can comment?
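For reference, the standard statement of the bound in question: if a classical variable $X$ is encoded into quantum states $\rho_x$ with probabilities $p_x$, and any measurement of the received state yields outcome $Y$, Holevo's theorem bounds the mutual information by

```latex
I(X{:}Y) \;\le\; \chi \;=\; S\!\Big(\sum_x p_x \rho_x\Big) \;-\; \sum_x p_x\, S(\rho_x) \;\le\; \log_2 d ,
```

where $S$ is the von Neumann entropy and $d$ is the Hilbert-space dimension. For $n$ qubits, $d = 2^n$, so at most $n$ classical bits are accessible, which is the sense in which qubits can't be "unpacked" into more classical information than their number suggests.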
Another issue which backs up simulation being easier: if one cares primarily about life forms, one doesn't then need a detailed simulation of the insides of planets and stars. The exact quantum state of every iron atom in the core of the planet, for example, shouldn't matter that much. So if one is mainly simulating the surface of a single planet in full detail, or even just the surfaces of a bunch of planets, that's a lot less computation.
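A rough back-of-envelope sketch of how much that saves (treating the planet as a uniform-density sphere, which is a crude assumption since real density increases with depth, but that error only makes the surface fraction smaller):

```python
# Volume fraction of a thin outer shell of a sphere, as a stand-in for
# "the part of a planet you'd need to simulate in full detail".
R_EARTH_KM = 6371.0

def shell_fraction(shell_km: float, radius_km: float = R_EARTH_KM) -> float:
    """Fraction of the sphere's volume within shell_km of the surface."""
    return 1.0 - ((radius_km - shell_km) / radius_km) ** 3

# A 10 km deep shell covers essentially everything biology touches:
print(f"{shell_fraction(10.0):.3%} of the planet's volume")
```

Even a generous 10 km shell is under half a percent of the planet's volume, so restricting full detail to the surface cuts the naive cost by more than two orders of magnitude before any cleverer approximations.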
One other issue is that I'm not sure you can have simulations run that much faster than your own physical reality (again assuming that the simulated universe uses the same basic physics as the underlying universe). See for example this paper, which shows that most classical algorithms don't get major speedup from a quantum computer beyond a constant factor. That constant factor could be big, but this is a pretty strong result even before one starts talking about general quantum algorithms. Of course, if the external world didn't work quite the same way (say, different constants for things like the speed of light), this might not be much of an issue at all.
Hmm, that's a good point. So it would then come down to how much you need to know in advance about what the simulation is likely to do in order to get away with using fewer qubits. I don't have a good intuition for that, but the fact that BQP is likely to be fairly small compared to all of PSPACE suggests to me that one can't really get that much out of it. But that's a weak argument. Your remark makes me update in favor of simulationism being more plausible.
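For context, the known containments here are

```latex
\mathsf{P} \;\subseteq\; \mathsf{BPP} \;\subseteq\; \mathsf{BQP} \;\subseteq\; \mathsf{PSPACE},
```

and none of these inclusions is currently known to be strict. So "BQP is small compared to PSPACE" is a widely believed conjecture rather than a theorem, which is one reason the argument above is weak.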