Jack comments on Parapsychology: the control group for science - Less Wrong

62 Post author: AllanCrossman 05 December 2009 10:50PM


Comment author: Jack 07 December 2009 09:30:18PM *  2 points

When I was first introduced to quantum mechanics, my professor taught us the Copenhagen interpretation. I was immediately reminded of how some video games don't compute the contents of a room until the player actually reaches it. It seemed to me that collapsing the wave function only when it interacts with a particular kind of physical system (or a conscious system!) would be a really good way to conserve computing power: the kind of hack programmers in a fully Newtonian universe might use to approximate their universe without having to calculate the trajectories of a googolplex (ed) of subatomic particles.

Can anyone tell me if this actually would save computing power/memory?
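The video-game optimization described here is essentially lazy evaluation with caching. A minimal sketch (all names hypothetical, not from any actual game engine):

```python
from functools import lru_cache

# "Features of a room aren't run until the player gets to the room":
# defer generation until first visit, then cache the result.
@lru_cache(maxsize=None)
def generate_room(room_id):
    # Stand-in for expensive world generation.
    return tuple(f"object-{room_id}-{i}" for i in range(3))

# Nothing is computed until a player actually visits the room...
first = generate_room(7)    # generated on first visit
# ...and revisits reuse the cached result instead of recomputing.
second = generate_room(7)
assert first is second
assert generate_room.cache_info().hits == 1
```

The cache is exactly where the memory question below gets interesting: laziness trades computation for the obligation to remember (or reproduce) what observers have already seen.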

Comment author: SilasBarta 07 December 2009 09:48:49PM *  2 points

The answer basically comes down to saving on RAM vs. saving on ROM. (RAM = the amount of memory needed to run the algorithm; ROM = the amount of memory needed to describe the algorithm.)

Video game programmers have to care about RAM, while the universe (in its capacity as a simulator) does not. That's why programmers generate only what they have to, while the universe can afford to just compute everything.
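The RAM/ROM distinction can be made concrete with a seed-based sketch (names hypothetical): the *description* of a world can be a few bytes (a seed plus a generation rule), while materializing every room at once costs memory proportional to world size.

```python
import random

SEED = 42  # the "ROM": a tiny description of the entire world

def room_contents(room_id):
    # Deterministic rule: any room can be regenerated from the seed alone,
    # so a revisiting observer sees a consistent world even though the
    # simulator never stored the room.
    return random.Random(SEED * 10**9 + room_id).randrange(100)

# Lazy simulation: only visited rooms ever occupy memory, and revisits
# are consistent by construction.
assert room_contents(7) == room_contents(7)

# Eager simulation ("compute everything"): RAM grows with world size.
# eager_world = [room_contents(i) for i in range(10**6)]
```

A RAM-constrained programmer regenerates on demand; a simulator with no RAM constraint can afford the eager version.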

However, I asked the same question, which is what led to the blog post linked above, where I concluded that you wouldn't save memory by doing the computations only for the things observers look at: first, because observers check for consistency and come back to verify that the laws of physics still work, forcing you to generate the object twice.

But more importantly (as I mentioned), because the second law of thermodynamics means that any time you gain information about something in the universe, you necessarily lose just as much information in the process of making that observation (for a human, this takes the form of, e.g., waste heat and the higher-entropy decomposition of fuels). So by learning about the universe through observation, you simultaneously relieve it of having to store at least as much information (about, e.g., subatomic particles).
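The standard quantitative form of this thermodynamic bookkeeping (not cited in the comment, but the usual reference point) is Landauer's principle: erasing one bit of information in an environment at temperature $T$ must dissipate at least

```latex
E_{\min} = k_B T \ln 2
```

of heat, where $k_B$ is Boltzmann's constant. This is the sense in which gaining a bit by observation is paid for elsewhere in entropy.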

(This argument has not been peer-reviewed, but was based on Yudkowsky's Engines of Cognition post.)

Comment author: matt 07 December 2009 10:53:46PM 2 points

Googleplex = Google Inc's HQ

googolplex = 10^(10^100)
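For scale: a googol fits comfortably in Python's arbitrary-precision integers, but a googolplex does not, since merely writing out its digits would take about 10^100 bytes.

```python
# A googol is 10^100: a 1 followed by 100 zeros.
googol = 10 ** 100
assert len(str(googol)) == 101

# A googolplex is 10^googol. Computing it is infeasible: its decimal
# representation has ~10^100 digits, far more than any physical memory.
# googolplex = 10 ** googol
```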

Comment author: Blueberry 08 December 2009 12:16:43AM -2 points

It's truly sad how people are now less familiar with the original spelling and meaning of a googol: the first thing we think of is the search engine instead of 10^100.

Comment author: gwern 08 December 2009 06:52:19PM 5 points

Is that really so sad? Googol was named in jest, and I do not think I have ever seen it seriously needed for anything; Google, on the other hand...

Comment author: pengvado 08 December 2009 06:59:36AM 1 point

Assuming they don't make any approximations other than collapse, yes: a classical computer simulating Copenhagen takes fewer arithmetic ops than one simulating MWI. At least until someone in the simulation builds a sufficiently large coherent system (a quantum computer), at which point the simulator has to choose between forbidding it (i.e. breaking the approximation guarantee) and spending exponentially many arithmetic ops.
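A rough sketch of why large coherent systems are the breaking point: a classical simulator tracking a full quantum state over n qubits needs 2^n complex amplitudes, so its memory and arithmetic double with every qubit added.

```python
# Illustrative only; 16 bytes per amplitude assumes one double-precision
# complex number.
def statevector_bytes(n_qubits, bytes_per_amplitude=16):
    return (2 ** n_qubits) * bytes_per_amplitude

assert statevector_bytes(10) == 16_384          # ~16 KB: trivial
assert statevector_bytes(50) == 16 * 2 ** 50    # ~18 petabytes: infeasible
```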

Copenhagen (even in the absence of large coherent subsystems) does not take significantly less memory than MWI: both are in PSPACE.

Otoh, if the simulator is running on quantum-like physics too, then there's no asymptotic difference in arithmetic either. And if you're not going to assume that the simulator's physics is similar to ours, who says it's less rather than more computationally capable?