Pablo_Stafforini comments on Parapsychology: the control group for science - Less Wrong

62 Post author: AllanCrossman 05 December 2009 10:50PM


Comment author: Pablo_Stafforini 06 December 2009 05:43:39PM *  0 points [-]

The hypothesis that we are living in the Matrix is best understood as a metaphysical hypothesis. The various claims made by parapsychologists, however, are not metaphysical claims about the nature of reality, but "scientific" claims about what goes on in reality. It is therefore unclear why such claims would be more probable on the assumption that the Matrix hypothesis is true.

Comment author: Psychohistorian 07 December 2009 04:08:19AM *  1 point [-]

I am not surprised when a video game character consistently summons balls of fire out of nothingness. I would be absolutely astounded to see an actual person do this. This is because the system of rules governing a video game and the system governing a deterministic universe appear to be very, very different.

If we were living in the matrix, this would not be the case. It would not mean that we are necessarily in the kind of video game where there are psychic powers, but it would provide a very clear mechanism through which psychic powers could act. Such a mechanism does not appear possible in a deterministic universe, or at least in the one we seem to occupy.

Comment author: Vladimir_Nesov 07 December 2009 05:48:27AM *  8 points [-]

The real world is uncaring, unsupervised. Magic is not just about the world being "complex", it's about the world containing mechanisms targeting specifically humans, and understanding the situation much like a human would. Being "deterministic" doesn't preclude anything; it's more of a way of seeing things than the way things are.

Comment author: wedrifid 07 December 2009 04:25:40AM 3 points [-]

This is because the system of rules governing a video game and the system governing a deterministic universe appear to be very, very different.

An artificial dichotomy.

Comment author: Zack_M_Davis 07 December 2009 04:46:22AM 1 point [-]

I don't think so. Video games are specifically programmed to create a particular experience for the user. If something goes over the horizon and won't be needed again, it just doesn't get computed. Whereas the real universe seems to be---just the same physics. Everywhere. No complicated ad hoc programming describing levels or characters or points, or translating keypresses into useful actions---no user input at all, come to think of it.

Comment author: SilasBarta 07 December 2009 07:59:36PM *  3 points [-]

If something goes over the horizon and won't be needed again, it just doesn't get computed. Whereas the real universe seems to be---just the same physics. Everywhere.

Not quite. That's what we assume happens -- justifiably! -- because it would be a far more complicated hypothesis to disbelieve in the implied invisible.

However, failing to see these implied invisibles is not itself independent evidence of universal law, just an inference from an Occamian prior. You would fail to see implied invisibles with equal probability whether or not the laws were fully universal.

Interestingly, I explored the question of whether it's possible, if the universe is a simulation, to shut it down by forcing it to do more and more computational work in order to keep fooling us. But, I argue, it turns out that the 2nd law of thermodynamics implies that no matter what observations observers choose to make, it requires no more storage capacity to continue fooling them.

Comment author: gwern 08 December 2009 04:22:58AM *  2 points [-]

But, I argue, it turns out that the 2nd law of thermodynamics implies that no matter what observations observers choose to make, it requires no more storage capacity to continue fooling them.

I read this, but I'm a little confused. Conceptually, as a closed system, the computational demand of the universe is constant, sure, when I imagine it as something like the Game of Life. Are you assuming that any simulator will be a full and perfect emulator, with no optimizations like caches?

Because if optimizations are applied, then it seems you can expand the necessary power by doing things that defeat the optimizations. Caches are ineffective if you keep generating intricately linked cryptographic junk, etc. One might think that no simulating agent would run a simulator whose worst-case requirements are beyond its abilities; but then, we humans routinely use QuickSort and don't mind our kernels over-committing memory...
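
One way to make that worst case concrete (a sketch with made-up data, not anything from the thread; zlib stands in for whatever compression or caching the simulator uses): a simulator that compresses its state before storing it saves a great deal on regular regions and essentially nothing on incompressible junk.

    import os
    import zlib

    repetitive_region = b"vacuum" * 100_000   # a highly regular patch of world-state
    junk_region = os.urandom(600_000)         # stands in for "cryptographic junk"

    def stored_size(state: bytes) -> int:
        """Bytes the simulator actually needs if it compresses before storing."""
        return len(zlib.compress(state, 9))

    print("repetitive:", len(repetitive_region), "->", stored_size(repetitive_region))
    print("junk:      ", len(junk_region), "->", stored_size(junk_region))
    # The second line shows essentially no savings: worst-case demand approaches
    # the uncompressed size once the inhabitants defeat the optimizations.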

(Incidentally, I made an estimation of my own for how small our substrate could be: http://www.gwern.net/Simulation%20inferences.html . I concluded that the simulating computer could be as small as a Planck cube.)

Comment author: SilasBarta 08 December 2009 05:02:33PM 1 point [-]

Are you assuming that any simulator will be a full and perfect emulator, with no optimizations like caches?

It doesn't rely on that assumption. It's just based on the fact that any time you destroy entropy by forcing some system, from your perspective, to be in fewer possible states, you also allow another system, from your perspective, to be in proportionally more possible states.

The more states something could be in, from your perspective, the less information the simulator has to store to consistently represent it for you.
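
One rough way to picture the bookkeeping, under the simplifying assumption that the simulator only has to commit to memory whatever the observer has pinned down (my formalization with made-up numbers, not Silas's exact math):

    from math import log2

    # Bits the simulator must store about a subsystem = your information about it:
    # log2(total states) - log2(states still possible "from your perspective").
    def bits_to_store(total_states: int, states_possible_to_you: int) -> float:
        return log2(total_states) - log2(states_possible_to_you)

    TOTAL = 2 ** 40   # microstates of each toy subsystem

    # Before: the ball's position is unknown to you, the fuel's chemistry is known.
    before = bits_to_store(TOTAL, TOTAL) + bits_to_store(TOTAL, 1)

    # You measure the ball (its possible states collapse to 1), but the 2nd law
    # says the measurement dumps at least as much entropy elsewhere, so the
    # fuel's possible states grow back up to TOTAL.
    after = bits_to_store(TOTAL, 1) + bits_to_store(TOTAL, TOTAL)

    print(before, after)   # 40.0 40.0 -- no net increase in required storage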

Comment author: Tyrrell_McAllister 08 December 2009 07:29:38PM *  1 point [-]

You make an interesting observation. I'm still trying to think it through, so I might not yet be making sense. But, right now, I have the following difficulty with accepting your argument.

Any simulation has "true" physical laws. These are just the rules that govern how in fact the simulation's algorithm unfolds, including all optimizations, etc.

However, we expect, a priori, the ultimate laws of reality to satisfy certain invariances. For example, perhaps we expect the ultimate laws to work identically at different points in real physical space. The true laws of the simulation might not satisfy such invariances with respect to the simulation. For example, the simulation's laws might not work identically at different points in the simulated physical space. [ETA: Optimization makes this likely. The simulation could evolve in a "chunkier" way far from us than it does close to us.]

So maybe this is how we can define what it means to hide the simulated nature of our universe from us: "Hiding the simulation" means "making our universe appear to us as though its laws satisfy all the expected invariances, even though they don't".

Here's the issue that I hope you address:

I'm convinced by your argument that "any time you destroy entropy by forcing some system, from your perspective, to be in fewer possible states, you also allow another system, from your perspective, to be in proportionally more possible states."

Say that, when I start out, system A could be in any one of the states in some state-set X. Then I learn about system B, and so, as you point out, system A could now be in any one of the states in some larger state-set Y, as far as I know.

But what if the larger state-set Y includes states that do not obey the expected invariances? And what if, as I learn more about the universe, the state-set that A's state must be in grows, all right, but eventually consists almost entirely of states that violate our expected invariances?

Wouldn't that amount to discovering the simulated nature of our universe? To avoid this discovery, wouldn't the simulators have to put more resources into making sure that A's set of possible states includes enough states that obey the expected invariances?

Comment author: SilasBarta 08 December 2009 08:22:32PM *  1 point [-]

Good point -- I've struggled with the same problem, in different terms. Let me know if my statement of the problem matches the point you're making here:

"It's possible to discover, not just particulars about individual systems, but universal laws. These universal laws put a constraint on all future observations, thus reducing the subjective entropy of the universe, without (apparently) needing any corresponding gain of entropy."

It's something I was wondering about when going over the E. T. Jaynes papers and Yudkowsky's Engines of Cognition.

I haven't gotten it resolved in terms of the 2nd law and the "subjective entropy" idea, but I think I know how to resolve it in the context of the simulated-universe question: basically, if the simulation starts out adhering to the invariances that have to be obeyed (even though they might be more than necessary to fool observers), then observers noticing those invariances imposes no additional burden on the simulator.

Though the observers have (apparently) violated the 2nd law -- and this is an area for further research -- the simulator was already expending the computational resources necessary to make the invariances hold. It is an exception to the general principle I derived, in that it's a case where net destruction of entropy requires no additional RAM.

I'm still working on how to resolve the remaining problems, but it shows how discovery of universal physical laws needn't be a problem for the simulator.

Comment author: pengvado 08 December 2009 10:07:34PM 0 points [-]

I'll try to bring your solution back to thermodynamics terms:

The universe always has obeyed, and always will obey, certain invariances, and those are a redundancy in your observations, which (along with any other redundancy that could possibly be derived) is already taken into account when computing information-theoretic entropy. If you had plenty of data already to derive the invariance but just hadn't previously noticed it, that lack of logical omniscience is why the 2nd law is an inequality. Including the invariance in your future predictions isn't a net reduction in entropy. It just removes some of the slack between the exact phase-volume-preserving transforms of physics and the upper bounds that a computationally bounded agent has to use.

Comment author: Tyrrell_McAllister 08 December 2009 09:32:09PM 0 points [-]

Your restatement looks exactly right, and your solution would resolve the issue I raised.

One question is, how much optimization can the simulators do if the true laws are as invariant as they "ought to be"? For example, if the universe has to evolve according to the same rules everywhere, that would seem to keep it from evolving in a chunkier way far away from us, which closes off a potential way to save on computation.

Comment author: gwern 08 December 2009 07:00:49PM 1 point [-]

I vaguely see what you're getting at - every observation or interaction forces the simulator to calculate what you see, but also allows it to cheat in other areas. But I'm not sure how exactly this would work on the level of bits and programs?

Comment author: Vladimir_Nesov 08 December 2009 08:13:42PM 0 points [-]

This is a very conceptually interesting question.

Comment author: SilasBarta 08 December 2009 07:27:25PM *  0 points [-]

Bah! Implementation issue! :-P

At the level you're asking about (if I understand you correctly), the program can just reallocate the memory for whatever gained entropy, to whatever lost entropy.

As in the comments section of my blog: if you learn the location of a ball, the program now has to store it as being in a definite location, but I also had to power my brain to learn that, so the program doesn't have to be as precise in storing information about the chemical bonds, which were moved to a higher-entropy state.
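
A toy version of that reallocation, with made-up subsystems and bit counts (a sketch of the idea, not Silas's implementation): a fixed pool of bits shifts between subsystems as the observer's knowledge about them changes.

    from dataclasses import dataclass

    RAM_BUDGET = 64   # bits the simulator has committed to these two subsystems

    @dataclass
    class Subsystem:
        name: str
        stored_bits: int   # precision the simulator actually holds in memory

    ball = Subsystem("ball position", stored_bits=0)           # unobserved, stored lazily
    bonds = Subsystem("fuel chemical bonds", stored_bits=64)   # low entropy, pinned down

    def observe(gainer: Subsystem, loser: Subsystem, bits: int) -> None:
        # The observer learns `bits` about `gainer`; by the 2nd law at least that
        # much entropy appears in `loser`, whose stored precision can be released.
        gainer.stored_bits += bits
        loser.stored_bits -= bits

    observe(ball, bonds, 32)   # learn the ball's location to 32 bits of precision
    assert ball.stored_bits + bonds.stored_bits == RAM_BUDGET   # same total RAM
    print(ball.stored_bits, bonds.stored_bits)                  # 32 32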

Comment author: gwern 08 December 2009 07:37:08PM 0 points [-]

Spoken like a true theoretician. But it's hard to see an implementation that is optimal in exploiting this memory bound.

I mean, imagine that we have a pocket universe where we can have many numbers (particles?) which all must add up to 1000, and we have your normal programming types like bit, byte/int, integer etc.

If we start out with a single 1000, and then the 'laws of physics' begin dividing it by 10 (giving us ten 100s), how is the simulator going to be smart enough to take its fixed section of RAM and rewrite the single large 1000 integer into ten smaller ints, and so on down to 1000 1s, which could be single bits?

Is there any representation of the universe's state which achieves these tricks automatically, or does the simulation really just have to include all sorts of conditionals like 'if (changed? x), then if x > 128, convert x Integer; x <= 128 && > 1, convert x int; else convert x bit' in order to preserve the constant-memory usage?
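
For what it's worth, one representation does achieve the trick automatically in this toy universe, assuming its state is an ordered list of positive integers summing to N = 1000 (my construction, not from the thread): a composition of N corresponds to choosing which of the N-1 "gaps" between N unit particles carry a cut, so every state fits in exactly 999 bits, from the single value [1000] down to a thousand 1s, with no conditionals at all.

    N = 1000

    def encode(parts: list) -> int:
        """Pack a composition of N into a 999-bit integer (bit i = cut after unit i+1)."""
        assert all(p > 0 for p in parts) and sum(parts) == N
        mask, pos = 0, 0
        for p in parts[:-1]:          # no cut after the final part
            pos += p
            mask |= 1 << (pos - 1)
        return mask

    def decode(mask: int) -> list:
        parts, last = [], 0
        for i in range(N - 1):
            if mask >> i & 1:
                parts.append(i + 1 - last)
                last = i + 1
        parts.append(N - last)
        return parts

    assert decode(encode([1000])) == [1000]
    assert decode(encode([100] * 10)) == [100] * 10
    assert decode(encode([1] * 1000)) == [1] * 1000
    # Every state above occupies the same 999 bits: the re-encoding happens for free.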

Comment author: Jack 07 December 2009 09:30:18PM *  2 points [-]

When I was first introduced to quantum mechanics my professor taught us the Copenhagen Interpretation. I was immediately reminded of occasional moments in video games where features of a room aren't run until the player gets to the room. It seemed to me that only collapsing the wave function when it interacted with a particular kind of physical system (or a conscious system!) would be a really good way to conserve computing power, and that it seemed like the kind of hack programmers in a fully Newtonian universe might use to approximate their universe without having to calculate the trajectories of a googolplex (ed.) subatomic particles.

Can anyone tell me if this actually would save computing power/memory?

Comment author: SilasBarta 07 December 2009 09:48:49PM *  2 points [-]

The answer basically comes down to the issue of saving on RAM vs. saving on ROM. (RAM = amount of memory needed to implement the algorithm, ROM = amount of memory needed to describe the algorithm.)

Video game programmers have to care about RAM, while the universe (in its capacity as a simulator) does not. That's why programmers generate only what they have to, while the universe can afford to just compute everything.
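
A small illustration of the distinction, using a made-up one-line "law of physics" (an elementary cellular automaton) rather than anything from the post: the rule's description stays tiny while the running state scales with the size of the simulated world.

    def step(cells: list, rule: int = 110) -> list:
        """Apply one tick of the rule to every cell (wrap-around edges)."""
        n = len(cells)
        return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
                for i in range(n)]

    # ROM: the rule above is a few dozen bytes, whatever the universe's size.
    # RAM: the running state is one bit per cell -- it scales with the universe.
    world = [0] * 9999 + [1]           # a 10,000-cell universe
    for _ in range(5):
        world = step(world)
    print(sum(world))                  # a handful of live cells after 5 ticks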

However, I asked the same question, which is what led to the blog post linked above, where I concluded that you wouldn't save memory by only doing the computations for things observers look at: first, because they check for consistency and come back to verify that the laws of physics still work, forcing you to generate the object twice.

But more importantly (as I mentioned) because the 2nd law of thermodynamics means that any time you gain information about something in the universe, you necessarily lose just as much in the process of making that observation (for a human, it takes the form of e.g. waste heat, higher-entropy decomposition of fuels). So by learning about the universe through observation, you simultaneously relieve it of having to store at least as much information (about e.g. subatomic particles).

(This argument has not been peer-reviewed, but was based on Yudkowsky's Engines of Cognition post.)

Comment author: matt 07 December 2009 10:53:46PM 2 points [-]

googleplex = Google Inc's HQ

googolplex = 10^(10^100)

Comment author: Blueberry 08 December 2009 12:16:43AM -2 points [-]

It's truly sad now how people are less familiar with the original spelling and meaning of a googol. Now the first thing we think of is the search engine, instead of 10^100.

Comment author: gwern 08 December 2009 06:52:19PM 5 points [-]

Is that really so sad? googol was named in jest and I do not think I have ever seen it seriously needed for anything; Google on the other hand...

Comment author: pengvado 08 December 2009 06:59:36AM 1 point [-]

Assuming they don't make any approximations other than collapse, yes a classical computer simulating Copenhagen takes fewer arithmetic ops than simulating MWI. At least until someone in the simulation builds a sufficiently large coherent system (quantum computer), at which point the simulator has to choose between forbidding it (i.e. breaking the approximation guarantee) or spending exponentially many arithmetic ops.

Copenhagen (even in the absence of large coherent subsystems) does not take significantly less memory than MWI: both are in PSPACE.

Otoh, if the simulator is running on quantum-like physics too, then there's no asymptotic difference in arithmetic either. And if you're not going to assume that the simulator's physics is similar to ours, who says it's less rather than more computationally capable?
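
A back-of-envelope sketch of the arithmetic point, for a naive state-vector simulator with made-up numbers (an illustration only, not pengvado's PSPACE argument): collapse keeps coherent patches small and cheap, until someone inside builds a large quantum computer and the per-patch cost grows like 2^k.

    def amplitudes_tracked(total_qubits: int, largest_coherent_patch: int) -> int:
        """Complex amplitudes a naive simulator keeps if no patch exceeds the given size."""
        patches = total_qubits // largest_coherent_patch
        return patches * 2 ** largest_coherent_patch

    n = 1000
    for k in (2, 10, 50, 1000):   # 1000 = the whole system stays coherent (MWI-style)
        print(f"coherent patches of {k:4d} qubits -> {amplitudes_tracked(n, k):.3e} amplitudes")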

Comment author: Baughn 07 December 2009 07:43:44PM 3 points [-]

If you implemented the laws of physics on a computer, using lazy evaluation, then whatever is "over the horizon" from the observer process(es) would not be computed.

However, this would not in the least be observable from inside the system. If the observer moved to observe you, your past would be "retroactively" computed.

I'm not claiming this is very likely to be the case, since at the very least it requires an additional agent - the observer process - to cause anything to happen at all, but lazy evaluation isn't some weird ad-hoc concept; it's a basic concept in computer science that also happens to make programs shorter, a lot of the time.

Hopefully not so much shorter that a universe using lazy evaluation, with one random point in space somewhere as the observer, is less complex than one using strict evaluation. That... would be impossible for us to detect, of course, but I believe it'd still have consequences.
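
A minimal sketch of lazy, memoized evaluation, assuming a toy world generated deterministically from a seed (the function and seed are made up): regions are only computed when first observed, and the answer is cached so later checks stay consistent, which is why nothing is observable from inside.

    import functools

    @functools.lru_cache(maxsize=None)
    def region(x: int, y: int) -> int:
        """Deterministic 'physics' for one region, run only on demand."""
        return hash((x, y, "world-seed")) % 256

    # Nothing beyond the observer's horizon has been computed yet...
    print(region(3, 4))    # ...until it is looked at; the result is then fixed.
    print(region(3, 4))    # same value: the retroactive computation is consistent.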

Comment author: NancyLebovitz 07 December 2009 10:58:03AM 2 points [-]

If the universe we're living in is a work of art or a game, it's made for minds with much greater processing power than we've got. It isn't obvious that they'd be satisfied with something as crude as a video game.

Comment author: Baughn 07 December 2009 07:46:07PM 4 points [-]

How about a video game where you attempt to control a pre-singularity global civilization by directly playing a few thousand randomly selected humans simultaneously, while not letting this fact be noticed by the NPCs?

It's interesting to wonder what sort of games post-humans might play, though I hope it won't be anything quite that ethically objectionable.

Comment author: wedrifid 08 December 2009 01:47:50AM *  1 point [-]

It's interesting to wonder what sort of games post-humans might play, though I hope it won't be anything quite that ethically objectionable.

Or, from the perspective of a pre-post-human, quite that dull. If I am going to play that kind of sim I'm going to pick the 'elves' faction.

Comment author: Baughn 10 December 2009 12:35:03PM *  1 point [-]

Considering that there exist fork-lift simulation games, I hesitate to claim that anything is too dull to be made.

Comment author: wedrifid 10 December 2009 12:54:58PM 0 points [-]

Considering that there exist fork-lift simulation games, I hesitate to claim that anything is too dull to be made.

You're serious? That scares me.

Comment author: Baughn 10 December 2009 12:56:09PM 0 points [-]

I think it was originally meant for training, but yes. People play it. As a game.

http://www.youtube.com/watch?v=HIVFjtZzDr8

Comment author: Lightwave 08 December 2009 09:35:49AM 1 point [-]

It could be that it was the elves who picked the 'humans' faction.

Comment deleted 07 December 2009 05:16:20AM [-]
Comment author: Pavitra 07 December 2009 05:22:15AM 0 points [-]

You mean, besides the predictive power of the mathematical formalizations of Occam's Razor, as opposed to a linguistic or pathetic formulation?

The universe looks very falsifiably like a computer program.

Comment author: Psychohistorian 07 December 2009 05:20:12AM 0 points [-]

If you can understand how the two are truly the same, you are far wiser than anyone I've ever met, and I would very much like to subscribe to your newsletter. I hope the first issue explains how this dichotomy is invalid.

Comment author: wedrifid 07 December 2009 05:37:59AM 3 points [-]

A video game can be deterministic or not in the same way any other kind of universe can. "Video game" vs "deterministic" is just a silly comparison. I don't know what word to use in place of 'deterministic', I just don't think that one is the right one.

Comment author: Blueberry 07 December 2009 06:40:08AM 2 points [-]

I'm thinking "algorithmic". That is, the universe, or a video game, follows a certain algorithm to determine what happens next, whether the algorithm is the laws of physics or a computer program. Algorithms aren't necessarily deterministic: we could have a step for "generate a truly random (quantum) number".

Comment author: Jack 06 December 2009 07:58:20PM 0 points [-]

Huh? "Metaphysics" refers to an incredibly wide variety of claims. But I'd say that metaphysics tries to answer questions about reality that aren't the kind of questions that can be answered by experimental science. Since we lack a good method for answering these questions, our confidence in metaphysical claims is usually substantially lower than it is for empirical claims. But why should we think all metaphysical questions are radically different from scientific questions, such that the answer to one can't influence our estimations of the other? Offhand I can think of a number of metaphysical hypotheses that have been greatly affected by scientific knowledge, and vice versa -- materialism, substance dualism, determinism and indeterminism, free will, eternalism and other philosophies of time, etc.

In this case it seems rather obvious that if we are "living in the Matrix" the probability that the basic laws of physics are complicated rather than simple is dramatically higher.

Comment author: Pablo_Stafforini 06 December 2009 11:25:32PM 0 points [-]

I never denied that our assessment of an empirical claim may be influenced by the metaphysical views we hold. I simply noted that, once the Matrix hypothesis is understood as a metaphysical hypothesis, it is unclear why believing that we live in the Matrix should increase our credence in the various claims of parapsychology.

Comment author: Jack 06 December 2009 11:45:37PM 0 points [-]

I have no idea what your argument actually is. Why does it matter whether or not the Matrix hypothesis is a metaphysical hypothesis?

Comment author: Pablo_Stafforini 07 December 2009 12:24:17AM *  2 points [-]

My original comment was a reply to Mitchell Porter, who suggested that parapsychology would somehow receive support from the Matrix hypothesis. I replied by saying that this would not be true, or at least not clearly, if that hypothesis is understood as a claim about the ultimate nature of reality.

To take another example, suppose someone argued Berkeleyan idealists should be more open to psychic phenomena, since we are all ideas in the mind of God. I would reply that this is not so, since the fact that the world is ultimately made of mind has in itself no implications about whether certain kinds of mental phenomena take place within that world.

Comment author: Jack 07 December 2009 01:04:20AM 1 point [-]

The ability of certain collections of atoms to communicate large amounts of information to other collections of atoms over vast distances without there being any detectable emissions is an incredibly complex power. Complex entities are a priori improbable compared to simple entities. You need some kind of creation mechanism to make them probable... with biological systems we have evolution, with pocket watches and jet planes we have human inventors. If you accept a metaphysical hypothesis that involves an intelligence creating the universe -- programmers or God -- you have a mechanism for making complex entities probable. That is why the Matrix hypothesis makes psychic phenomena more likely.

Comment author: wedrifid 07 December 2009 01:39:59AM *  2 points [-]

Complex entities are a priori improbable compared to simple entities.

This remains the case no matter what the universe is made of. All evidence suggests that psychic mechanisms are not available to us in our current mode of existence, whatever that may be. That evidence doesn't change until you get more.

At least, that seems to me to be the point Ben is making.

Comment author: Jack 07 December 2009 01:58:16AM *  1 point [-]

Complex entities are a priori improbable compared to simple entities.

This remains the case no matter what the universe is made of.

Yes, but not no matter how the universe was created. The Matrix hypothesis includes a claim that the universe was created by some intelligence and that makes psychic phenomena substantially more plausible.

That doesn't mean all religious people have to believe in psychic phenomena or even that they should. If there is no evidence for psychic phenomena then there is not evidence for psychic phenomena. But if you think the universe was created claims of psychic phenomena should be less absurd on their face.

Comment author: wedrifid 07 December 2009 02:16:26AM *  1 point [-]

Yes, but not no matter how the universe was created. The Matrix hypothesis includes a claim that the universe was created by some intelligence and that makes psychic phenomena substantially more plausible.

Another position that could be taken is "the evidence suggests that if we are living in a matrix scenario then it is probably one of the ones without matrix psychic powers". That is, assuming rational reasoning without granting a counter-factual premise. The evidence can then be considered to have a fixed effect on the probability of psychic powers. Whether it causes you to also lower your probability for a Matrix or to alter your description of probable Matrix type would be considered immaterial.

Again, this is just my impression of Ben's position. He'll correct me if I'm wrong. For my part I don't care about Matrixes (especially No. 2. I walked out of that one in disgust! Actually, I do care about the 'dodge this!' line. It's infuriating.)

Comment author: Jack 07 December 2009 05:23:32AM *  0 points [-]

This all started when Mitchell Porter responded to Blueberry's claim that we know, intuitively, that psychic phenomena are just not possible. I'm not quite sure I know just what Blueberry was talking about. But his estimate of the probability of psychic phenomena was zero, and not just because of parapsychology's failure to provide convincing evidence but because of our understanding of the world. Mitchell provides Blueberry with a hypothesis that is consistent with what we know about the world but under which the existence of psychic phenomena is not prohibitively improbable.

None of this changes the fact that finding evidence of psychic phenomena should cause us to revise our probability of their existence up, and that not finding evidence should cause us to revise it down. But if your probability is zero, and especially if your probability is zero for reasons other than the failure of parapsychology, then a hypothesis with P > 0 under which P(psi) > 0 looks like information you need to update on.

Ben says it isn't clear why this is so. Well, creation makes complex, unselected entities more probable. But maybe I should wait to have this argument with him.

As far as the movie goes, it is all downhill right after Neo wakes up in the gooey pink tub and sees all the other people hooked into the Matrix. The whole movie should have taken place in the Matrix and kept us in the dark about what it really was until the very end. Would have been way cooler that way.