# [SEQ RERUN] Bell's Theorem: No EPR "Reality"

25 April 2012 04:37AM

Today's post, Bell's Theorem: No EPR "Reality" was originally published on 04 May 2008. A summary (taken from the LW wiki):

(Note: This post was designed to be read as a stand-alone, if desired.) Originally, the discoverers of quantum physics thought they had discovered an incomplete description of reality - that there was some deeper physical process they were missing, and this was why they couldn't predict exactly the results of quantum experiments. The math of Bell's Theorem is surprisingly simple, and we walk through it. Bell's Theorem rules out being able to locally predict a single, unique outcome of measurements - ruling out a way that Einstein, Podolsky, and Rosen once defined "reality". This shows how deep implicit philosophical assumptions can go. If worlds can split, so that there is no single unique outcome, then Bell's Theorem is no problem. Bell's Theorem does, however, rule out the idea that quantum physics describes our partial knowledge of a deeper physical state that could locally produce single outcomes - any such description will be inconsistent.

Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Entangled Photons, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Comment author: 26 April 2012 01:43:59AM, 5 points

A terminological correction:

Bell's Theorem itself is agreed-upon academically as an experimental truth.

A theorem is a deductive truth. Only formally-proved mathematical results get to be called "theorems". As such, a "theorem" can never be falsified by experiment.

The confusion is revealed a bit earlier in the post:

"Bell's inequality" is that any theory of hidden local variables implies (1) + (2) >= (3). The experimentally verified fact that (1) + (2) < (3) is a "violation of Bell's inequality". So there are no hidden local variables. QED.

And that's Bell's Theorem.

Eliezer evidently thinks that "Bell's inequality" and "Bell's theorem" are two different things. They're not. The theorem -- the only thing that can be called such -- is the mathematical statement that "any theory of hidden local variables implies (1) + (2) >= (3)". The statement that "there are no hidden local variables [in the world as it actually exists]" is not purely mathematical -- an empirical premise ("the experimentally verified fact that (1) + (2) < (3)") was used to deduce it -- and therefore cannot be labeled a "theorem".
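As a numeric check of the inequality being discussed (assuming the 0°, 20°, 40° polarizer angles used in the original post, and the standard quantum prediction that entangled photons measured at relative angle θ mismatch with probability sin²θ):

```python
import math

def mismatch(theta_deg):
    """Quantum-predicted probability that entangled photons give
    different results at polarizers separated by theta_deg degrees."""
    return math.sin(math.radians(theta_deg)) ** 2

# Hidden-local-variable bound: (1) + (2) >= (3), where
# (1) = mismatch at 0/20, (2) = mismatch at 20/40, (3) = mismatch at 0/40.
lhs = mismatch(20) + mismatch(20)
rhs = mismatch(40)
print(f"(1) + (2) = {lhs:.3f}")          # ~0.234
print(f"(3)       = {rhs:.3f}")          # ~0.413
print("inequality violated:", lhs < rhs)  # True
```

So the quantum prediction (and, experimentally, the measured rates) falls on the wrong side of the bound that any local hidden-variable theory must satisfy.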

Actually, I lied slightly: there is a subtle difference between "Bell's inequality" and "Bell's theorem". "Bell's inequality" is just the bare formula "A + B >= C" (where A, B, and C are whatever they are in the context), without the accompanying assertion that the inequality actually holds. "Bell's theorem", by contrast, is the statement that if A, B, and C are numbers satisfying [whatever conditions they are asserted to satisfy], then Bell's inequality is true. As such, while it makes sense to speak of Bell's inequality being "violated" (as it is for some triples of numbers), it does not make any logical sense to speak of "violations" of Bell's theorem.

A theorem can never be violated. When people say, for example, "the Pythagorean theorem is violated in spherical geometry", they are merely abusing language out of laziness. What they mean to say is that the conclusion of the Pythagorean theorem is violated. Strictly speaking, the Pythagorean theorem is not the statement that "the square of the hypotenuse is equal to the sum of the squares of the other two sides"; it is, rather, the statement "if a triangle is Euclidean and right-angled, then the square of the hypotenuse is equal to the sum of the squares of the other two sides." The hypotheses of the theorem are not satisfied in spherical geometry in the first place, so spherical geometry does not "falsify" it or present "exceptions" to it. (Indeed, it exemplifies it as much as Euclidean geometry does, via the contrapositive: because the square of the hypotenuse of a spherical triangle does not equal the sum of the squares of the other two sides, we know, by the Pythagorean theorem, that spherical triangles are not Euclidean and right-angled!)
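The contrapositive point can be stated formally. A minimal sketch in Lean, treating E (the triangle is Euclidean), R (it is right-angled), and P (the Pythagorean conclusion holds) as abstract propositions:

```lean
-- If the theorem is (E ∧ R) → P, then observing ¬P never "violates"
-- the theorem; it simply refutes the hypotheses, giving ¬(E ∧ R).
example (E R P : Prop) (h : E ∧ R → P) (hp : ¬P) : ¬(E ∧ R) :=
  fun her => hp (h her)
```

The same schema applies to Bell: the experimental violation of the inequality refutes the hypothesis (local hidden variables), not the theorem.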

Comment author: 25 August 2012 04:33:16PM, 0 points

So there are no hidden local variables

This is incorrect.

The violation of Bell's inequalities rules out only theories where the state is completely local. Theories where the state comprises both global and local variables, such as Bohmian mechanics, are not ruled out.

Comment author: 25 April 2012 06:24:26AM, 3 points

Is there any place we can see actual data on one of the photon polarization experiments? Not statistics, but actual data? And a probabilistic analysis, a la Jaynes, of the data?

In theory, I'm fine with non-local interactions, but I'm not yet convinced they are necessary. I don't see anything about detector efficiency here, which I believe is key to the reality of what happens. It seems completely natural to me that if a photon had some directional local variable, it would affect the likelihood of detection in a polarizer depending on the direction of the polarizer.

Jaynes has reservations about Bell's Theorem, and they made a fair amount of sense to me. And in general I find it good policy to trust him on how to properly interpret probabilistic reasoning.

Jaynes' paper on EPR and Bell's Theorem: http://bayes.wustl.edu/etj/articles/cmystery.pdf

Jaynes' speculations on quantum theory: http://bayes.wustl.edu/etj/articles/scattering.by.free.pdf

Comment author: 25 April 2012 01:13:05PM, 2 points

Jaynes is misunderstanding the class of hidden-variable theories Bell's theorem rules out: the point is that the hidden variables λ would determine the outcome of measurements, i.e. P(A|aλ) is 0 for certain values of λ and 1 for all others, and likewise for P(B|bλ); in that case P(A|abλ) must equal P(A|aλ), P(B|Aabλ) must equal P(B|bλ), and eq. 14 does equal eq. 15. (I had noticed this mistake several years ago, but I didn't know whom to tell about it.)

Comment author: 25 April 2012 06:51:42PM, 1 point

Good catch! Jaynes does not seem to restrict the local hidden variables models to just the deterministic ones, but allows probabilistic ones, as well. This seems to defeat the purpose of introducing hidden variables to begin with. Or maybe I misunderstand what he means.

Comment author: 25 April 2012 06:22:43PM, 0 points

My recollection is that Jaynes deals with this point. He discusses in particular time-varying lambda (or, I'd say, maybe space-time-varying lambda). As a general proposition, I don't know how you could ever rule out a hidden-variable theory with time variation faster than your current ability to measure.

He has another paper, where he speculates about the future of quantum theory, and talks about phase versus carrier frequencies, and suggests that phase may be real and could determine the outcome of events.

The obvious way to get "random" detection probability deterministically would be a time-varying dependence on the interaction of photon polarization, phase of the wavefront, and detector direction.

If you'd like to discuss this in more detail, I'd keep this thread alive for a while, as it's an issue I'd like to clear up for myself.

(I'll look up the paper when I have more time. EDIT - paper put in first post.)

Comment author: 25 April 2012 11:35:15AM, 2 points

Jaynes has reservations about Bell's Theorem, and they made a fair amount of sense to me. And in general I find it good policy to trust him on how to properly interpret probabilistic reasoning.

If you're going to use an authority heuristic, at some point you also have to apply the heuristic "what does pretty much everyone else think?"

Comment author: 25 April 2012 06:42:01PM, 1 point

My impression is that most people take for granted that Bell was correct, and consider it a done deal. Another impression is that "pretty much everyone else" mistakenly takes ontological randomness as a conceptual given on a macro level, and there has yet to be conclusive evidence (see detector efficiency) that ontological randomness operates on a micro level.

I'm not saying he is right. I'm saying that I haven't seen any better probabilistic analysis of the issue than what I've seen from Jaynes, and the evidence so far doesn't conclusively prove him wrong.

Comment author: 26 April 2012 12:55:47AM, 1 point

Well, maybe my complaint about authority is just hindsight talking. After all, it's not like entanglement has never again been part of scientific research - quantum computers are made of the stuff. Electrons are just not classical objects.

And I think that, if we treat the universe as based on causality (a la Judea Pearl), the hidden-variable route (P(A|Bab) = P(A|ab)) really is the only relativistic one, if we avoid many worlds. There are three ways for events to be linked: directly causally linked (faster than light), both descendants of a node we know about (hidden variable), or both ancestors of a node we know about (faster than light).

Comment author: 25 April 2012 08:56:32AM, 1 point

We haven't yet conclusively demonstrated non-locality without making assumptions about detector behavior (i.e., sophisticated adversarial detectors could replicate all data observed so far, even in a classical universe, by selectively dropping data points).

The state of affairs may change relatively soon, if we successfully design experiments that violate classical locality even given unreliable detectors. I'd bet that we will.

Comment author: 25 April 2012 06:29:32PM, 1 point

That was my understanding as well. Has anyone worked out complications in detection (detector bias) that would be required to account for the data?

Comment author: 25 April 2012 09:35:01AM, 0 points

Very cool paper. But I couldn't understand the most important point. Can anyone help? When Jaynes says that (15) is the correct factorization instead of Bell's (14), he gives up something, and I don't understand what it is. What are the spooky conclusions that mainstream physicists wanted to avoid by working with (14) instead of (15)? I understand Jaynes' point about Bell's hidden assumptions (1) (bottom of page 12), and I agree with it. But I don't understand what he says about hidden assumption (2).

Comment author: 25 April 2012 02:51:24PM, -3 points

It's really simple. The hidden variables are not local. General Relativity does not apply to particles below a certain size. Can you create a logically consistent belief set such that the FTL particles are not FTL and really just exist in multiple states at once? Yes.

You can also say that on 4/25/12, up is down and down is up so I fell up and couldn't get back down again.

I.e., there are infinitely many labeling systems for every set of observations. The minimal set has the least computational cost to consider, and is thus easier for people to process. Some people, however - tribals, to be specific - are more interested in protecting legacies than in using the computationally cheaper belief set. The cost is a reduced frequency of new inspirations of understanding.

Comment author: 25 April 2012 05:32:14PM, 4 points

Could you unpack that a little more? It sounds like you're saying that 'some people' are unfairly discounting the possibility that QM is incomplete and locality is violated, for reasons that are not logically required. Is that accurate?

If so, I would like to point out that computational cheapness is not a good prior. It's vastly cheaper computationally to believe that our solar system is the only one and the other dots are simulated, coarse-grained, on a thin shell surrounding its outside. It simplifies the universe to a mind-boggling degree for this to be the case. Indeed, we should not stop there. It is best if we get rid of the interior of the sun, the interior of the earth, the interior of every rock, trees falling in the forest, people we don't know... people we do know... and replace our interactions with them with simulacra that make stuff up and just provide enough to maintain a thin veneer of plausibility.

The rule set to implement such a world is HUGE, but the data and computational complexity is enough smaller to make up for it.

Don't you think?

Comment author: 01 May 2012 11:05:26PM, 0 points

However, you've no evidence that you're not a Boltzmann brain. You choose to accept on faith that you are not and, desiring to be consistent and even-handed, you further choose to accept on faith that the entire visible universe is just as complex as it seems to be (which would likely be false if e.g. we're in a simulation).

You point out that adopting such priors requires biting an unpleasant bullet. This is not a reason for someone not to adopt it and indeed bite the bullet. The real reason is purely psychological: people don't want to accept a Boltzmann prior, they're not built that way.

Of course I write this from the POV of someone who does not accept the Boltzmann prior. From the POV of someone who does, time itself does not properly exist - or at least they always expect, with overwhelming probability, to cease coherently thinking within the next few seconds - so an explanation based on psychology is problematic, since psychology takes time to happen in a brain...

Comment author: 27 April 2012 01:25:51AM, -1 points

The cheapest approach is to fail to differentiate between different labeling systems that conform to all known observations. In this way, you stick to just the observations themselves.

The conventional interpretation of the Bell experiments violates this by implying c as a universal speed barrier. There is no evidence that such a barrier applies to things we have no experience of.

Comment author: 30 April 2012 01:19:57AM, 0 points

I have no wish to defend the 'standard' interpretation, whatever that is - but if you stick just to the observations themselves and provide no additional interpretation, then you are passing up an opportunity for massive compaction by way of explanation.

Moreover, supposing that the c limit only applies to the things we can see implies adding rules that go very far from sticking just to the observations themselves.

Comment author: 25 April 2012 06:31:51PM, 3 points

It's really simple. The hidden variables are not local. General Relativity does not apply in the case of the particles below a certain size.

I assume that this is your personal model, given the lack of references. Feel free to flesh it out so that it makes new, quantifiable, testable predictions.

Some people however, tribals to be specific, are more interested in protecting legacies than they are with using the computationally cheaper belief set. The cost is reduced frequency of new inspirations of understanding.