Today's post, Bell's Theorem: No EPR "Reality" was originally published on 04 May 2008. A summary (taken from the LW wiki):

(Note: This post was designed to be read as a stand-alone, if desired.) Originally, the discoverers of quantum physics thought they had discovered an incomplete description of reality - that there was some deeper physical process they were missing, and this was why they couldn't predict exactly the results of quantum experiments. The math of Bell's Theorem is surprisingly simple, and we walk through it. Bell's Theorem rules out being able to locally predict a single, unique outcome of measurements - ruling out a way that Einstein, Podolsky, and Rosen once defined "reality". This shows how deep implicit philosophical assumptions can go. If worlds can split, so that there is no single unique outcome, then Bell's Theorem is no problem. Bell's Theorem does, however, rule out the idea that quantum physics describes our partial knowledge of a deeper physical state that could locally produce single outcomes - any such description will be inconsistent.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Entangled Photons, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


A terminological correction:

Bell's Theorem itself is agreed-upon academically as an experimental truth.

A theorem is a deductive truth. Only formally-proved mathematical results get to be called "theorems". As such, a "theorem" can never be falsified by experiment.

The confusion is revealed a bit earlier in the post:

"Bell's inequality" is that any theory of hidden local variables implies (1) + (2) >= (3). The experimentally verified fact that (1) + (2) < (3) is a "violation of Bell's inequality". So there are no hidden local variables. QED.

And that's Bell's Theorem.

Eliezer evidently thinks that "Bell's inequality" and "Bell's theorem" are two different things. They're not. The theorem -- the only thing that can be called such -- is the mathematical statement that "any theory of hidden local variables implies (1) + (2) >= (3)". The statement that "there are no hidden local variables [in the world as it actually exists]" is not purely mathematical -- an empirical premise ("the experimentally verified fact that (1) + (2) < (3)") was used to deduce it -- and therefore cannot be labeled a "theorem".

Actually, I lied slightly: there is a subtle difference between "Bell's inequality" and "Bell's theorem". "Bell's inequality" is just the bare formula "A + B >= C" (where A, B, and C are whatever they are in the context), without the accompanying assertion that the inequality actually holds. "Bell's theorem", by contrast, is the statement that if A, B, and C are numbers satisfying [whatever conditions they are asserted to satisfy], then Bell's inequality is true. As such, while it makes sense to speak of Bell's inequality being "violated" (as it is for some triples of numbers), it does not make any logical sense to speak of "violations" of Bell's theorem.
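To make this concrete, here is a minimal sketch in Python (my own illustration; it uses the three polarizer angles 0°, 20°, 40° from Eliezer's post, and the standard quantum prediction P(one passes, other blocked) = sin²(a−b)/2 for the entangled pair). It checks that the inequality holds for every mixture of deterministic local "scripts", while the quantum prediction violates it:

```python
import itertools
import math
import random

# A deterministic local hidden variable gives each photon pair a definite
# pass(True)/block(False) answer at each of the three angles: 2^3 scripts.
ANGLES = [0, 20, 40]
SCRIPTS = list(itertools.product([True, False], repeat=3))

def lhv_probs(weights):
    """P(pass at angle i, block at angle j) under a mixture of scripts."""
    def p(i, j):
        return sum(w for s, w in zip(SCRIPTS, weights) if s[i] and not s[j])
    return p(0, 1), p(1, 2), p(0, 2)   # terms (1), (2), (3) of the post

# Any script with pass@0 and block@40 has either block@20 or pass@20,
# so (1) + (2) >= (3) for every mixture.  Spot-check random mixtures:
random.seed(0)
for _ in range(10_000):
    w = [random.random() for _ in SCRIPTS]
    total = sum(w)
    w = [x / total for x in w]
    p1, p2, p3 = lhv_probs(w)
    assert p1 + p2 >= p3 - 1e-12

# Quantum prediction for the entangled pair measured at angles a and b:
def q(a, b):
    return math.sin(math.radians(a - b)) ** 2 / 2

lhs = q(ANGLES[0], ANGLES[1]) + q(ANGLES[1], ANGLES[2])
rhs = q(ANGLES[0], ANGLES[2])
print(lhs, rhs)   # ~0.117 < ~0.207: the inequality's conclusion is violated
```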

A theorem can never be violated. When people say, for example, "the Pythagorean theorem is violated in spherical geometry", they are merely abusing language out of laziness. What they mean to say is that the conclusion of the Pythagorean theorem is violated. Strictly speaking, the Pythagorean theorem is not the statement that "the square of the hypotenuse is equal to the sum of the squares of the other two sides"; it is, rather, the statement "if a triangle is Euclidean and right-angled, then the square of the hypotenuse is equal to the sum of the squares of the other two sides." The hypotheses of the theorem are not satisfied in spherical geometry in the first place, so spherical geometry does not "falsify" it or present "exceptions" to it. (Indeed, it exemplifies it as much as Euclidean geometry does, via the contrapositive: because the square of the hypotenuse of a spherical triangle does not equal the sum of the squares of the other two sides, we know, by the Pythagorean theorem, that spherical triangles are not Euclidean and right-angled!)


So there are no hidden local variables

This is incorrect.

The violation of Bell's inequalities rules out only theories where the state is completely local. Theories where the state is composed of both global and local variables, such as Bohmian mechanics, are not ruled out.

Is there any place we can see actual data on one of the photon polarization experiments? Not statistics, but actual data? And a probabilistic analysis, a la Jaynes, of the data?

In theory, I'm fine with non-local interactions, but I'm not yet convinced they are necessary. I don't see anything about detector efficiency here, which I believe is key to the reality of what happens. It seems completely natural to me that if a photon had some directional local variable, it would affect the likelihood of detection in a polarizer, depending on the direction of the polarizer.

Jaynes has reservations about Bell's Theorem, and they made a fair amount of sense to me. And in general I find it good policy to trust him on how to properly interpret probabilistic reasoning.

Jaynes paper on EPR and Bell's Theorem: http://bayes.wustl.edu/etj/articles/cmystery.pdf

Jaynes speculations on quantum theory: http://bayes.wustl.edu/etj/articles/scattering.by.free.pdf

Jaynes has reservations about Bell's Theorem, and they made a fair amount of sense to me. And in general I find it good policy to trust him on how to properly interpret probabilistic reasoning.

If you're going to use an authority heuristic, at some point you also have to apply the heuristic "what does pretty much everyone else think?"

My impression is that most people take for granted that Bell was correct, and consider it a done deal. Another impression is that "pretty much everyone else" mistakenly takes ontological randomness as a conceptual given on a macro level, and there has yet to be conclusive evidence (see detector efficiency) that ontological randomness operates on a micro level.

I'm not saying he is right. I'm saying that I haven't seen any better probabilistic analysis of the issue than what I've seen from Jaynes, and the evidence so far doesn't conclusively prove him wrong.

Well, maybe my complaint about authority is just hindsight talking. After all, it's not like entanglement has never again been part of scientific research - quantum computers are made of the stuff. Electrons are just not classical objects.

And I think that, if we treat the universe as based on causality (a la Judea Pearl), the hidden variable route ( P(A | B a b) = P(A | a b) ) really is the only relativistic one, if we avoid many worlds. There are three ways for events to be linked: being directly causally linked (faster than light), both being descendants of a node we know about (hidden variable), or both being ancestors of a node we know about (faster than light).
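For intuition, here's a small numerical sketch of those mechanisms (my own illustration, not from the thread): a common cause correlates two variables but is screened off once you condition on it, while conditioning on a common effect manufactures correlation between independent causes.

```python
import random
random.seed(0)
N = 200_000

# Common cause ("hidden variable" route): lam -> A, lam -> B.
# A and B are independent noisy copies of a shared lam.
world = []
for _ in range(N):
    lam = random.random() < 0.5
    a = lam if random.random() < 0.9 else not lam
    b = lam if random.random() < 0.9 else not lam
    world.append((lam, a, b))

def P(event, given=lambda s: True):
    sel = [s for s in world if given(s)]
    return sum(event(s) for s in sel) / len(sel)

print(P(lambda s: s[1]))                           # P(A)        ~0.50
print(P(lambda s: s[1], lambda s: s[2]))           # P(A|B)      ~0.82: correlated
print(P(lambda s: s[1], lambda s: s[0]))           # P(A|lam)    ~0.90
print(P(lambda s: s[1], lambda s: s[0] and s[2]))  # P(A|B,lam)  ~0.90: screened off

# Common effect (collider): X -> Z <- Y with X, Y independent coins.
pairs = [(random.random() < 0.5, random.random() < 0.5) for _ in range(N)]
print(sum(x == y for x, y in pairs) / N)           # P(X==Y)     ~0.50: independent
z0 = [(x, y) for x, y in pairs if (x ^ y) == 0]    # condition on Z = X XOR Y = 0
print(sum(x == y for x, y in z0) / len(z0))        # P(X==Y|Z=0) = 1.0
```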

Jaynes is misunderstanding the class of hidden-variable theories Bell's theorem rules out: the point is that the hidden variables λ would determine the outcome of measurements, i.e. P(A|aλ) is 0 for certain values of λ and 1 for all other values, and likewise for P(B|bλ), in which case P(A|abλ) must equal P(A|aλ) and P(B|Aabλ) must equal P(B|bλ), so eq. 14 does equal eq. 15. (I had noticed this mistake several years ago, but I didn't know whom to tell about it.)
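That reduction is easy to verify mechanically. Below is a sketch (my own toy model, using the notation of this comment rather than anything from Jaynes's paper): a deterministic local λ under which A does depend on B marginally, as Jaynes insists, yet conditioning on B is inert once λ is given, so the chain-rule factorization collapses to Bell's.

```python
from fractions import Fraction

def outcome(setting, lam):
    """Deterministic toy model: the photon passes (1) iff the polarizer
    is within 45 degrees of the hidden polarization angle lam."""
    return 1 if abs((setting - lam + 90) % 180 - 90) < 45 else 0

a, b = 0, 40                      # the two polarizer settings
lams = range(180)                 # uniform prior over lam
world = [(lam, outcome(a, lam), outcome(b, lam)) for lam in lams]

def P(event, given=lambda w: True):
    sel = [w for w in world if given(w)]
    return Fraction(sum(event(w) for w in sel), len(sel))

# Marginally (lam unknown), A depends on B -- Jaynes is right that,
# in general, P(A|B,a,b) != P(A|a,b):
print(P(lambda w: w[1]))                       # P(A=1)      = 89/180
print(P(lambda w: w[1], lambda w: w[2]))       # P(A=1|B=1)  = 49/89

# But once lam is given, outcomes are determined (0 or 1), so
# conditioning on B changes nothing: P(A|B,a,b,lam) == P(A|a,lam).
for lam in lams:
    if outcome(b, lam):                        # need P(B=1|b,lam) > 0
        lhs = P(lambda w: w[1], lambda w: w[0] == lam and w[2])
        rhs = P(lambda w: w[1], lambda w: w[0] == lam)
        assert lhs == rhs
print("Given a deterministic lam, the chain-rule form reduces to Bell's.")
```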

Good catch! Jaynes does not seem to restrict the local hidden variables models to just the deterministic ones, but allows probabilistic ones, as well. This seems to defeat the purpose of introducing hidden variables to begin with. Or maybe I misunderstand what he means.

My recollection is that Jaynes deals with this point. He discusses in particular a time-varying lambda (or, I'd say, maybe a space-time-varying lambda). As a general proposition, I don't know how you could ever rule out a hidden variable theory with time variation faster than your current ability to measure.

He has another paper where he speculates about the future of quantum theory, talks about phase versus carrier frequencies, and suggests that phase may be real and could determine the outcome of events.

The obvious way to get "random" detection probabilities deterministically would be a time-varying dependence on the interaction of photon polarization, phase of the wavefront, and detector direction.

If you'd like to discuss this in more detail, I'd keep this thread alive for a while, as it's an issue I'd like to clear up for myself.

(I'll look up the paper when I have more time. EDIT - paper put in first post.)

Very cool paper. But I couldn't understand the most important point. Can anyone help? When Jaynes says that (15) is the correct factorization instead of Bell's (14), he gives up something, and I don't understand what it is. What are the spooky conclusions that mainstream physicists wanted to avoid by working with (14) instead of (15)? I understand Jaynes's point about Bell's hidden assumption (1) (bottom of page 12), and I agree with it. But I don't understand what he says about hidden assumption (2).

We haven't yet conclusively demonstrated non-locality without making assumptions about detector behavior (i.e., sophisticated adversarial detectors could replicate all the data observed so far, even in a classical universe, by selectively dropping data points).

The state of affairs may change relatively soon, if we successfully design experiments that violate classical locality even given unreliable detectors. I'd bet that we will.
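A toy version of the adversarial-detector story above is easy to simulate (a sketch of my own, purely illustrative, not a serious model of real detectors): let the source tag each pair with a "scripted" pair of settings and pre-sampled outcomes, and let each detector click only when its local setting matches the tag. The coincidences then reproduce whatever statistics you like -- here, the Bell-violating quantum ones -- from an entirely local process, at the price of a low detection rate (each detector fires on only ~1/3 of the pairs).

```python
import math
import random

random.seed(1)
SETTINGS = [0, 20, 40]

def quantum_joint(a, b):
    """Target statistics for an entangled pair measured at angles a, b:
    P(both pass) = P(both block) = cos^2(a-b)/2, and each of the two
    disagreeing outcomes has probability sin^2(a-b)/2."""
    c = math.cos(math.radians(a - b)) ** 2 / 2
    s = math.sin(math.radians(a - b)) ** 2 / 2
    return {(1, 1): c, (0, 0): c, (1, 0): s, (0, 1): s}

def sample(dist):
    """Draw an outcome pair from a discrete distribution {outcome: prob}."""
    r, acc = random.random(), 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome   # guard against floating-point leftovers

coincidences = {}
for _ in range(300_000):
    # The source locally picks a "scripted" setting pair and pre-samples
    # outcomes for it; all of this rides along as the hidden variable.
    a_star, b_star = random.choice(SETTINGS), random.choice(SETTINGS)
    A, B = sample(quantum_joint(a_star, b_star))
    # Each experimenter independently picks a real setting.
    a, b = random.choice(SETTINGS), random.choice(SETTINGS)
    # Adversarial detectors: click only if the local setting matches the
    # script.  No signaling -- each side inspects only its own setting.
    if a == a_star and b == b_star:
        coincidences.setdefault((a, b), []).append((A, B))

# On the post-selected coincidences the "quantum" statistics appear,
# e.g. P(A passes, B blocked) ~ sin^2(a-b)/2 -- Bell-violating numbers
# from a purely local model.
for (a, b), events in sorted(coincidences.items()):
    p = sum(1 for A, B in events if A == 1 and B == 0) / len(events)
    print(a, b, round(p, 3), round(math.sin(math.radians(a - b))**2 / 2, 3))
```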

That was my understanding as well. Has anyone worked out complications in detection (detector bias) that would be required to account for the data?