In general, good explanation. You've infected me with the Born virus though, and now I cannot rest or be sane until I actually know the correct explanation. Well, I can sort of rest, but I wake up still frustrated at my lack of knowing, and so far unable to work it out on my own. :)
Aaanyways, since you're on the subject of some of the sorts of entanglement that "look weird" from way up here in classical illusion land, mind talking a bit about quantum teleportation?
Psy-Kosh, when you work it out tell the rest of humanity :)
I do find it intuitive that 'worlds' would interfere with each other so that if one found a photon in an unusual situation then the world/alternate photons would 'resist' that in some way. But I don't know enough of the details to propose a theory that I would not be confident could be shot down.
I don't mind non-local formulas as long as they don't allow the sending of faster-than-light signals by humans. I always kind of want the speed of light to be like the speed of a glider in the Game of Life.
But you can send signals faster than the glider. They only move at c/4. There are spaceships that can move at c/2, and fuses and wicks that can move at c.
Unlike our universe, the refractive index of non-vacuum parts of lifespace is less than 1 wrt vacuum. c/2 is the orthogonal speed of light in vacuum, and c/4 is the diagonal speed of light in vacuum.
Olivier Costa de Beauregard claims to have found an interpretation of the EPR paradox (implying retro-causality, that is, with the information going into the past and then into the future, doing a kind of temporal zig-zag) that is compatible with Einstein's science. Have you dug into that? Thanks, Chris Masse
I think you read something that left out something; Bell's Theorem disproved "neo-realism," which is the idea that there was a classical-physics explanation, i.e., with real particles with real properties. It's the model EPR was trying to assert over the Copenhagen interpretation - and that, indeed, was its only purpose, so I find it odd that you bring that thought experiment up out of the context of its intent.
Well, Everett's Many-Worlds actually re-permits classical physics within its confines, and hence real particles, as do other superdimensional interpretations - within his model, you're still permitted all the trappings of classical physics. (They break an assumption of normality in Bell's Theorem, namely, that there is only one universe, or in the case of superdimensionality, that the universe doesn't extend in other directions we can only detect abstractly.)
Chris, that possibility has been mooted several times, but no-one has ever made it work in detail, in a way that truly eliminates the mystery. For example, one might hope to show that the probability amplitude framework derives somehow from ordinary conditional probability in a temporally bidirectional framework (simultaneously conditioning upon events in the past and in the future, perhaps). But this has not been done. For a while I thought John Cramer's transactional interpretation might have achieved this, but if you look at his technical work, he's still employing the same sum-over-histories framework complete with complex numbers; the only difference is that he uses the time-symmetric action of Feynman and Wheeler to derive the amplitudes.
A lesser-known example is Mark Hadley, who wants to derive quantum mechanics from classical general relativity by way of microscopic closed timelike curves. He has an argument that this produces the qualitative features of quantum mechanics (such as incompatible observables) and that this in turn will necessitate the specific dynamical form of the theory. Certainly, if you imagine CTCs showing up at a constant rate per unit space-time volume, there would be scope for De Beauregard's zigzag causality to be taking place. But I think something's missing from Hadley's derivation, though I'd have to revisit it to be sure.
Late comment; I was on vacation for a week, and am still catching up on this deep QM thread.
Very nice explanation of Bell's inequality. For the first time I'm fully grokking how hidden variables are disproved. (I have that "aha" that doesn't go away when I stop thinking about it for five seconds.) On my first attempt to figure out QM, via Penrose, I managed to figure out what the wave function meant mathematically, but was still pretty confused about the implications for physical reality, probably in similar fashion to physicists of the 30s and 40s, pre-Bell. I got bogged down and lost before getting to Bell's, which I'd heard of, but had trouble believing. Your emphasis on configurations and the squared-modulus business, and especially on treating the mathematical objects as "reality" while our physical intuitions are "illusions", was important in getting me to see what's going on.
Of course the mathematical objects aren't reality any more than the mathematical objects representing billiard balls and water waves are. But the key is that even the mathematical abstractions of QM are closer to the underlying reality than what we normally think of as "physical reality", i.e. our brain's representation thereof.
The GHZ state might be a better illustration, since it doesn't have the inherent probabilistic elements of the EPR/Bell state.
I have to say that the sequence on Quantum Mechanics has been awfully helpful so far, especially the stuff on entanglement and decoherence. Bell's Theorem makes a lot more sense now.
Perhaps one helpful way to get around the counterintuitive implications of entanglement would be to say that when one of the experimenters "measures the polarisation of photon A", they're really measuring the polarisation of both A and B? Because A and B are completely entangled, with polarisations that must be opposite no matter what, there's no such thing as "measuring A but not measuring B". A and B may be "distinct particles" (if distinct particles actually existed), but for quantum mechanical purposes, they're one thing. Using a horizontal-vertical basis, the system exists in a combination of four states: "A horizontal, B horizontal", "A horizontal, B vertical", "A vertical, B horizontal", "A vertical, B vertical". But because of the physical process that created the photons, the first and fourth components of the state have amplitude zero. On a quantum level, "measuring the polarisation of A" and "measuring the polarisation of B" mean exactly the same thing - you're measuring the state of the entangled system. The two experimenters always get the same result because they're doing the same experiment twice.
(Of course, when I say "measure the thing", I mean "entangle your own state with the state of the thing".)
After all, most practical experiments involve measuring something other than the actual thing you want to measure. A police radar gun doesn't actually measure the speed of the target car, it measures the frequency of a bunch of microwave photons that come back from the target. Nobody (especially not a policeman) would argue that you aren't "really" measuring the car's speed. Imagining for a moment that the car had any kind of macroscopic spread in its velocity amplitude distribution, the photons' frequency would then be entangled with the car's velocity, in such a way that only certain states, the ones where the car's velocity and the photons' frequency are correlated according to the Doppler effect, have any appreciable amplitude. Thus, measuring the photons' frequency is exactly the same thing as measuring the car's velocity, because you're working with entangled states.
If, on the other hand, the pair of photons were produced by a process that doesn't compel opposite polarisations (maybe the process also produces a pair of neutrinos, or imparts some spin to a nearby nucleus), then the four states mentioned above (A-hor B-hor, A-hor B-vert, A-vert B-hor, A-vert B-vert) all have nonzero amplitude. In this situation, measuring the polarisation of A is not an experiment that tells you the state of the system - only measuring both photons will do that.
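To make that concrete, here is a minimal Python sketch of the four-component joint state described above. The basis labels, amplitude values, and helper name are illustrative assumptions, not anything from the comment itself; it just compares the fully entangled case (two amplitudes zero) with a case where all four components have nonzero amplitude:

```python
import math

# Basis order: |A_h B_h>, |A_h B_v>, |A_v B_h>, |A_v B_v>
# Fully entangled, opposite-polarization pair (the sign/phase convention here
# is an illustrative assumption):
entangled = [0.0, 1.0 / math.sqrt(2), 1.0 / math.sqrt(2), 0.0]

# A pair from a process that does NOT force opposite polarizations could put
# amplitude in all four components, e.g.:
not_forced = [0.5, 0.5, 0.5, 0.5]

def joint_probs(state):
    """Born rule: probability of each joint (A, B) outcome in the h/v basis."""
    labels = ["A_h B_h", "A_h B_v", "A_v B_h", "A_v B_v"]
    return {lab: round(abs(amp) ** 2, 3) for lab, amp in zip(labels, state)}

print(joint_probs(entangled))
# {'A_h B_h': 0.0, 'A_h B_v': 0.5, 'A_v B_h': 0.5, 'A_v B_v': 0.0}
# Learning A's result pins down B's, so "measuring A" and "measuring B"
# sample the same joint distribution.

print(joint_probs(not_forced))
# Every joint outcome gets probability 0.25; measuring A alone no longer
# tells you the state of the whole system.
```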
You actually get neater numbers if you take 0°, 30° and 60°. Then the probabilities are 1/8, 1/8, and 3/8. :)
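For anyone who wants to check that arithmetic against the post's 1/2 sin²θ joint-transmission formula, a quick sketch (plain Python; the helper name is purely illustrative):

```python
import math

# Joint "both transmitted" probability from the post: 1/2 * sin^2(angle between filters)
half_sin_sq = lambda deg: 0.5 * math.sin(math.radians(deg)) ** 2

print(round(half_sin_sq(30), 3))  # 0.125 = 1/8  (the 0°/30° and 30°/60° experiments)
print(round(half_sin_sq(60), 3))  # 0.375 = 3/8  (the 0°/60° experiment)
```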
I'm always disappointed by discussions of EPR and Bell's Theorem.
Basically, everyone just starts from the assumption that Bell's probabilistic analysis was correct, ignores the detector efficiency and fair sampling loopholes, fails to provide real experimental data, and then concludes as EY does:
"In conclusion, although Einstein, Podolsky, and Rosen presented a picture of the world that was disproven experimentally..."
My understanding is that due to the detector efficiency and fair sampling loopholes, what EY says here is flat out false. EPR has yet to be disproven experimentally.
As usual, I'm with Jaynes:
"But how do you know, asks Jaynes, that you won't find a way to predict the process tomorrow?"
Indeed. The data doesn't even rule it out yet.
Further, how do you know that there isn't an erroneous assumption in Bell's work? Science has always been full of those, and progress is made when erroneous assumptions are identified and abandoned. Should we really be 100% certain that there were no mistaken probabilistic assumptions in his work, here at the Bayesian Conspiracy, where we pooh-pooh the frequentist probabilistic assumptions that were standard at the time Bell did his work?
Jaynes has reservations about the probabilistic analysis behind Bell's Theorem, and they made a fair amount of sense to me.
Jaynes paper on EPR and Bell's Theorem: http://bayes.wustl.edu/etj/articles/cmystery.pdf
Jaynes speculations on quantum theory: http://bayes.wustl.edu/etj/articles/scattering.by.free.pdf
Jaynes actually also had correspondence with Everett, in 1957, and supposedly even sent him a letter reviewing a short version of Everett's thesis.
I don't have a copy of that, but in their other correspondence they seem to be talking more about Jaynes's work in probability theory and statistical mechanics. I didn't see relevant comments on quantum theory, though I mainly just scanned the docs.
The Collected Works of Everett has some narrative about their interaction: http://books.google.com/books?id=dowpli7i6TgC&pg=PA261&dq=jaynes+everett&hl=en&sa=X&ei=N9CdT9PSIcLOgAf-3vTxDg&ved=0CDYQ6AEwAQ#v=onepage&q&f=false
Hugh Everett marginal notes on page from E. T. Jaynes' "Information Theory and Statistical Mechanics" http://ucispace.lib.uci.edu/handle/10575/1140
Hugh Everett handwritten draft letter to E.T. Jaynes, 15-May-1957 http://ucispace.lib.uci.edu/handle/10575/1186
Hugh Everett letter to E. T. Jaynes, 11-June-1957 http://ucispace.lib.uci.edu/handle/10575/1124
E.T. Jaynes letter to Hugh Everett, 17-June-1957 http://ucispace.lib.uci.edu/handle/10575/1158
Given Jaynes's interest in the foundations of quantum theory, it seems extremely unlikely to me that he was unaware of MWI. I've read most of his papers since around 1980, and can't recall a mention anywhere. Surely he was aware, and surely he had an opinion. I wish I knew what it was.
It is a personal peeve when any explanation of the Bell Inequality fails to mention the determinist Big Loophole: It rules out nearly all local hidden-variable theories, except those for which the entire universe is ruled by hidden variables. If you reject the assumption of counterfactual definiteness (the idea that there is a meaningful answer to the question "what answer would I have gotten, had I conducted a different experiment?"), local hidden variable theories are not ruled out. This leads to superdeterminism and theories which assume that, through either hidden variables stretching back to t=0 or backwards-in-time signals, the universe accounted for the measurement and the result was determined to match.
This is, in fact, what I held to be the most likely total explanation for years, until I better understood both its implications and MWI. Which, in fact, also rejects counterfactual definiteness. MWI does it one better; it rejects factual definiteness, the idea that there is a well-defined answer to the question "What answer did I get?", since in alternate worlds you got different answers.
That helps me. In his book Quantum Reality, Nick Herbert phrases it this way:
"The Everett multiverse violates the CFD assumption because although such a world has plenty of contrafactuality, it is short on definiteness."
which is cutely aphoristic, but confused me. What does contrafactuality even mean in MWI?
Pointing out that MWI rejects factual definiteness clears things up nicely.
The problem with superdeterminism is that it cannot be Turing-computable (in a practical sense, meaning that we would be able to build a machine that would tell us what would happen in any simple quantum experiment). To see this, imagine you have a machine which tells you before any experiment whether a photon will go through the filter or not. Run this computer, say, a thousand times, then decide which filter to use depending on the result. (If it predicts ~20%, then do the one that should give you ~5.8%, and vice versa.) Unless the machine affects the results, you will find that it is wrong.
I fail to see how that has any relevance whatsoever. I think you are very confused about something, though I'm not sure what.
Talking about "Turing computability in a practical sense" is nonsensical; computability is defined by an infinite-tape machine with arbitrarily large finite time to compute, neither of which we have in a practical sense, and most cases where computability is in doubt make use of both properties.
Superdeterminism also doesn't need to care at all about the computer you've made to predict in advance what will happen. Unless you've found a way to "entangle" your computer with the hidden variables which determine the outcome of the result, the results it gives will know nothing about what the actual outcome will be, and just give you the Born probabilities instead.
My point was that if superdeterminism is true, it is not testable, because we can never get a full description of the rules within our universe.
And which other interpretation of quantum mechanics is Turing-computable, exactly?
In principle, you could (as mentioned) get some other process connected to the same hidden variables, in which case you could predict some events with perfect accuracy, which would be pretty definitive confirmation of the hidden variable theory.
Sorry for being a pain, but I didn't understand exactly what you said. If you're still an active user, could you clear up a few things for me? Firstly, could you elaborate on counterfactual definiteness? Another user said "contrafactual"; is this the same thing, and what do other interpretations say on this issue?
Secondly, I'm not sure what you meant by the whole universe being ruled by hidden variables. I'm currently interpreting that as the universe coming pre-loaded with random numbers to use, and therefore being fully determined by that list along with the current probabilistic laws. Is that what you meant? If not, could you expand a little on that for me? It would help my understanding. Again, this is quite a long time post-event, so if anyone reading this could respond, that would be helpful.
Firstly, I am not an expert in QM, so you should take everything I say with a whole serving of salt.
1) Yes, counterfactual = contrafactual. What other interpretations of QM say about counterfactual definiteness I don't know, but Wikipedia seems to give at least a cursory understanding of what it means for an interpretation of QM.
2) You could understand it that way, yes. Basically, the existence of hidden variables means 'just' that our current theory of QM is incomplete: there is no collapsing wave function or decoherence or anything, and all the seeming randomness we observe just comes from our not knowing which values those hidden variables take.
Again, if all I have said is complete and utter nonsense, please correct me!
Why can't this be used for FTL communication? If nothing is done at A, then the probability of seeing B go through the 40° filter but not the 0° one is 20.8%. If A is measured at 20°, then regardless of the result at A, the probability of seeing B go through the 40° filter but not the 0° one should be less than 11%, even without knowing A's result. If I'm making an incorrect assumption, someone please point it out here.
Previously in series: Entangled Photons
(Note: So that this post can be read by people who haven't followed the whole series, I shall temporarily adopt some more standard and less accurate terms; for example, talking about "many worlds" instead of "decoherent blobs of amplitude".)
The legendary Bayesian, E. T. Jaynes, began his life as a physicist. In some of his writings, you can find Jaynes railing against the idea that, because we have not yet found any way to predict quantum outcomes, they must be "truly random" or "inherently random".
Sure, today you don't know how to predict quantum measurements. But how do you know, asks Jaynes, that you won't find a way to predict the process tomorrow? How can any mere experiments tell us that we'll never be able to predict something—that it is "inherently unknowable" or "truly random"?
As far as I can tell, Jaynes never heard about decoherence aka Many-Worlds, which is a great pity. If you belonged to a species with a brain like a flat sheet of paper that sometimes split down its thickness, you could reasonably conclude that you'd never be able to "predict" whether you'd "end up" in the left half or the right half. Yet is this really ignorance? It is a deterministic fact that different versions of you will experience different outcomes.
But even if you don't know about Many-Worlds, there's still an excellent reply to "Why do you think you'll never be able to predict what you'll see when you measure a quantum event?" This reply is known as Bell's Theorem.
In 1935, Einstein, Podolsky, and Rosen once argued roughly as follows:
Suppose we have a pair of entangled particles, light-years or at least light-minutes apart, so that no signal can possibly travel between them over the timespan of the experiment. We can suppose these are polarized photons with opposite polarizations.
Polarized filters transmit some photons and absorb others; this lets us measure a photon's polarization in a given orientation. Entangled photons (with the right kind of entanglement) are always found to be polarized in opposite directions when you measure them in the same orientation: if a filter at a certain angle passes photon A (transmits it), then we know that a filter at the same angle will block photon B (absorb it).
Now we measure one of the photons, labeled A, and find that it is transmitted by a 0° polarized filter. Without measuring B, we can now predict with certainty that B will be absorbed by a 0° polarized filter, because A and B always have opposite polarizations when measured in the same basis.
Said EPR:

"If, without in any way disturbing a system, we can predict with certainty (i.e., with probability equal to unity) the value of a physical quantity, then there exists an element of physical reality corresponding to this physical quantity."
EPR then assumed (correctly!) that nothing which happened at A could disturb B or exert any influence on B, due to the spacelike separation of A and B. We'll take up the relativistic viewpoint again tomorrow; for now, let's just note that this assumption is correct.
If by measuring A at 0°, we can predict with certainty whether B will be absorbed or transmitted at 0°, then according to EPR this fact must be an "element of physical reality" about B. Since measuring A cannot influence B in any way, this element of reality must always have been true of B. Likewise with every other possible polarization we could measure—10°, 20°, 50°, anything. If we measured A first in the same basis, even light-years away, we could perfectly predict the result for B. So on the EPR assumptions, there must exist some "element of reality" corresponding to whether B will be transmitted or absorbed, in any orientation.
But if no one has measured A, quantum theory does not predict with certainty whether B will be transmitted or absorbed. (At least that was how it seemed in 1935.) Therefore, EPR said, there are elements of reality that exist but are not mentioned in quantum theory.
This is another excellent example of how seemingly impeccable philosophy can fail in the face of experimental evidence, thanks to a wrong assumption so deep you didn't even realize it was an assumption.
EPR correctly assumed Special Relativity, and then incorrectly assumed that there was only one version of you who saw A do only one thing. They assumed that the certain prediction about what you would hear from B, described the only outcome that happened at B.
In real life, if you measure A and your friend measures B, different versions of you and your friend obtain both possible outcomes. When you compare notes, the two of you always find the polarizations are opposite. This does not violate Special Relativity even in spirit, but the reason why not is the topic of tomorrow's post, not today's.
Today's post is about how, in 1964, John S. Bell irrevocably shot down EPR's original argument. Not by pointing out the flaw in the EPR assumptions (Many-Worlds was not then widely known), but by describing an experiment that disproved them!

It is experimentally impossible for there to be a physical description of the entangled photons which specifies a single fixed outcome of any polarization measurement individually performed on A or B.
This is Bell's Theorem, which rules out all "local hidden variable" interpretations of quantum mechanics. It's actually not all that complicated, as quantum physics goes!
We begin with a pair of entangled photons, which we'll name A and B. When measured in the same basis, you find that the photons always have opposite polarization—one is transmitted, one is absorbed. As for the first photon you measure, the probability of transmission or absorption seems to be 50-50.
What if you measure with polarized filters set at different angles?
Suppose that I measure A with a filter set at 0°, and find that A was transmitted. In general, if you then measure B at an angle θ to my basis, quantum theory says the probability (of my hearing that) you also saw B transmitted equals sin² θ. E.g. if your filter was at an angle of 30° to my filter, and I saw my photon transmitted, then there's a 25% probability that you see your photon transmitted.
(Why? See "Decoherence as Projection". Some quick sanity checks: sin(0°) = 0, so if we measure at the same angles, the calculated probability is 0—we never measure at the same angle and see both photons transmitted. Similarly, sin(90°) = 1; if I see A transmitted, and you measure at an orthogonal angle, I will always hear that you saw B transmitted. sin(45°) = √(1/2), so if you measure in a diagonal basis, the probability is 50/50 for the photon to be transmitted or absorbed.)
Oh, and the initial probability of my seeing A transmitted is always 1/2. So the joint probability of seeing both photons transmitted is 1/2 sin² θ: a 1/2 probability of my seeing A transmitted, times a sin² θ probability that you then see B transmitted.
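For readers who like to check such formulas numerically, a minimal sketch (plain Python; the function name is just an illustrative choice, not anything from the post):

```python
import math

def p_both_transmitted(angle_a_deg, angle_b_deg):
    """Joint probability that photon A passes my filter AND photon B passes
    yours: 1/2 * sin^2(theta), with theta the angle between the two filters."""
    theta = math.radians(angle_b_deg - angle_a_deg)
    return 0.5 * math.sin(theta) ** 2

# Sanity checks matching the text (values rounded for readability):
print(round(p_both_transmitted(0, 0), 3))    # 0.0   same basis: never both transmitted
print(round(p_both_transmitted(0, 90), 3))   # 0.5   orthogonal: B always transmitted, given A was
print(round(p_both_transmitted(0, 45), 3))   # 0.25  diagonal: 50/50 given A was transmitted
print(round(p_both_transmitted(0, 30), 3))   # 0.125 i.e. the conditional 25% in the example above
```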
And now you and I perform three statistical experiments, with large sample sizes:
(1) First, I measure A at 0° and you measure B at 20°. Both photons are transmitted through their filters on 1/2 sin²(20°) = 5.8% of the occasions.
(2) Next, I measure A at 20° and you measure B at 40°. When we compare notes, we again discover that we both saw our photons pass through our filters on 1/2 sin²(40° - 20°) = 5.8% of the occasions.
(3) Finally, I measure A at 0° and you measure B at 40°. Now both photons pass their filters on 1/2 sin²(40°) = 20.7% of the occasions.
Or to say it a bit more compactly:

(1) A at 0°, B at 20°: both transmitted 5.8% of the time.
(2) A at 20°, B at 40°: both transmitted 5.8% of the time.
(3) A at 0°, B at 40°: both transmitted 20.7% of the time.
What's wrong with this picture?
Nothing, in real life. But on EPR assumptions, it's impossible.
On EPR assumptions, there's a fixed local tendency for any individual photon to be transmitted or absorbed by a polarizer of any given orientation, independent of any measurements performed light-years away, as the single unique outcome.
Consider experiment (2). We measure A at 20° and B at 40°, compare notes, and find we both saw our photons transmitted. Now, A was transmitted at 20°, so if you had measured B at 20°, B would certainly have been absorbed—if you measure in the same basis you must find opposite polarizations.
That is: If A had the fixed tendency to be transmitted at 20°, then B must have had a fixed tendency to be absorbed at 20°. If this rule were violated, you could have measured both photons in the 20° basis, and found that both photons had the same polarization. Given the way that entangled photons are actually produced, this would violate conservation of angular momentum.
So (under EPR assumptions) what we learn from experiment (2) can be equivalently phrased as: "B was a kind of photon that was transmitted by a 40° filter and would have been absorbed by the 20° filter." Under EPR assumptions this is logically equivalent to the actual result of experiment (2).
Now let's look again at those percentages: 5.8%, 5.8%, and 20.7%.
If you want to try and see the problem on your own, you can stare at the three experimental results for a while...
(Spoilers ahead.)
Consider a photon pair that gives us a positive result in experiment (3). On EPR assumptions, we now know that the B photon was inherently a type that would have been absorbed at 0°, and was in fact transmitted at 40°. (And conversely, if the B photon is of this type, experiment (3) will always give us a positive result.)
Now take a B photon from a positive experiment (3), and ask: "If instead we had measured B at 20°, would it have been transmitted, or absorbed?" Again by EPR's assumptions, there must be a definite answer to this question. We could have measured A in the 20° basis, and then had certainty of what would happen at B, without disturbing B. So there must be an "element of reality" for B's polarization at 20°.
But if B is a kind of photon that would be transmitted at 20°, then it is a kind of photon that implies a positive result in experiment (1). And if B is a kind of photon that would be absorbed at 20°, it is a kind of photon that would imply a positive result in experiment (2).
If B is a kind of photon that is transmitted at 40° and absorbed at 0°, and it is either a kind that is absorbed at 20° or a kind that is transmitted at 20°, then B must be either a kind that is absorbed at 20° and transmitted at 40°, or a kind that is transmitted at 20° and absorbed at 0°.
So, on EPR's assumptions, it's really hard to see how the same source can manufacture photon pairs that produce 5.8% positive results in experiment (1), 5.8% positive results in experiment (2), and 20.7% positive results in experiment (3). Every photon pair that produces a positive result in experiment (3) should also produce a positive result in either (1) or (2).
"Bell's inequality" is that any theory of hidden local variables implies (1) + (2) >= (3). The experimentally verified fact that (1) + (2) < (3) is a "violation of Bell's inequality". So there are no hidden local variables. QED.
And that's Bell's Theorem. See, that wasn't so horrible, was it?
But what's actually going on here?
When you measure at A, and your friend measures at B a few light-years away, different versions of you observe both possible outcomes: both possible polarizations for your photon. But the amplitude of the joint world where you both see your photons transmitted goes as √(1/2) * sin θ, where θ is the angle between your polarizers. So the squared modulus of the amplitude (which is how we get probabilities in quantum theory) goes as 1/2 sin² θ, and that's the probability of finding mutual transmission when you meet a few years later and compare notes. We'll talk tomorrow about why this doesn't violate Special Relativity.
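As a tiny illustration of that amplitude-to-probability step (only the relevant factor of the amplitude is shown; overall phases and normalization conventions are glossed over):

```python
import cmath, math

theta = math.radians(20)  # angle between the two polarizers

# Amplitude of the joint "both transmitted" configuration (relevant factor only):
amplitude = cmath.sqrt(0.5) * math.sin(theta)

# Born rule: probability = squared modulus of the amplitude.
print(round(abs(amplitude) ** 2, 4))  # 0.0585, i.e. 1/2 * sin^2(20°)
```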
Strengthenings of Bell's Theorem eliminate the need for statistical reasoning: You can show that local hidden variables are impossible, using only properties of individual experiments which are always true given various measurements. (Google "GHZ state" or "GHZM state".) Occasionally you also hear that someone has published a strengthened Bell's experiment in which the two particles were more distantly separated, or the particles were measured more reliably, but you get the core idea. Bell's Theorem is proven beyond a reasonable doubt. Now the physicists are tracking down unreasonable doubts, and Bell always wins.
I know I sometimes speak as if Many-Worlds is a settled issue, which it isn't academically. (If people are still arguing about it, it must not be "settled", right?) But Bell's Theorem itself is agreed-upon academically as an experimental truth. Yes, there are people discussing theoretically conceivable loopholes in the experiments done so far. But I don't think anyone out there really thinks they're going to find an experimental violation of Bell's Theorem as soon as they use a more sensitive photon detector.
What does Bell's Theorem plus its experimental verification tell us, exactly?
My favorite phrasing is one I encountered in D. M. Appleby: "Quantum mechanics is inconsistent with the classical assumption that a measurement tells us about a property previously possessed by the system."
Which is exactly right: Measurement decoheres your blob of amplitude (world), splitting it into several noninteracting blobs (worlds). This creates new indexical uncertainty—uncertainty about which of several versions of yourself you are. Learning which version you are, does not tell you a previously unknown property that was always possessed by the system. And which specific blobs (worlds) are created, depends on the physical measuring process.
It's sometimes said that Bell's Theorem rules out "local realism". Tread cautiously when you hear someone arguing against "realism". As for locality, it is, if anything, far better understood than this whole "reality" business: If life is but a dream, it is a dream that obeys Special Relativity.
It is just one particular sort of locality, and just one particular notion of which things are "real" in the sense of previously uniquely determined, which Bell's Theorem says cannot simultaneously be true.
In particular, decoherent quantum mechanics is local, and Bell's Theorem gives us no reason to believe it is not real. (It may or may not be the ultimate truth, but quantum mechanics is certainly more real than the classical hallucination of little billiard balls bopping around.)
Does Bell's Theorem prevent us from regarding the quantum description as a state of partial knowledge about something more deeply real?
At the very least, Bell's Theorem prevents us from interpreting quantum amplitudes as probability in the obvious way. You cannot point at a single configuration, with probability proportional to the squared modulus, and say, "This is what the universe looked like all along."
In fact, you cannot pick any locally specified description whatsoever of unique outcomes for quantum experiments, and say, "This is what we have partial information about."
So it certainly isn't easy to reinterpret the quantum wavefunction as an uncertain belief. You can't do it the obvious way. And I haven't heard of any non-obvious interpretation of the quantum description as partial information.
Furthermore, as I mentioned previously, it is really odd to find yourself differentiating a degree of uncertain anticipation to get physical results—the way we have to differentiate the quantum wavefunction to find out how it evolves. That's not what probabilities are for.
Thus I try to emphasize that quantum amplitudes are not possibilities, or probabilities, or degrees of uncertain belief, or expressions of ignorance, or any other species of epistemic creatures. Wavefunctions are not states of mind. It would be a very bad sign to have a fundamental physics that operated over states of mind; we know from looking at brains that minds are made of parts.
In conclusion, although Einstein, Podolsky, and Rosen presented a picture of the world that was disproven experimentally, I would still regard them as having won a moral victory: The then-common interpretation of quantum mechanics did indeed have one person measuring at A, seeing a single outcome, and then making a certain prediction about a unique outcome at B; and this is indeed incompatible with relativity, and wrong. Though people are still arguing about that.
Part of The Quantum Physics Sequence
Next post: "Spooky Action at a Distance: The No-Communication Theorem"
Previous post: "Entangled Photons"