Today's post, On Being Decoherent, was originally published on 27 April 2008. A summary (taken from the LW wiki):

 

When a sensor measures a particle whose amplitude distribution stretches over space - perhaps seeing if the particle is to the left or right of some dividing line - then the standard laws of quantum mechanics call for the sensor+particle system to evolve into a state of (particle left, sensor measures LEFT) + (particle right, sensor measures RIGHT). But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Where Experience Confuses Physicists, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.


Does anybody else not like the general phrasing "The system is in the superposition STATE1 + STATE2" ?

The way I'm thinking of it there is no such thing as a superposition. There is simply more than one configuration in the (very recent) past that contributes a significant amount of amplitude to the "current" configuration.

Have I got this wrong?

I think it is a good way to map what people have commonly called "superposition," but the sentence should probably be "The system is in the superposition STATE1 + STATE2, relative to STATE3, where STATE3 roughly factors out". STATE3 in this case is usually an observer. I mean, if I flip a "quantum coin" and have not told you whether it came up heads or tails, then the coin (and I) are in a superposition of "HEADS + TAILS" relative to you, but due to decoherence on my end, it is not in a superposition relative to me. For me this was an important concept to learn, as it helped me see that "many worlds" is a local and non-discrete phenomenon.

And another quantum-related question. In The Fabric of the Cosmos by Brian Greene (p. 196), he describes a setup of the two-slit experiment where half of the particles have their "which-way" information recorded, thus causing decoherence and not showing an interference pattern, while the other half of the particles are not measured, and thus do show an interference pattern. After the fact, one can look at which photons were not measured, and these do indeed form the interference pattern.

However, he then goes on to describe an identical setup, with the difference that the decision as to whether to measure half of the particles can be made many light-years away, long after the photons register on the screen. Only later, when the person making this decision comes and tells you whether they measured or not, do you see whether the unmeasured photons make an interference pattern.

This would all make sense to me IF there were no way to distinguish a totally non-interfering pattern from a non-interfering pattern overlaid with an interfering one. Intuitively it seems like one WOULD be able to distinguish these, with a pretty high degree of confidence, by subtracting an "average" non-interfering pattern from the total pattern. Is this not the case?
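A minimal numerical sketch of that subtraction idea (the Gaussian envelope, fringe spacing, and 50/50 split are all invented for illustration, not taken from Greene's setup):

```python
import numpy as np

x = np.linspace(-5, 5, 2000)
envelope = np.exp(-x**2 / 4)             # assumed smooth single-slit envelope
smooth = envelope                        # "which-way measured": no fringes
fringed = envelope * np.cos(3 * x)**2    # unmeasured: fringes under envelope

mixture = 0.5 * smooth + 0.5 * fringed   # half measured, half unmeasured

# Subtract the appropriately scaled fringeless pattern; what remains is
# the fringe pattern of the unmeasured half, which visibly oscillates.
residual = mixture - 0.5 * smooth
print(residual.max(), residual.min())    # ~0.5 vs ~0.0: oscillation detected
```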

BTW, I have been re-reading the QM sequence every 6 months or so since it was first posted, and get a bit more out of it each time. I am AMAZED at how it has explained things that, before reading it, seemed so freaky and inexplicable to me that it bordered on the supernatural.

So this is sorta off-topic for this thread, but I cannot see where one can start a new one. I posted the following questions at http://lesswrong.com/lw/q2/spooky_action_at_a_distance_the_nocommunication/, as I cannot find the "rerun" version of it. Anyway, here goes. FWIW, the topic was about EPR experiments.

For all these types of experiments, how do they "aim" the particle so it hits its target from far away? It would seem that the experimenters would know pretty much where the particle is when it shoots out of the gun (or whatever), so would not the velocity be all over the place? In the post on the Heisenberg principle, there was an example of letting the sun shine through a hole in a piece of paper, which caused the photons to spread pretty widely, pretty quickly.

Does the polarization vector change as the photon moves along? It seems to be very similar to a photon's "main" wave function, as it can be represented as a complex number (and is even displayed as an arrow, like Feynman uses). But I know those Feynman arrows spin according to the photon's wavelength.

Finally - and this is really tripping me up - why can we put in the minus sign in the equation that you say "we will need" later, instead of a + sign? If you have two blobs of amplitude, you need to add them to get the wave function, yes? If that is not the case, I have SEVERELY misunderstood the most basic posts of this sequence.

For all these types of experiments, how do they "aim" the particle so it hits its target from far away? It would seem that the experimenters would know pretty much where the particle is when it shoots out of the gun (or whatever), so would not the velocity be all over the place?

Only if they make the departing aperture small. A wider aperture allows the departing wave to be tight.
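A back-of-the-envelope check of this, using the standard single-slit diffraction estimate (the wavelength and distance are invented for illustration): the spot size at distance L is roughly d + L*wavelength/d for aperture width d, so widening the aperture tightens the far-field spot.

```python
wavelength = 500e-9                      # assumed green light, meters
L = 10.0                                 # assumed distance to target, meters

for d in (1e-6, 1e-4, 1e-2):             # aperture widths, meters
    spot = d + L * wavelength / d        # geometric size + diffraction spread
    print(f"d = {d:.0e} m -> spot ~ {spot:.2e} m")
```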

Does the polarization vector change as the photon moves along?

It depends which basis you look at it in. It is conventional to consider a photon's 'polarization' to be the polarization subspace that contains all of its time dependence. The phase then indicates the rest of its state. However, you can look at it other ways. A circularly polarized photon moving in +z can be considered as a rapid shift between various orientations of +x and +y polarization... but it's simpler to just let it be in a circular polarization state and let the phase vary. A photon's state in this sense IS its 'main' wavefunction, as you call it. There is no distinction. People usually shorthand-think of a photon as having perfectly-defined momentum, but of course that would mean the photon extends through all of space. Real photons have multiple momentum components, and form a wavepacket or a static state. In particular, and very relevantly, you can construct electromagnetic field states (photons) that are inverse square laws - the static electrical field from a charge - and these have a very broad momentum distribution.
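A toy Jones-vector sketch of the basis point (the two components are the x and y linear-polarization amplitudes; an illustrative model, not from the original comment): in the circular basis the state is fixed and only its overall phase advances, but projected onto the linear basis the field direction rotates.

```python
import numpy as np

right_circ = np.array([1.0, 1j]) / np.sqrt(2)   # (|x> + i|y>)/sqrt(2)

for phase in (0.0, np.pi / 2, np.pi):
    state = np.exp(-1j * phase) * right_circ    # advance the overall phase
    print(round(phase, 2), np.real(state).round(3))  # instantaneous x, y field
```

The printed field direction steps from +x to +y to -x as the phase advances, which is exactly the "rapid shift between orientations" described above.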

why can we put in the minus sign in the equation that you say "we will need" later, instead of a + sign?

I can't find any minus signs in this post, but to take a stab in the dark at whatever it is you're referring to, subtraction is the special case of addition after one of a particular set of phase shifts.
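Concretely (a two-line illustration with arbitrary amplitudes): subtracting an amplitude is the same as adding it after a phase shift of pi, since e^(i*pi) = -1.

```python
import numpy as np

a, b = 0.6 + 0.2j, 0.3 - 0.1j                         # arbitrary amplitudes
print(np.isclose(a - b, a + np.exp(1j * np.pi) * b))  # True: minus = phase pi
```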

Shmi:

But when we humans look at the sensor, it only seems to say "LEFT" or "RIGHT", never a mixture like "LIGFT". This, of course, is because we ourselves are made of particles, and subject to the standard quantum laws that imply decoherence. Under standard quantum laws, the final state is (particle left, sensor measures LEFT, human sees "LEFT") + (particle right, sensor measures RIGHT, human sees "RIGHT").

If there are two nearly identical copies of me in the same place, why is there no further interaction between them, resulting in my seeing "LIGFT"? (Well, now that I think of it, I do see "LIGFT", if only because EY wrote it.) Yes, I know, the magical password is "decoherence". How helpful.

Shminux, I trust you do know the actual answer to this, based on your demonstrated knowledge of QM. The essay does a qualitative job of answering it, here:

There are no plausible Feynman paths that end up with both LEFT and RIGHT sending amplitude to the same joint configuration. There would have to be a Feynman path from LEFT, and a Feynman path from RIGHT, in which all the quadrillions of differentiated particles ended up in the same places. So the amplitude flows from LEFT and RIGHT don't intersect, and don't interfere.

In order for the joint observer-observed system to be coherent, the two cases need to be reconcilable.

How is this a magical password? He pulls out the guts of decoherence and shows them to the reader!

Well, part of the guts. He's given a sufficient but not necessary criterion for decoherence.

If there are two nearly identical copies of me in the same place, why is there no further interaction between them

Your two copies differ by the states of many neurons; that's billions of particles. They are not "nearly identical".

It is tempting to think of "one different thought" or "one different perception" as a very small change. But on the particle level those are huge changes. A small change on the particle level is something you can't notice, and therefore you can't notice when those copies of you interact... and when the small change becomes big enough, your copies are already decoherent.
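A toy version of the scaling involved (the per-particle overlap figure is invented): even if each of N differing particles still overlaps its counterpart almost perfectly, the joint overlap is that factor raised to the Nth power, which collapses to zero long before N reaches neuron-scale particle counts.

```python
eps = 0.999                      # assumed generous per-particle overlap
for n in (10, 1_000, 100_000):   # number of differing particles
    print(n, eps**n)             # ~0.99, ~0.37, ~3e-44: interference dies
```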

[anonymous]:

It seems like the relevant section is

By hypothesis, Sensor-LEFT is a different state from Sensor-RIGHT—otherwise it wouldn't be a very sensitive Sensor. So the final state doesn't factorize any further; it's entangled.

But this entanglement is not likely to manifest in difficulties of calculation. Suppose the Sensor has a little LCD screen that's flashing "LEFT" or "RIGHT". This may seem like a relatively small difference to a human, but it involves avogadros of particles—photons, electrons, entire molecules—occupying different positions.

So, since the states Sensor-LEFT and Sensor-RIGHT are widely separated in the configuration space, the volumes (Sensor-LEFT Atom-LEFT) and (Sensor-RIGHT Atom-RIGHT) are even more widely separated.

The question still left in my mind is what is meant by "widely separated", and why states that are widely separated have volumes that are widely separated.

For example, take a chaotic system evolving from an initial state. (Perhaps an energetic particle in a varying potential field.) After evolving, the probability that was concentrated at that initial state flows out to encompass a rather large region of configuration space. Presumably in this case the end states can be widely separated, but the probability volumes are not.

What does 'widely separated' mean? I suspect that this can be defined without recourse to a full detailed treatment of decoherence. Let's give that a try. (I'm going to feel free to edit this until someone responds, since I'm kind of thinking out loud).

The obvious but wrong answer is that given two initial components |a> and |b>, the measurement process produces consequences such that U|a> is orthogonal to U|b>... of course, that's trivially true, since <Ua|Ub> = <a|b> = 0. Even if they were overlapping everywhere, the unitary process of time evolution would make their overlap integral keep canceling out. And meanwhile they would be interfering with each other - not independent at all.
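A quick numerical check of that triviality (a toy 8-dimensional state space, with a random unitary standing in for time evolution):

```python
import numpy as np

rng = np.random.default_rng(0)
m = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
u, _ = np.linalg.qr(m)                 # QR factorization yields a unitary U

a = np.zeros(8, complex); a[0] = 1.0   # |a> and |b>, orthogonal by construction
b = np.zeros(8, complex); b[1] = 1.0

print(abs(np.vdot(u @ a, u @ b)))      # ~1e-16: <Ua|Ub> = <a|b> = 0 survives
```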

What we need is for the random phase approximation to become applicable. If we are able to consider a local system, it can become applicable by exchanging a particle with the outside. The applicability of this approach to a single universal wavefunction is not clear. We will need to be able to speak of information loss and dissipation in unitary language.

I had another flawed but more promising notion, that one could get somewhere by considering a second split after a first. You have two potentially decoherent vectors |a> and |b>, with <a|b> = 0; then you split |b> = |c> + |d> such that <c|d> = 0. The idea was that |a> and |b> are 'widely separated' if any choice of |c> and |d> will have <a|c> = <a|d> = 0... except that you can always choose some crazy superposed |c> that explicitly overlaps |a> and |b>.

Based on this, I thought instead about an operator that takes macro-scale measurements, like 'is that screen reading X'. Then you can require that |c> and |d> each be in the same kernel of each of these operators as |b> is. That might be sufficient even without splitting |b> - as long as you can construct a macro-scale measurement that indicates |b> instead of |a>, they're going to be distinguishable by that, so they won't interfere. But that in itself doesn't prove that you can't smash the computer and get them to interfere again.

Of course, all that puts it backwards, focusing on how you could possibly establish a perfectly ordinary decoherent state, rather than focusing on how you maintain an utterly abnormal coherent state (this is the approach the sequence suggests).

You need to be able to split off a subspace of the Hilbert space such that the 'outside' is completely independent of the 'inside' - nearly completely causally independent, at least on some time domain. For example, in an interferometer, all the rest of the universe depends on is that the inside of the interferometer is only doing interferometry, not, say, exploding. If there were such a dependence (and it was a true dependence, such that the various outcomes actually produced different effects), then the joint configurations rule would kick in, and the subspace could not interfere, because of the different effects on the outside.

The problem here is, I do not know of any mathematical language for expressing causal dependence in quantum mechanics. If there is one, this is a very brief statement in it.

gjm:

There is a (very brief) account in that post of what decoherence is and why it leads to non-interaction. There is a much more extensive discussion of the point in the previous (linked) post on decoherence.

You do realise that the one-paragraph summary here is only a one-paragraph summary, and that there's a lot more in the original post, yes?

[anonymous]:

There's not that much more.

There are some claims about how the system evolves, and some more handwaving with Feynman path integrals.

Just because you call it hand-waving doesn't make it so. There really is no plausible method by which a measurement process resulting in that LCD screen being read is undone, resulting in a state universally identical to one in which the opposite measurement had been made.

[anonymous]:

I don't disagree with your second sentence. Regarding the first, I don't think there's really any argument about whether or not it's handwaving. The question is whether or not it's justified handwaving in the pursuit of a pseudo-rigorous understanding of quantum mechanics.

I'm comfortable with him saying that time evolution is linear, because there are intuitive reasons for it to be so, and he presents those reasons elsewhere.

I'm less comfortable with the use of them in this article. Take the following quote:

There are no plausible Feynman paths that end up with both LEFT and RIGHT sending amplitude to the same joint configuration. There would have to be a Feynman path from LEFT, and a Feynman path from RIGHT, in which all the quadrillions of differentiated particles ended up in the same places. So the amplitude flows from LEFT and RIGHT don't intersect, and don't interfere.

It's really hard to make sense of this given the way Feynman paths are treated earlier. I can make sense of it if I rely on what traditional training I've had in quantum mechanics, but not everyone has that background.

'Handwaving' describes vagueness. Yet, just how much vagueness qualifies as 'handwaving' is not well-defined!

This builds on the result of 'joint configurations', which is that for interference to occur, everything needs to line up. EVERYTHING. Otherwise, it's offset in some dimension or other, and not really in the same 'place' at all. With that in place, this is a short step to take.

[anonymous]:

'Handwaving' describes vagueness. Yet, just how much vagueness qualifies as 'handwaving' is not well-defined!

I don't disagree? I'm making essentially an aesthetic point.

I thought I qualified how much vagueness was acceptable -- there is vagueness that is pedagogically useful, and there is vagueness that is not pedagogically useful, and my accusation of handwaving is isomorphic to saying that the vagueness with Feynman paths here is not pedagogically useful.

This builds on the result of 'joint configurations', which is that for interference to occur, everything needs to line up. EVERYTHING. Otherwise, it's offset in some dimension or other, and not really in the same 'place' at all. With that in place, this is a short step to take.

I can't follow this explanation at all. Too many ambiguous pronouns. But this is okay; the goal isn't to explain it to me -- I have all the training in quantum mechanics that I care to have.

"Everything needs to line up" is the key point, and it once you understand it it's really quite simple. It just means that there is more than one way to get to the same configuration state. Think about history seeming to branch out in a tree-like way, as most people tend to imagine. But if two branching paths are not far apart (e.g. differing by just a single photon) then it is easy for then to come back together. History changes from a tree to a graph. Being a graph means that some point has two history paths (actually every point has an infinite amount of ancestry but most of it cancels out). When you more than one history path both constructive and destructive interference can take place, and destructive means that the probability of some states goes down, i.e. some final states no longer happen (you no longer see a photon appearing in some places).

Is this making it clearer or have I made it worse? ;-)

[anonymous]:

See the comments on How Many Worlds? for why introducing the graph metaphor is confusing and negatively helpful to beginners.

Well, true, a graph implies a discreteness that does not correspond closely to a continuous configuration space. I actually think of it as the probability of finding yourself in that volume of configuration space being influenced by "significant" amplitudes flowing from more than one other volume of configuration space, although even that is not a great explanation, as it suggests the ticking of a discrete time parameter. A continuously propagating wavefront is probably a much better analogy. Or we can just go into calculus mode and consider boxes of configuration space which we then shrink down arbitrarily while taking a limit value. But sometimes it's just easier to think "branches" ;-)

[anonymous]:

I'm tapping out.

Nobody seems to think EY's exposition is an issue, and you're the second person who's tried -- and I can't understand the motivation for this -- to explain the underlying QM to me in vague metaphors that neither reflect the underlying theory nor present a pedagogical simplification.

But it does reflect the underlying theory (though it does take special cases and simplify), and it does present a pedagogical simplification (because it's a hell of a lot easier than solving huge quantum systems). Heck, it's not even a metaphor. A DAG is blank enough - has few enough intrinsic properties - to be an incomplete model instead of a metaphor.

Does anything other than a fully quantum description of a system using only an interacting-particle hamiltonian with no externally applied fields count as a non-vague non-metaphor?