Previously in series: On Being Decoherent

Yesterday's post argued that continuity of decoherence is no bar to accepting it as an explanation for our experienced universe, insofar as it is a physicist's responsibility to explain it.  This is a good thing, because the equations say decoherence is continuous, and the equations get the final word.

Now let us consider the continuity of decoherence in greater detail...

On Being Decoherent talked about the decoherence process:

(Human-BLANK) * (Sensor-BLANK) * (Atom-LEFT + Atom-RIGHT)
        =>
(Human-BLANK) * ((Sensor-LEFT * Atom-LEFT) + (Sensor-RIGHT * Atom-RIGHT))
        =>
(Human-LEFT * Sensor-LEFT * Atom-LEFT) + (Human-RIGHT * Sensor-RIGHT * Atom-RIGHT)
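(For readers who want the bookkeeping spelled out: below is a toy sketch in Python of those two steps, tracking the amplitude attached to each joint configuration. The state labels and the even 1/sqrt(2) split are assumptions made up for the example, not anything the physics dictates.)

import numpy as np

# Joint configurations are (human, sensor, atom) tuples mapped to amplitudes.
# Start: Human-BLANK * Sensor-BLANK * (Atom-LEFT + Atom-RIGHT), normalized.
state = {
    ("BLANK", "BLANK", "LEFT"):  1 / np.sqrt(2),
    ("BLANK", "BLANK", "RIGHT"): 1 / np.sqrt(2),
}

def sensor_interacts(state):
    # The Sensor's state becomes correlated with the Atom's position.
    return {(h, atom, atom): amp for (h, _s, atom), amp in state.items()}

def human_reads_sensor(state):
    # The Human's state becomes correlated with the Sensor's reading.
    return {(sensor, sensor, atom): amp for (_h, sensor, atom), amp in state.items()}

state = human_reads_sensor(sensor_interacts(state))
for config, amp in state.items():
    print(config, round(amp, 3))
# Only two branches remain, (LEFT, LEFT, LEFT) and (RIGHT, RIGHT, RIGHT),
# each with amplitude 1/sqrt(2); no (LEFT, LEFT, RIGHT)-style cross terms appear.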

At the end of this process, it may be that your brain in LEFT and your brain in RIGHT are, in a technical sense, communicating—that they have intersecting, interfering amplitude flows.

But the amplitude involved in this process, is the amplitude for a brain (plus all entangled particles) to leap into the other brain's state. This influence may, in a quantitative sense, exist; but it's exponentially tinier than the gravitational influence upon your brain of a mouse sneezing on Pluto.

By the same token, decoherence always entangles you with a blob of amplitude density, not a point mass of amplitude.  A point mass of amplitude would be a discontinuous amplitude distribution, hence unphysical.  The distribution can be very narrow, very sharp—even exponentially narrow—but it can't actually be pointed (nondifferentiable), let alone a point mass.

Decoherence, you might say, is pointless.

If a measuring instrument is sensitive enough to distinguish 10 positions with 10 separate displays on a little LCD screen, it will decohere the amplitude into at least 10 parts, almost entirely noninteracting.  In all probability, the instrument is physically quite a bit more sensitive (in terms of evolving into different configurations) than what it shows on screen.  You would find experimentally that the particle was being decohered (with consequences for momentum, etc.) more than the instrument was designed to measure from a human standpoint.
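(Here is a rough numerical sketch of that splitting, with made-up numbers: a smooth amplitude distribution over the Atom's position gets chopped into the ten sub-blobs that go with the ten possible display readings. The Gaussian blob and the crude one-bin-per-reading sensor model are assumptions for illustration only.)

import numpy as np

x = np.linspace(0.0, 10.0, 2001)                   # grid of Atom positions
dx = x[1] - x[0]
psi = np.exp(-((x - 5.0) ** 2) / (2 * 2.0 ** 2))   # some initial amplitude blob
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)      # normalize

# Each display reading r goes with the part of the Atom amplitude lying in
# the corresponding position bin [r, r+1) -- a crude idealization of the LCD.
branches = {r: psi * ((x >= r) & (x < r + 1)) for r in range(10)}

weights = {r: np.sum(np.abs(b) ** 2) * dx for r, b in branches.items()}
print(weights)   # squared-amplitude weight in each of the ~10 decohered parts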

But there is no such thing as infinite sensitivity in a continuous quantum physics:  If you start with blobs of amplitude density, you don't end up with point masses.  Liouville's Theorem, which generalizes the second law of thermodynamics, guarantees this: you can't compress probability.
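(A quick sketch of the "no compression" point, in the classical phase-space setting that Liouville's Theorem literally addresses; the quantum counterpart of this volume-preservation is the unitarity of the evolution. The harmonic-oscillator flow below is just an assumed example of a Hamiltonian flow.)

import numpy as np

# Exact harmonic-oscillator flow for time t: a rotation in (q, p) phase space.
def flow(q, p, t):
    return q * np.cos(t) + p * np.sin(t), -q * np.sin(t) + p * np.cos(t)

# The Jacobian of this flow map is a rotation matrix with determinant exactly 1,
# so a blob of phase-space density keeps its volume: it can narrow in q only by
# spreading in p, and it can never contract toward a point mass.
t = 0.7
J = np.array([[np.cos(t), np.sin(t)],
              [-np.sin(t), np.cos(t)]])
print(np.linalg.det(J))   # 1.0 -- probability is shuffled around, not compressed

# Check with a cloud of sample points: the determinant of the covariance
# (a rough "volume" of the blob) is unchanged by the flow.
rng = np.random.default_rng(0)
q, p = rng.normal(0, 0.1, 10_000), rng.normal(0, 1.0, 10_000)
q2, p2 = flow(q, p, t)
print(np.linalg.det(np.cov([q, p])), np.linalg.det(np.cov([q2, p2])))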

What if you measure the position of an Atom using an analog Sensor whose dial shows a continuous reading?

Think of probability theory over classical physics:

When the Sensor's dial appears in a particular position, that gives us evidence corresponding to the likelihood function for the Sensor's dial to be in that place, given that the Atom was originally in a particular position.  If the instrument is not infinitely sensitive (which it can't be, for numerous reasons), then the likelihood function will be a density distribution, not a point mass.  A very sensitive Sensor might have a sharp spike of a likelihood distribution, with density falling off rapidly.  If the Atom is really at position 5.0121, the likelihood of the Sensor's dial ending up in position 5.0123 might be very small.  And so, unless we had overwhelming prior knowledge, we'd assign only a tiny posterior probability to the Atom being so much as 0.0002 millimeters from the Sensor's indicated position.  That's probability theory over classical physics.
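(Here is that classical calculation as a small numerical sketch. The flat prior, the Gaussian likelihood, and its width of 0.00005 mm are all assumptions chosen just to make the numbers concrete.)

import numpy as np

atom = np.linspace(5.000, 5.025, 5001)     # hypotheses for the Atom's position (mm)
prior = np.ones_like(atom) / atom.size     # broad, nearly flat prior

dial = 5.0123                              # where the Sensor's dial ended up
sigma = 0.00005                            # assumed width of the Sensor's likelihood (mm)
likelihood = np.exp(-((dial - atom) ** 2) / (2 * sigma ** 2))

posterior = prior * likelihood
posterior /= posterior.sum()

# Posterior probability that the Atom is 0.0002 mm or more from the indicated position:
print(posterior[np.abs(atom - dial) >= 0.0002].sum())   # tiny (a few parts in 100,000)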

Similarly in quantum physics:

The blob of amplitude in which you find yourself, where you see the Sensor's dial in some particular position, will have a sub-distribution over actual Atom positions that falls off according to (1) the initial amplitude distribution for the Atom, analogous to the prior; and (2) the amplitude for the Sensor's dial (and the rest of the Sensor!) to end up in our part of configuration space, if the Atom started out in that position.  (That's the part analogous to the likelihood function.)  With a Sensor that is at all sensitive, the amplitude for the Atom to be in a state noticeably different from what the Sensor shows will taper off very sharply.

(All these amplitudes I'm talking about are actually densities, N-dimensional integrals over dx dy dz..., rather than discrete flows between discrete states; but you get the idea.)

If there's not a lot of amplitude flowing from initial particle position 5.0150 +/- 0.0001 to configurations where the sensor's LED display reads '5.0123', then the joint configuration of (Sensor=5.0123 * Atom=5.0150) ends up with very tiny amplitude.
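(The same structure, sketched numerically for the quantum version: the joint amplitude over (Sensor reading, Atom position) is the product of the Atom's initial amplitude blob and the Sensor's response amplitude, the two factors described above. The Gaussian shapes and the widths of 0.0001 are assumptions for illustration.)

import numpy as np

atom = np.linspace(5.010, 5.020, 1001)      # Atom positions
reading = np.linspace(5.010, 5.020, 1001)   # possible Sensor readings

# (1) Initial amplitude distribution over the Atom's position: a blob around
#     5.0150 with assumed width 0.0001 (the part analogous to the prior).
initial = np.exp(-((atom - 5.0150) ** 2) / (2 * 0.0001 ** 2))

# (2) Amplitude for the Sensor to end up showing `reading` if the Atom started
#     at `atom` (the part analogous to the likelihood), also of width 0.0001.
response = np.exp(-((reading[:, None] - atom[None, :]) ** 2) / (2 * 0.0001 ** 2))

joint = response * initial[None, :]         # amplitude over (reading, position)
joint /= np.sqrt((np.abs(joint) ** 2).sum())

def amp(sensor_value, atom_value):
    i = np.abs(reading - sensor_value).argmin()
    j = np.abs(atom - atom_value).argmin()
    return abs(joint[i, j])

print(amp(5.0150, 5.0150))   # matched configuration: sizable amplitude
print(amp(5.0123, 5.0150))   # Sensor=5.0123 * Atom=5.0150: vanishingly small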

 

Part of The Quantum Physics Sequence

Next post: "Decoherent Essences"

Previous post: "The Conscious Sorites Paradox"

Comments (5)

A typo: 5.0150 vs. 5.10150

"The physicists imagine a matrix with rows like Sensor=0.0000 to Sensor=9.9999, and columns like Atom=0.0000 to Atom=9.9999; and they represent the final joint amplitude distribution over the Atom and Sensor, as a matrix where the amplitude density is nearly all in the diagonal elements. Joint states, like (Sensor=1.234 Atom=1.234), get nearly all of the amplitude; and off-diagonal elements like (Sensor=1.234 Atom=5.555) get an only infinitesimal amount."

This is not what physicists mean when they refer to off-diagonal matrix elements. They are talking about the off-diagonal elements of a density matrix. In a density matrix, the rows and columns both refer to the same system; it is not a matrix with rows corresponding to states of one subsystem and columns corresponding to states of another. To put it differently, the density matrix is made by an outer product, whereas the matrix you have formulated is a tensor product. Notice that if the atom and sensor were replaced by discrete systems with unequal numbers of states, your matrix would not even be square, and in that case the notion of diagonal elements doesn't make sense.

Perhaps it still works, as long as he was saying not Sensor(0) but Sensor(Atom(0)), i.e., "Sensor says Atom is in state 0".

So it doesn't matter how many states the sensor has; what matters is how its state reflects the state of the Atom. Then the states could correspond one-to-one regardless of how many states the sensor has, and the probability would be concentrated on the diagonal.

Stephen: OK, have struck that section and will go back to see if I can figure out what the standard theory actually says.

"exponentially tinier" ... "exponentially narrow" [emphasis added]

WHAT is an exponential function of WHAT?