From a recent paper that is getting non-trivial attention...

"Quantum states are the key mathematical objects in quantum theory. It is therefore surprising that physicists have been unable to agree on what a quantum state represents. There are at least two opposing schools of thought, each almost as old as quantum theory itself. One is that a pure state is a physical property of system, much like position and momentum in classical mechanics. Another is that even a pure state has only a statistical significance, akin to a probability distribution in statistical mechanics. Here we show that, given only very mild assumptions, the statistical interpretation of the quantum state is inconsistent with the predictions of quantum theory. This result holds even in the presence of small amounts of experimental noise, and is therefore amenable to experimental test using present or near-future technology. If the predictions of quantum theory are confirmed, such a test would show that distinct quantum states must correspond to physically distinct states of reality."

From my understanding, the result works by showing how, if a quantum state is determined only statistically by some true physical state of the universe, then it is possible for us to construct clever quantum measurements that put statistical probability on outcomes for which there is literally zero quantum amplitude, which contradicts the Born rule. The assumptions required are very mild, and if this is confirmed in experiment it would give a lot of justification for a physicalist / realist interpretation of the Many Worlds point of view.
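
To make the "zero quantum amplitude" point concrete, here is a minimal numpy sketch of the kind of two-qubit construction I understand the paper to use (this is my reconstruction, so treat the exact measurement basis as an assumption and check the paper for details). Two systems are independently prepared in |0> or |+>, and each outcome of a suitably chosen entangled measurement has exactly zero amplitude on one of the four product preparations:

    import numpy as np

    # Single-qubit states
    zero = np.array([1.0, 0.0])
    one = np.array([0.0, 1.0])
    plus = (zero + one) / np.sqrt(2)
    minus = (zero - one) / np.sqrt(2)

    # The four independent two-qubit preparations
    preparations = {
        "|0>|0>": np.kron(zero, zero),
        "|0>|+>": np.kron(zero, plus),
        "|+>|0>": np.kron(plus, zero),
        "|+>|+>": np.kron(plus, plus),
    }

    # Entangled four-outcome measurement basis (my reconstruction)
    xi = [
        (np.kron(zero, one) + np.kron(one, zero)) / np.sqrt(2),
        (np.kron(zero, minus) + np.kron(one, plus)) / np.sqrt(2),
        (np.kron(plus, one) + np.kron(minus, zero)) / np.sqrt(2),
        (np.kron(plus, minus) + np.kron(minus, plus)) / np.sqrt(2),
    ]

    # Born rule: outcome k on preparation psi has probability |<xi_k|psi>|^2.
    # Each row printed below contains exactly one probability that is identically 0.
    for name, psi in preparations.items():
        probs = [abs(np.dot(x, psi)) ** 2 for x in xi]
        print(name, np.round(probs, 3))

As I understand the argument, if the underlying physical state were sometimes compatible with more than one of these preparations, the measurement could not always avoid the outcome that quantum theory forbids for each of them, so a zero-amplitude outcome would have to occur with nonzero probability.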

More from the paper:

"We conclude by outlining some consequences of the result. First, one motivation for the statistical view is the obvious parallel between the quantum process of instantaneous wave function collapse, and the (entirely non-mysterious) classical procedure of updating a probability distribution when new information is acquired. If the quantum state is a physical property of a system -- as it must be if one accepts the assumptions above -- then the quantum collapse must correspond to a real physical process. This is especially mysterious when two entangled systems are at separate locations, and measurement of one leads to an instantaneous collapse of the quantum state of the other.

In some versions of quantum theory, on the other hand, there is no collapse of the quantum state. In this case, after a measurement takes place, the joint quantum state of the system and measuring apparatus will contain a component corresponding to each possible macroscopic measurement outcome. This is unproblematic if the quantum state merely reflects a lack of information about which outcome occurred. But if the quantum state is a physical property of the system and apparatus, it is hard to avoid the conclusion that each macroscopically different component has a direct counterpart in reality.

On a related, but more abstract note, the quantum state has the striking property of being an exponentially complicated object. Specifically, the number of real parameters needed to specify a quantum state is exponential in the number of systems n. This has a consequence for classical simulation of quantum systems. If a simulation is constrained by our assumptions -- that is, if it must store in memory a state for a quantum system, with independent preparations assigned uncorrelated states -- then it will need an amount of memory which is exponential in the number of quantum systems.

For these reasons and others, many will continue to hold that the quantum state is not a real object. We have shown that this is only possible if one or more of the assumptions above is dropped. More radical approaches[14] are careful to avoid associating quantum systems with any physical properties at all. The alternative is to seek physically well motivated reasons why the other two assumptions might fail."
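
Regarding the exponential-parameters point in the quoted passage: an n-qubit pure state has 2^n complex amplitudes, so a classical simulation constrained as the authors describe needs memory exponential in n. A back-of-the-envelope sketch (just arithmetic, not from the paper):

    # Naive memory needed to store an n-qubit pure state as 2**n complex
    # amplitudes, at 16 bytes each (two 64-bit floats per amplitude).
    for n in (30, 40, 50):
        gib = 16 * 2**n / 2**30
        print(f"{n} qubits: 2**{n} amplitudes, about {gib:,.0f} GiB")

Fifty qubits already lands around sixteen million GiB, which is the sense in which the state is an "exponentially complicated object."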

On a related note, in one of David Deutsch's original arguments for why Many Worlds was straightforwardly obvious from quantum theory, he mentions Shor's quantum factoring algorithm. Essentially he asks any opponent of Many Worlds to give a real account, not just a parochial calculational account, of why the algorithm works when it is using exponentially more resources than could possibly be classically available to it. The way he put it was: "where was the number factored?"
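
For readers who haven't seen the structure of Shor's algorithm: the only quantum step is finding the order r of a randomly chosen a modulo N; the rest is classical number theory (this is standard textbook material, not something from the paper above). A toy sketch with the quantum order-finding replaced by brute force:

    from math import gcd

    def order(a, N):
        # Classical stand-in for the quantum subroutine: the smallest r > 0
        # with a**r % N == 1. Shor's algorithm finds r with the quantum
        # Fourier transform; everything else below is classical.
        r, x = 1, a % N
        while x != 1:
            x = (x * a) % N
            r += 1
        return r

    def factor_via_order_finding(N, a):
        if gcd(a, N) != 1:
            return gcd(a, N), N // gcd(a, N)    # lucky guess, no order-finding needed
        r = order(a, N)
        if r % 2 == 1 or pow(a, r // 2, N) == N - 1:
            return None                         # unlucky choice of a; retry with another
        p = gcd(pow(a, r // 2) - 1, N)
        return p, N // p

    print(factor_via_order_finding(15, 7))      # order of 7 mod 15 is 4 -> (3, 5)

Deutsch's question is aimed at that one quantum subroutine: for a large N the brute-force loop above is hopeless, and he wants a physical story for where the work that replaces it gets done.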

I was never convinced that regular quantum computation could really be used to convince someone of Many Worlds who did not already believe it, except possibly for bounded-error quantum computation where one must accept the fact that there are different worlds to find one's self in after the computation, namely some of the worlds where the computation had an error due to the algorithm itself (or else one must explain the measurement problem in some different way as per usual). But I think that in light of the paper mentioned above, Deutsch's "where was the number factored" argument may deserve more credence.

Added: Scott Aaronson discusses the paper here (the comments are also interesting).


Earlier, on Less Wrong: Quantum Non-Realism

The best you could theoretically do would not include saying anything like, "The wavefunction only gives us probabilities, not certainties." That, in retrospect, was jumping to a conclusion; the wavefunction gives us a certainty of many worlds existing. So that part about the wavefunction being only a probability, was not-quite-right. You calculated, but failed to shut up.


I have the highest respect for any historical physicists who even came close to actually shutting up and calculating, who were genuinely conservative in assessing what they did and didn't know. (...) My scorn is reserved for those who interpreted "We don't know why it works" as the positive knowledge that the equations were definitely not real.

This paper is not about the interpretations of quantum mechanics usually put in dichotomy 'round here. Bringing up many-worlds in this article is unnecessary.

"Interpreted statistically" refers, basically, to the idea often held pre-Bell's theorem that quantum mechanics (in any interpretation) wasn't "really describing what was going on." This paper has similar consequences to Bell's theorem (disproves that view if experiments back it up), though it's a bit less powerful but doesn't rely on entanglement.

[-][anonymous]12y50

I disagree very much, see the other comments about Bell's theorem.

On Google+, Matthew Leifer, a respected researcher in theoretical physics currently at University College London, replied as follows when he was asked what his conclusions were regarding the paper:

"Well, I knew this paper was coming, so it is not a surprise. Basically, it means that if you believe that quantum states are epistemic then you have two options left:

  1. neo-Copenhagenism: Claim that a deeper realist model was never needed to support an epistemic interpretation of the quantum state. The probabilities are just about measurement results, period.

  2. The ontological states have to be more bizarre than imagined in current approaches. For example, you could have retrocausality or “relational” degrees of freedom (whatever that means). Note that, one could also evade the theorem of this paper by claiming that quantum i.i.d. product states do not correspond to i.i.d. probability distributions in the ontological model. However, doing this does not evade a related theorem by Alberto Montina, which is based on a single system.

If neither of those options is to your taste, then you might as well become an Everettian or a Bohmian, since you are stuck with the state vector in your ontology in any case.

Overall, I would say that this result is not too surprising. I think that most people in the “psi-epistemic” camp already had the intuition that a psi-epistemic ontological model formulated in the usual way would not be possible. That is why most of us were already promoting other possibilities, e.g. Fuchs is in the neo-Copenhagen camp and Spekkens often mumbles things about relationalism. Personally, I am quite interested in the idea of retrocausal psi-epistemic hidden variable theories. It is at least a fairly clearly formulated problem to try and come up with one, whereas relationalism seems vague to me, at least as it is applied to quantum theory. If that doesn’t work out then I would probably end up being an Everettian. Despite the attraction of the Fuchsian program, realism has to win out in the end for me."

I feel like if you understood this, you could have put it in your own words.

Did the theorem presented in this paper make any distinction between measurement with collapse and measurement without collapse? Would the proven theorem break down if collapse was how the world worked?

No. There is no such distinction between measurement mechanisms in this paper. Instead, this paper is about the difference between the wavefunction uniquely corresponding to the physical reality vs. only corresponding to physical reality "on average."

[-][anonymous]12y20

I think your criticism here is a bit shortsighted. Just look at the longer passage that I quoted in the OP. It directly deals with some ramifications if you hold that the collapse of the wavefunction is real, namely that spooky action at a distance becomes an even worse problem. It becomes even harder to give an account of how physical quantum states separated by vast distances can be more than just classically correlated. In the very next paragraph they mention that in interpretations in which no actual collapse has to happen, their result gives further credence to the idea that distinct quantum states are distinct physically real things.

It's as if you just want me personally to be wrong, because both the OP and the Leifer quote above deal with the ramifications in the with-collapse case vs. the without-collapse case. I don't see how you can say that my choice to offer the quote means that I do not understand it. I also don't see how you can claim that Many Worlds is unrelated when the linked paper itself mentions ramifications for that case.

I didn't claim that this conclusively proves anything about collapse or ontological measurement, only that the ramifications do add something above and beyond Bell's theorem. And I stick by that.

Just look at the longer passage that I quoted in the OP. It directly deals with some ramifications if you hold that the collapse of the wavefunction is real, namely that spooky action at a distance becomes an even worse problem.

There is more than one interpretation of the passage you quoted, and I think a more neutral interpretation is more likely. In the first paragraph they highlight interpretations with collapse, explore the implications for interpretations with collapse, and point out an unintuitive consequence: "This is especially mysterious when two entangled systems are at separate locations, and measurement of one leads to an instantaneous collapse of the quantum state of the other."

In the second paragraph they highlight interpretations without collapse, explore the implications for interpretations without collapse, and point out an unintuitive consequence: "But if the quantum state is a physical property of the system and apparatus, it is hard to avoid the conclusion that each macroscopically different component has a direct counterpart in reality."

The paragraphs have the same structure. Ramifications are mentioned for both cases. So when you say "I also don't see how you can claim that Many Worlds is unrelated when the linked paper itself mentions ramifications for that case," it seems to me like you're reading asymmetrically. The authors appear to have simply explored the implications of their theorem for interpretations with and without collapse - and of course there were unintuitive bits for both, because it's quantum mechanics.

[-][anonymous]12y10

There is more than one interpretation of the passage you quoted, and I think a more neutral interpretation is more likely.

This is a good point. I should say, "One possible interpretation is...", but in either case I don't think my reliance on direct quotes to illustrate the stronger interpretation that I advocate should qualify as a failure to understand on my part. As I read the second paragraph, it seems to straightforwardly apply to Many Worlds in an important way, but I am totally willing to accept the point of view that the implications are less salient. It was just that your original comment:

Bringing up many-worlds in this article is unnecessary.

seemed unproductive to me. In what sense is it unnecessary? Unnecessary for understanding the original result? Sure... but I didn't bring up the original result for its own sake, only to discuss its implications for wavefunction collapse.

Although there is no direct effect on the state of the evidence, I guess you're right that there can be an indirect effect. For example, 'collapse' could look better than 'no-collapse' given wavefunction non-realism, but 'no-collapse' could look better than 'collapse' given wavefunction realism. In this case, changing our position on wavefunction realism would change the opinion on collapse vs. no-collapse.

But this effect only occurs to the extent that people already believe in the things disproved (or called into question). People who took this "statistical sort-of-nonrealism" model seriously, rather than as merely an interesting idea, are pretty rare even in the physics world*. And here on LW? Fuggedaboutit.

* I've never run into one, and they never came up when I talked to people working on this kind of stuff - the discussion mostly focused on the neo-Copenhagenists, to use Leifer's term, and on testing some specific sorts of collapse.

Wasn't this already proven by Bell's Theorem?

then the quantum collapse must correspond to a real physical process.

Not according to Many Worlds. Wavefunction collapse has never been experimentally observed. Every observation is as would be predicted by simple decoherence.

Wasn't this already proven by Bell's Theorem?

For the most part, but violations of Bell's inequality can also be explained by action at a distance. The kind of impossibility result in this paper is nevertheless really old. For example, conditional swap tests are well studied and are incompatible with any statistical interpretation of the wave function (in the same slightly stronger sense as this paper). There are experimental results that invalidate these interpretations quite directly. The view of the wavefunction as a reflection of statistical ignorance has not been tenable for a very long time. The interesting thing here is the "non-trivial attention" part.
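
For anyone who hasn't met the swap test mentioned above, here is a sketch of the standard textbook version (my own example, not necessarily the "conditional" variant the parent comment has in mind): an ancilla in |+> controls a SWAP of two registers, and the ancilla is then measured as 0 with probability (1 + |<psi|phi>|^2) / 2, so the statistics of repeated runs reveal the overlap of the two states.

    import numpy as np

    def swap_test_accept_prob(psi, phi):
        # Circuit: ancilla |0> -> H -> controlled-SWAP(psi, phi) -> H -> measure.
        # Returns P(ancilla = 0) = (1 + |<psi|phi>|^2) / 2.
        d = len(psi)
        state = np.kron(np.array([1.0, 0.0]), np.kron(psi, phi))  # ancilla (x) psi (x) phi
        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
        I = np.eye(d * d)
        SWAP = np.zeros((d * d, d * d))
        for i in range(d):
            for j in range(d):
                SWAP[i * d + j, j * d + i] = 1.0   # |i>|j> -> |j>|i>
        CSWAP = np.block([[I, np.zeros_like(I)], [np.zeros_like(I), SWAP]])
        state = np.kron(H, I) @ CSWAP @ np.kron(H, I) @ state
        return np.sum(np.abs(state[: d * d]) ** 2)  # probability ancilla reads 0

    zero = np.array([1.0, 0.0])
    one = np.array([0.0, 1.0])
    plus = (zero + one) / np.sqrt(2)
    print(swap_test_accept_prob(zero, zero))   # 1.0  (identical states)
    print(swap_test_accept_prob(zero, plus))   # 0.75 (overlap squared = 1/2)
    print(swap_test_accept_prob(zero, one))    # 0.5  (orthogonal states)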

[-][anonymous]12y00

I think there is some confusion going on here. Bell's theorem tells you that local hidden variables cannot reproduce quantum mechanics, i.e. that the uncertainty in quantum outcomes is not like the uncertainty of a coin toss, where - as Persi Diaconis put it - "if you hit a coin in the same place with the same force, it does the same thing." This does not settle the question of whether wavefunction collapse is a physically real event or just a calculational tool that lets you predict what nature will do with some accuracy without giving you any hint into how it is that nature is doing it.
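
As a concrete reminder of what Bell's theorem rules out (standard CHSH material, my own sketch, not taken from the paper): any local hidden variable model keeps the CHSH combination of correlations at or below 2, while the singlet state with suitably chosen measurement angles reaches 2*sqrt(2).

    import numpy as np

    # Quantum correlation for spin measurements on the singlet state along
    # directions a and b (angles in one plane): E(a, b) = -cos(a - b).
    def E(a, b):
        return -np.cos(a - b)

    # Standard optimal settings: Alice uses 0 and pi/2, Bob uses pi/4 and 3*pi/4.
    a1, a2, b1, b2 = 0.0, np.pi / 2, np.pi / 4, 3 * np.pi / 4
    S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
    print(abs(S), 2 * np.sqrt(2))   # |S| = 2*sqrt(2), above the local bound of 2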

The paper shows that this "quantum-state-as-statistical-calculational-tool" approach is inconsistent with a few reasonable assumptions.

Also, Many Worlds does tell you that quantum collapse corresponds to a physically real process, namely decoherence under the usual linear evolution rule. It tells you that the "extra" measurement phenomenon postulated by other interpretations is not really there. So none of this is in conflict with MW; in fact, it adds credibility to MW beyond what Bell's theorem already does.

I don't consider decoherence the same thing as wavefunction collapse. Wavefunction collapse seems to imply that the state suddenly collapses into an eigenstate, rather than just slowly morphing from one blob into two.

[-][anonymous]12y30

I am treating "wavefunction collapse" as a generic term for "whatever it is that causes 'me' to see the outcomes that I do." Orthodox views treat this wavefunction collapse as an extra "measurement" rule and then mumble around the issue of what exactly constitutes a measurement, usually trying to say that none of it is 'physically real' so such questions are moot. Many Worlds treats the wavefunction collapse as a parochial artifact of having been in a particular Everett branch and figuring that out by becoming entangled with some particular other particles.

But I agree that if one defines wavefunction collapse to be only that sort of instantaneous updating to only one particular state that the orthodox views take, then Many Worlds language just does away with it altogether. I prefer to use the same term for both phenomena because I have found that it helps people who are uneasy with MW to realize that it's not all that weird.

In my view, the result of the linked paper weakens the orthodox position of denying that "wavefunction collapse" is physically real.

This result is much stronger than Bell's theorem. I don't understand all the details, but it seems to apply in a much broader range of contexts. ETA: To be clearer, Bell's theorem applies to entangled particles. This doesn't seem to require entanglement as long as there's not too much pathological behavior.

It sounds like this would provide some testable predictions, am I understanding that correctly?

[-][anonymous]12y00

Yes. We would be able to set up experiments where one interpretation would assign non-zero chance to an outcome that has zero quantum amplitude. Run the experiment enough times, and if the positive-chance-but-zero-amplitude event ever happens, then quantum states are merely a calculational tool. If the zero-amplitude event never happens, then as the number of repetitions grows, you can be as close to certain that the quantum state is real as you'd like. It's worth mentioning that almost all physicists would believe the latter is more likely, i.e. that Born's rule will keep holding up under experiment. But few want to carry that to further conclusions. This paper adds another layer of difficulty to believing the amplitude rule while not believing that the amplitude is physically real.
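
As a rough illustration of how fast that certainty accumulates (my own arithmetic, with a made-up epsilon): if some statistical interpretation were forced to assign the forbidden outcome at least probability eps per run, the chance of never seeing it in N runs shrinks like (1 - eps)**N.

    # Hypothetical model that must give the quantum-forbidden outcome
    # probability at least eps; quantum theory predicts exactly 0.
    eps = 0.01
    for N in (100, 1_000, 10_000):
        print(N, (1 - eps) ** N)   # chance the model survives N null results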

[This comment is no longer endorsed by its author]

Posts use HTML, not the quick markup used in comments.

[-][anonymous]12y00

I was trying to use HTML (note the successful links in the first line), but every time I try to do blockquotes with HTML, it screws up the font. I'm not sure why; it only seems to happen to me when editing HTML on LessWrong.

(Fixed the markup.)

Note that Scott Aaronson is one of the world's leading experts in quantum computation and he's roughly agnostic about MWI.

The whole idea that quantum computation is evidence for Many Worlds seems dodgy to me given that one then has to ask why quantum computation seems to be able to do so little. BQP is contained in PSPACE (the containment is widely believed, though not proven, to be proper) and likely doesn't include all of NP. If one takes that seriously and believes in Many Worlds, one then has to ask why quantum computers are so weak.

[-][anonymous]12y50

It's funny you bring this up, because I am in this course with Scott right now.

Note that the issue is whether quantum states are physically real, in which case the fact that Shor's algorithm exploits the cancellation of amplitudes would be evidence of many worlds in the sense of many neatly factorizing amplitude blobs. None of this depends on whether quantum computing is more powerful than classical computing, only on how the computation is being done. Also, bounded-error quantum algorithms pose another issue, since the outcome can be viewed statistically (which the linked paper casts into doubt).
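
A toy illustration of what "cancellation of amplitudes" means (my own minimal example, nothing specific to Shor): applying a Hadamard gate twice returns |0> to |0> because the two computational paths leading to |1> carry amplitudes +1/2 and -1/2 and interfere destructively.

    import numpy as np

    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    zero = np.array([1.0, 0.0])

    print(H @ zero)        # ~[0.707, 0.707]: equal amplitude on |0> and |1>
    print(H @ (H @ zero))  # ~[1, 0]: the |1> amplitude has cancelled out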

We just had a sequence of classes on quantum computation and posted several interesting debates to our course blog. In particular, the paper I linked above was posted in the comments thread for our discussion about whether quantum computing can or cannot offer insight in the debate over interpretations.

Look here for the blog post about Many Worlds and look here for the new posts about quantum computing and closed timelike curves.

My username in those discussions is 'bobthebayesian' and I would welcome criticism of my ideas if it is constructive and helps me update to better understanding. However, I think Scott wants us to keep the blog mostly for students in the class and with few or no posts from outsiders.

For what it's worth, Scott presents great challenges for Many Worlds that do not suffer from the usual shock-level paranoia that most people have when they hear about it. He has no problem believing the trippy / weird consequences of Many Worlds. He put it well as follows on that class blog:

As I see it, the question then is whether we should be satisfied with MWI’s clear advantages in simplicity and elegance, or whether we should continue to search for a less “trippy” explanation. (After all, there are many simple, elegant theories whose “only” flaw is their failure to account for various aspects of our experience!)

I think the paper linked in the OP gives more reason to be satisfied with the simplicity of Many Worlds, beyond Bell's theorem.

Also, if we want an argument from authority, Hawking, Feynman, Deutsch, and Weinberg all side with Many Worlds. Yes, they have some nuanced beliefs. Weinberg put it interestingly when he said Many Worlds is "like democracy... terrible except for the alternatives." I don't think that this (nor Aaronson's agnosticism towards MWI) "proves" anything other than that it is a difficult problem. I, for one, do not share Deutsch's view that MWI is straightforwardly obvious, especially not when you consider all the issues in trying to understand why we see the Born probabilities instead of something else (see here).

I do think that it is straightforward that we should not postulate "measurement" as an ontologically basic thing, though. And this is why MWI is the best theory we have so far. (Bohmian mechanics would be worth consideration if it weren't for predictions that don't agree with experiment and the inherent underdetermination problem that it suffers.)

I think the paper linked in the OP gives more reason to be satisfied with the simplicity of Many Worlds, beyond Bell's theorem.

This seems right. Moreover, if I'm reading it correctly (although this is far from my area of expertise), it suggests that any consistent interpretation other than MWI will likely have the same weird aspects as MWI, or others of equivalent weirdness. This both makes MWI stronger and means that people who are holding out because they think something else will come along are likely out of luck.

If one takes that seriously and believes in Many Worlds one then has to ask why quantum computers are so weak.

No interpretation of quantum mechanics says anything at all about the extent of BQP, which is (as you well know) a purely mathematical question that has nothing to do with the laws of physics.

MWI implies that quantum computers compute BQP, and depending on how you specify Copenhagen, it implies either that quantum computers compute P or that they compute some mystical complexity class whose definition depends on the notion of "observer." This is the sense in which quantum computation provides evidence for MWI. Your comment is unrelated to the reasons for Scott Aaronson's agnosticism.

I think we're discussing different aspects. The essential argument for MWI based on computational issues that I'm addressing is the one in the second-to-last paragraph of the above post:

On a related note, in one of David Deutsch's original arguments for why Many Worlds was straightforwardly obvious from quantum theory, he mentions Shor's quantum factoring algorithm. Essentially he asks any opponent of Many Worlds to give a real account, not just a parochial calculational account, of why the algorithm works when it is using exponentially more resources than could possibly be classically available to it.

The problem with that sort of argument is that it proves too much since one would then have to explain why one in fact only gets a bit more computational power over classical systems.

The problem with that sort of argument is that it proves too much since one would then have to explain why one in fact only gets a bit more computational power over classical systems.

I don't see what you mean. Quantum computers seem to use this additional resource which is not available classically, as the existence of any classically impossible quantum algorithm shows. This argument doesn't show that quantum computers get arbitrary access to this additional resource.

If I claim to have access to non-classical physics and show you one classically impossible feat, you should probably accept my argument. It is not compelling if you say "but what about this other classically impossible feat which you cannot achieve" and then ignore the explanation.

Well, but no one is disagreeing that quantum computers have access to non-classical resources. The problem is that explaining that by saying one has access to resources in other parts of the wavefunction creates the question "why do you only have a tiny bit of access?", which is about as large a question. It isn't clear that that's any more satisfactory than simply saying that the actual laws of physics don't work as our intuition would expect them to.

why do you only have a tiny bit of access

Because the effect of the wavefunction drops off quickly with distance.

MWI predicts that it should be difficult to use these resources. Classical physics predicts that it should be impossible. Ergo, the fact that they're difficult to access is evidence for MWI.

[-][anonymous]12y00

Exactly. It makes no difference to Deutsch's argument how powerful quantum computers are. If we had waited exponential time for a classical computer to do it, we would not wonder "where the number was factored." Waiting only polynomial time for it to be factored is what raises the question.

I am well aware of the extremely interesting complexity limitations of quantum computing. It definitely only extends computational capability a little bit -- and we still can't even prove that P does not equal NP. But none of this is relevant to Deutsch's "where was the number factored" argument. He is saying that if quantum states are physically real and not just a calculational tool, then you have to give a physical account of how Shor's algorithm works, and the orthodox views of wavefunction collapse cannot do that.