I used to read Lubos Motl's blog (maybe between 2005-2010 or something?), first because I had had him as a QFT professor and liked him personally, and later because, I dunno, I found his physics posts informative and his non-physics ultra-right-wing posts weirdly entertaining and interesting in an insane way. Anyway he used to frequently post rants against the Many Worlds Interpretation, and in favor of the Copenhagen interpretation. (Maybe he still does, I dunno.) After reading those rants and sporadically pushing back in the comments, I maybe came to understand his perspective, though I could be wrong.
So, here's my attempt to describe Lubos's perspective (which he calls the Copenhagen interpretation) from your (and my) perspective:
Every now and then, you learn something about what Everett branch you happen to be in. For example, you peer at the spin-o-meter and it says "This electron is spin up". Before you looked, you had written in your lab notebook that the (partial trace) density matrix for the electron was [[0.5, 0], [0, 0.5]]. But after you see the spin-o-meter, you pull out your eraser and write a new (partial trace) density matrix for the electron in your lab notebook, na...
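(To make that notebook update concrete, here's a quick numpy sketch; illustrative only, taking "spin up" to be the z-basis state.)

```python
import numpy as np

# Mixed (partial-trace) density matrix for the electron before looking:
# equal uncertainty between spin-up and spin-down.
rho_before = np.array([[0.5, 0.0],
                       [0.0, 0.5]])

# The spin-o-meter reads "spin up": project onto |up><up| and renormalize.
up = np.array([[1.0], [0.0]])     # |up> in the z-basis
P_up = up @ up.T                  # projector |up><up|
rho_after = P_up @ rho_before @ P_up
rho_after = rho_after / np.trace(rho_after)

print(rho_after)  # [[1, 0], [0, 0]] -- the entry you erase and rewrite
```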
Yeah... to paraphrase Deutsch, that just sounds like multiple worlds in a state of chronic denial. Also, it is possible for other Everett branches to influence yours, the probability just gets so infinitesimally tiny as they decohere that it's negligible in practice.
I'm very confused by the mathematical setup. Probably it's because I'm a mathematician and not a physicist, so I don't see things that would be clear to a physicist. My knowledge of quantum mechanics is very, very basic, but nonzero. Here's how I rewrote the setup part of your paper as I was going along; I hope I got everything right.
You have a system $S$ which is some (separable, complex, etc.) Hilbert space. You also have an observer system $O$ (which is also a Hilbert space). Elements of the various Hilbert spaces are called "states". Then you have the joint system $S \otimes O$, of which $\Psi$ is an element, and which comes with a (unitary) time-evolution $U_t$. Now if $S$ were not being observed, it would evolve by some (unitary) time-evolution $U^S_t$. We assume (though I think functional analysis gives this to us for free) that $\{\varphi_i\}$ is an orthonormal basis of eigenfunctions of $U^S_t$, with eigenvalues $\lambda_i$.
Ok, now comes the trick: we assume that observation doesn't change the system, i.e. that the $S$-component of $\Psi$ is unchanged. Wait, that doesn't make sense! $\Psi$ doesn't have an "$S$-component"; something like an $S$-component makes sense only for pure states, and if you have mixed states then the idea breaks down...
I mean I could accept that the Schrödinger equation gives the evolution of the wave-function, but why care about its eigenfunctions so much?
I'm not sure if this will be satisfying to you but I like to think about it like this:
This isn't a derivation but it makes the mathematical structure of QM somewhat plausible to me.
The confusion on the topic of interpretations comes from the failure to answer the question, what is an "interpretation" (or, more generally, a "theory of physics") even supposed to be? What is its type signature, and what makes it true or false?
Imagine a robot with a camera and a manipulator, whose AI is a powerful reinforcement learner, with a reward function that counts the amount of blue seen in the camera. The AI works by looking for models that are good at predicting observations, and using those models to make plans for maximizing blue.
Now our AI discovered quantum mechanics. What does it mean? What kind of model would it construct? Well, the Copenhagen interpretation does a perfectly good job. The wave function evolves via the Schrodinger equation, and every camera frame there is collapse. As long as predicting observations is all we need, there's no issue.
It gets more complicated if you want your agent to have a reward function that depends on unobserved parameters (things in the outside world), e.g. the number of paperclips in the universe. In this case Copenhagen is insufficient, because in Copenhagen an observable is undefined when you don't measure it. But MWI also doe...
I think that physics is best understood as answering the question “in what mathematical entity do we find ourselves?”—a question that Everett is very equipped to answer. Then, once you have an answer to that question, figuring out your observations becomes fundamentally a problem of locating yourself within that object, which I think raises lots of interesting anthropic questions, but not additional physical ones.
I disagree. "In what mathematical entity do we find ourselves?" is a map-territory confusion. We are not in a mathematical entity; we use mathematics to construct models of reality. And, in any case, without "locating yourself within the object", it's not clear how you know whether your theory is true, so it's very much pertinent to physics.
Moreover, I'm not sure how this perspective justifies MWI. Presumably, the wavefunction contains multiple "worlds", hence you conclude that multiple worlds "exist". However, consider an alternative universe with stochastic classical physics. The "mathematical entity" would be a probability measure over classical histories. So it can also be said to contain "multiple worlds". But in that universe everyone would be comfortable with saying there's just one non-deterministic world. So, you need something else to justify the multiple worlds, but I'm not sure what. Maybe you would say the stochastic universe also has multiple worlds, but then it starts looking like a philosophical assumption that doesn't follow from physics.
FYI, the SEP article on decoherence in QM is not anonymous, but rather by Guido Bacciagaluppi, which you can find by scrolling to the bottom.
Accepting that probability is some function of the magnitude of the amplitude, why should it be linear exactly under orthogonal combinations?
Sorry, but the Copenhagen interpretation, with the important proviso that observables, not 'the wavefunction', are what's real, is presently the best 'interpretation' of quantum mechanics, because it's the only one that actually works in all situations where QM is applied.
As someone wishing to understand reality, you are of course free to speculate that the wavefunction is a real thing and not just a step in a calculation, and that it is some kind of multiverse. But if you then wish to proclaim that this is obviously the truth, then the onus is on yo...
In quantum field theory the wave function is an operator at each point in spacetime, and it works out that everything is consistent with experiments across reference frame changes and nothing travels faster than the speed of light, etc. That's all experimentally established. Can you say again what's the problem?
everything that is frame-dependent vanishes by the end of the calculation
I mean, velocity is frame-dependent, right? You can measure velocity, it doesn't vanish at the end of the calculation... It's different in different reference frames, of course, and that's fine, because its reference-frame-dependence is consistent with everything else and with experiments. So what do you mean? Sorry if I'm just not understanding you here, you can try again...
Hmm, I guess you could make it clearer by focusing on gauge dependence. "The wave function is gauge dependent, so how can you say it's "real"?" Is that similar to your argument? If so, I guess I'm sympathetic to that argument, and I would say that the "real" thing is the equivalence class of wave functions up to gauge transformations, or something like that...
First, I was already on board with all the content of this post. My question is this: would there be any difference, or would it help resolve any confusion for anyone, if instead we said something like "There is still just one 'world' in the sense that there's one universal equation constantly following the same rule. The math shows that that world consists of many non-interacting parts, and the number of non-interacting parts grows with time. For convenience, when performing experiments, we ignore the non-interacting components, just like we already ignore components outside the experimental system, only now we also re-normalize to exclude the non-interacting components"?
Do you see any technical or conceptual challenges which the MWI has yet to address or do you think it is a well-defined interpretation with no open questions?
What's your model for why people are not satisfied with the MWI? The obvious ones are 1) dislike for a many worlds ontology and 2) ignorance of the arguments. Do you think there are other valid reasons?
This added to my layperson's understanding of both MWI and quantum mechanics more generally.
Immediately under the subhead "The Apparent Collapse of The Wave Function," what is a-sub-i in the initial state?
Let W be the shortest program which computes the wave equation. Since the wave equation is a component of all quantum theories, it must be that |W| ≤ |Ti|. Thus, the smallest that any Ti could possibly be is |W|, such that any Ti of length |W| is at least twice as probable as a Ti of any other length. The Everett interpretation is such a Ti, since it requires nothing else beyond wave mechanics, and follows directly from it.
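(As a toy illustration of the length-prior reasoning here, assuming the standard Solomonoff-style prior that weights a program of length L bits by 2^(-L); the specific number 100 below is made up for illustration.)

```python
# Solomonoff-style prior over program lengths: weight 2**(-L) for L bits.
def prior(length_bits):
    return 2.0 ** (-length_bits)

W = 100  # hypothetical length |W| (in bits) of the wave-equation program

# A theory of minimal length |W| is at least twice as probable as any
# theory that needs even one extra bit of postulates:
assert prior(W) / prior(W + 1) == 2.0

# Each additional postulate costing k bits divides the prior by 2**k.
print(prior(W) / prior(W + 50))  # 2**50, a ~1e15-fold penalty
```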
What exactly are we doing here? Calculating the complexity of a MW ontology versus a Copenhagen ontology, or figuring out the simple...
One of the most puzzling aspects of quantum mechanics is the fact that, when one measures a system in a superposition of multiple states, it is only ever found in one of them.
When a QM state is written concretely enough to make a prediction, it is written on a basis. If it can be written as a single term on a suitable choice of basis, then it is what is known as a pure state. Note that there is no fact of the matter about whether a pure state is superposed, or how it is superposed, unless there is an objective fact about its basis. If the basis is not an...
The wave function is described using imaginary numbers. If we are "taking the wave function seriously as a physical entity", does that mean the imaginary part has physical meaning? For example, if a cat has amplitude (0; 1), does that mean the real part of the cat doesn't exist, but the imaginary part is full of life?
The following post is an adaptation of a paper I wrote in 2017 that I thought might be of interest to people here on LessWrong. The paper is essentially my attempt at presenting the clearest and most cogent defense of the Everett interpretation of quantum mechanics—the interpretation that I very strongly believe to be true—as I could (at least using only undergraduate wave mechanics, which was the level at which I wrote the paper). My motivation for posting this now is that I was recently talking with a colleague of mine who mentioned that they had stumbled upon my paper recently and really enjoyed it, and so realizing that I hadn't ever really shared it here on LessWrong, I figured I would put it out there in case anyone else found it similarly useful or interesting.
It's also worth noting that LessWrong has a storied history with the Everett interpretation, with Yudkowsky also defending it quite vigorously. I actually cite Eliezer at one point in the paper—and I basically agree with what he said in his sequence—though I hope that if you bounced away from that sequence you'll find my paper more persuasive. Also, I include Everett's derivation of the Born rule, which is something that I think is quite important and that I expect even a lot of people very familiar with the Everett interpretation won't have seen before.
Abstract
We seek to present and defend the view that the interpretation of quantum mechanics is no more complicated than the interpretation of plate tectonics: that which is being studied is real, and that which the theory predicts is true. The view which holds that the mathematical formalism of quantum mechanics—without any additional postulates—is a complete description of reality is known as the Everett interpretation. We seek to defend the Everett interpretation of quantum mechanics as the most probable interpretation available. To accomplish this task, we analyze the history of the Everett interpretation, provide mathematical backing for its assertions, respond to criticisms that have been leveled against it, and compare it to its modern alternatives.
Introduction
One of the most puzzling aspects of quantum mechanics is the fact that, when one measures a system in a superposition of multiple states, it is only ever found in one of them. This puzzle was dubbed the “measurement problem,” and the first attempt at a solution was by Werner Heisenberg, who in 1927 proposed his theory of “wave function collapse.”[1] Heisenberg proposed that there was a cutoff length, below which systems were governed by quantum mechanics, and above which they were governed by classical mechanics. Whenever quantum systems encounter the cutoff point, the theory stated, they collapse down into a single state with probabilities following the squared amplitude, or Born, rule. Thus, the theory predicted that physics just behaved differently at different length scales. This traditional interpretation of quantum mechanics is usually referred to as the Copenhagen interpretation.
From the very beginning, the Copenhagen interpretation was seriously suspect. Albert Einstein was famously displeased with its lack of determinism, saying “God does not play dice,” to which Niels Bohr quipped in response, “Einstein, stop telling God what to do.”[2] As clever as Bohr’s answer is, Einstein—with his famous physical intuition—was right to be concerned. Though Einstein favored a hidden variable interpretation[3], which was later ruled out by Bell’s theorem[4], the Copenhagen interpretation nevertheless leaves open many questions. If physics behaves differently at different length scales, what is the cutoff point? What qualifies as a wave-function-collapsing measurement? How can physics behave differently at different length scales, when macroscopic objects are made up of microscopic objects? Why is the observer not governed by the same laws of physics as the system being observed? Where do the squared amplitude Born probabilities come from? If the physical world is fundamentally random, how is the world we see selected from all the possibilities? How could one explain the applicability of quantum mechanics to macroscopic systems, such as Chandrasekhar’s insight in 1930 that modeling neutron stars required the entire star to be treated as a quantum system?[5]
The Everett Interpretation of Quantum Mechanics
Enter the Everett Interpretation. In 1956, Hugh Everett III, then a doctoral candidate at Princeton, had an idea: if you could find a way to explain the phenomenon of measurement from within wave mechanics, you could do away with the extra postulate of wave function collapse, and thus many of the problems of the Copenhagen interpretation. Everett worked on this idea under his thesis advisor, Einstein-prize-winning theoretical physicist John Wheeler, who would later publish a paper in support of Everett’s theory.[6] In 1957, Everett finished his thesis “The Theory of the Universal Wave Function,”[7] published as the “‘Relative State’ Formulation of Quantum Mechanics.”[8] In his thesis, Everett succeeded in deriving every one of the strange quirks of the Copenhagen interpretation—wave function collapse, the apparent randomness of measurement, and even the Born rule—from purely wave mechanical grounds, as we will do in the "Mathematics of the Everett Interpretation" section.
Everett’s derivation relied on what was at the time a controversial application of quantum mechanics: the existence of wave functions containing observers themselves. Everett believed that there was no reason to restrict the domain of quantum mechanics to only small, unobserved systems. Instead, Everett proposed that any system, even the system of the entire universe, could be encompassed in a single, albeit often intractable, “universal wave function.”
Modern formulations of the Everett interpretation reduce his reasoning down to two fundamental ideas:[9][10][11][12][13]

1. The wave function obeys the same wave mechanics at all times, for all systems.
2. The wave function is a real physical entity.
Specifically, the first statement precludes wave function collapse and demands that we continue to use the same wave mechanics for all systems, even those with observers, and the second statement demands that we accept the physical implications of doing so. The Everett interpretation is precisely that which is implied by these two statements.
Importantly, neither of these two principles is an additional assumption on top of traditional quantum theory—instead, they are simplifications of existing quantum theory, since they act only to remove the prior ad-hoc postulates of wave function collapse and the non-universal applicability of the wave equation.[11][14] The beauty of the Everett interpretation is the fact that we can remove the postulates of the Copenhagen interpretation and still end up with a theory that works.
DeWitt’s Multiple Worlds
Removing the Copenhagen postulates had some implications that did not mesh well with many physicists’ existing physical intuitions. If one accepted Everett’s universal wave function, one was forced to accept the idea that macroscopic objects—cats, people, planets, stars, galaxies, even the entire universe—could be in a superposition of many states, just as microscopic objects could. In other words, multiple different versions of the universe—multiple worlds, so to speak—could exist simultaneously. It was for this reason that Einstein-prize-winning physicist Bryce DeWitt, a supporter of the Everett interpretation, dubbed Everett’s theory of the universal wave function the “multiworld” (or now more commonly “multiple worlds”) interpretation of quantum mechanics.[9]
While the idea of multiple worlds may at first seem strange, to Everett it was simply an extension of the normal laws of quantum mechanics. Simultaneous superposition of states is something physicists already accept for microscopic systems whenever they do quantum mechanics—by virtue of the overwhelming empirical evidence in favor of it. Not only that, but evidence keeps coming out demonstrating superpositions at larger and larger length scales. In 1999, for example, it was demonstrated that Carbon-60 molecules can be put into a superposition.[15] While it is unlikely that a superposition of such a macroscopic object as Schrödinger's cat will ever be conclusively demonstrated, due to the difficulty of isolating such a system from the outside world, it is likely that the trend of demonstrating superposition at larger and larger length scales will continue. To refuse to accept that a cat could be in a superposition, even if we can never demonstrate it, is thus a failure of induction—a rejection of an empirically-demonstrated trend.
While the Everett interpretation ended up implying the existence of multiple worlds, this was never Everett’s starting point. The “multiple worlds” of the Everett interpretation were not added to traditional quantum mechanics as new postulates, but rather fell out from the act of taking away the existing ad-hoc postulates of the Copenhagen interpretation—a consequence of taking the wave function seriously as a fundamental physical entity. In Everett’s own words, “The aim is not to deny or contradict the conventional formulation of quantum theory, which has demonstrated its usefulness in an overwhelming variety of problems, but rather to supply a new, more general and complete formulation, from which the conventional interpretation can be deduced.”[8] Thus, it is not surprising that Stephen Hawking and Nobel laureate Murray Gell-Mann, supporters of the Everett interpretation, have expressed reservations with the name “multiple worlds interpretation,” and therefore we will continue to refer to the theory simply as the Everett interpretation instead.[16]
The Nature of Observation
Accepting the Everett interpretation raises an important question: if the macroscopic world can be in a superposition of multiple states, what differentiates them? Stephen Hawking has the answer: “in order to determine where one is in space-time one has to measure the metric and this act of measurement places one in one of the various different branches of the wave function in the Wheeler-Everett interpretation of quantum mechanics.”[17] When we perform an observation on a system whose state is in a superposition of eigenfunctions, a version of us sees each different, possible eigenfunction. The different worlds are defined by the different eigenfunctions that are observed.
We can show this, as Everett did, just by acknowledging the existence of universal, joint system-observer wave functions.[7][8] Before measuring the state of a system in a superposition, the observer and the system are independent—we can get their joint wave function simply by multiplying together their individual wave functions. After measurement, however, the two become entangled—that is, the state of the observer becomes dependent on the state of the system that was observed. The result is that for each eigenfunction in the system’s superposition, the observer’s wave function evolves differently. Thus, we can no longer express their joint wave function as the product of their individual wave functions. Instead, we are forced to express the joint wave function as a sum of different components, one for each possible eigenfunction of the system that could be observed. These different components are the different “worlds” of the Everett interpretation, with the only difference between them being which eigenfunction of the system was observed. We will formalize this reasoning in the "The Apparent Collapse of The Wave Function" section.
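The product-to-entangled transition described above can be sketched numerically. This is a toy model, not from the paper: a two-state system, a three-state observer ("ready" / "saw 0" / "saw 1"), and a rank test to distinguish product states from entangled ones.

```python
import numpy as np

# System S in superposition a0|phi0> + a1|phi1>; observer O starts "ready".
a = np.array([1.0, 1.0]) / np.sqrt(2)   # amplitudes a_i
phi = np.eye(2)                          # eigenfunctions |phi0>, |phi1>
psi_ready = np.array([1.0, 0.0, 0.0])    # observer basis: ready / saw-0 / saw-1
psi_saw = np.array([[0.0, 1.0, 0.0],     # |psi_0>: observer saw phi0
                    [0.0, 0.0, 1.0]])    # |psi_1>: observer saw phi1

# Before measurement: a product state psi (x) sum_i a_i phi_i.
before = np.kron(psi_ready, a[0] * phi[0] + a[1] * phi[1])

# After measurement: sum_i a_i psi_i (x) phi_i -- no longer a product.
after = (a[0] * np.kron(psi_saw[0], phi[0])
         + a[1] * np.kron(psi_saw[1], phi[1]))

# A joint pure state is a product iff its observer-by-system matrix has rank 1.
print(np.linalg.matrix_rank(before.reshape(3, 2)))  # 1 -> independent
print(np.linalg.matrix_rank(after.reshape(3, 2)))   # 2 -> entangled
```

The rank of the reshaped amplitude matrix (its Schmidt rank) counts the number of components the joint wave function splits into, i.e. the number of "worlds".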
We are still left with the question, however, of why we experience a particular probability of seeing some states over others, if every state that can be observed is observed. Informally, we can think of the different worlds—the different possible observations—as being “weighted” by their squared amplitudes, and which one of the versions of us we are as a random choice from that weighted distribution. Formally, we can prove that under the Everett interpretation, if an observer interacts with many systems each in a superposition of multiple states, the distribution of states they see will follow the Born rule.[7][8][18][11][19][14] A portion of Everett’s proof of this fact is included in the "The Born Probability Rule" section.
The Mathematics of the Everett Interpretation
Previously, we asserted that universally-applied wave mechanics was sufficient, without ad-hoc postulates such as wave function collapse, to imply all the oddities of the Copenhagen interpretation. We will now prove that assertion. In this section, as per the Everett interpretation, we will accept that basic wave mechanics is obeyed for all physical systems, including those containing observers. From that assumption, we will show that the apparent phenomena of wave function collapse, random measurement, and the Born rule follow. The proofs given below are adapted from Everett’s original paper.[7][8]
The Apparent Collapse of The Wave Function
Suppose we have a system $S$ with eigenfunctions $\{\varphi_i\}$ and initial state $\varphi = \sum_i a_i \varphi_i$. Consider an observer $O$ with initial state $\psi$. Let $\psi_{i,j,\ldots}$ be the state of $O$ after observing eigenfunctions $\varphi_i, \varphi_j, \ldots$ of $S$. Since we would like to demonstrate how repeated measurements see a collapsed wave function, we will assume that repeated measurement is possible, and thus that the states $\varphi_i$ of $S$ remain unchanged after observation. As we are working under the Everett interpretation, we will let ourselves define a joint system-observer wave function $\Psi$ with initial configuration
$$\Psi_0 = \psi\varphi = \psi \sum_i a_i \varphi_i$$
Then, our goal is to understand what happens to $\Psi$ when $O$ repeatedly observes $S$. Thus, we will define $\Psi_n$ to represent the state of $\Psi$ after $n \in \mathbb{N}$ independent observations of $S$ are performed by $O$.
Consider the simple case where $\varphi = \varphi_0$ and thus we are in initial state $\Psi_0 = \psi\varphi_0$. In this case, by our previous definition of $\psi_i$ and requirement that $\varphi_i$ remain unchanged, we can write the state after the observation as $\Psi_1 = \psi_0\varphi_0$. Since quantum mechanics is linear, and the eigenfunctions $\varphi_i$ are orthogonal, it must be that this same process occurs for each $\varphi_i$.

Thus, by the principle of superposition, we can write $\Psi_1$ in its general form as
$$\Psi_1 = \sum_i a_i \psi_i \varphi_i$$
For the next observation, each $\psi_i$ will once again see the same $\varphi_i$, since it has not changed state. As previously defined, we use the notation $\psi_{i,i}$ to denote the state of $O$ after observing $S$ in state $\varphi_i$ twice. Thus, we can write $\Psi_2$ as
$$\Psi_2 = \sum_i a_i \psi_{i,i} \varphi_i$$
and more generally, we can write $\Psi_n$ as
$$\Psi_n = \sum_i a_i \psi_{i,i,\ldots,i} \varphi_i$$
where $i$ is repeated $n$ times in $i,i,\ldots,i$.
Thus, once a measurement of $S$ has been performed, every subsequent measurement will see the same eigenfunction, even though all eigenfunctions continue to exist. We can see this from the fact that the same $i$ is repeated in each state $\psi_{i,i,\ldots,i}$ of $O$. In this way, we see how, despite the fact that the original wave function $\varphi = \sum_i a_i \varphi_i$ for $S$ is in a superposition of many eigenfunctions, once a measurement has been performed, each subsequent measurement will always see the same eigenfunction.

Note that there is no longer a single, independent state $\psi$ of $O$. Instead, there are many $\psi_{i,i,\ldots,i}$, one for each eigenfunction. What does that mean? It means that for every eigenfunction $\varphi_i$ of $S$, there is a corresponding state $\psi_{i,i,\ldots,i}$ of $O$ wherein $O$ sees that eigenfunction. Thus, one is required to accept that there are many observers $O_i$, with corresponding states $\psi_{i,i,\ldots,i}$, each one seeing a different eigenfunction $\varphi_i$. This is the origin of the Everett interpretation's "multiple worlds."

From the perspective of each $O_i$ in this scenario, it will appear as if $\varphi$ has "collapsed" from a complex superposition $\sum_i a_i \varphi_i$ into a single eigenfunction $\varphi_i$. As we can see from the joint wave function, however, that is not the case—in fact, the entire superposition still exists. What has changed is only that $\psi$, the state of $O$, is no longer independent of that superposition, and has instead become entangled with it.
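The bookkeeping of this derivation can be sketched in a few lines of Python. This is an illustrative branch-tracking model, not a simulation of the full dynamics: each branch carries its amplitude $a_i$, the observer's record, and the system's eigenstate index.

```python
import numpy as np

# Each branch is (amplitude a_i, observer record (i, i, ...), eigenstate i).
branches = [(1 / np.sqrt(2), (), 0),
            (1 / np.sqrt(2), (), 1)]

def observe(branches):
    # Unitary measurement interaction: within each branch, the observer's
    # record picks up that branch's eigenstate; amplitudes are untouched.
    return [(a, record + (i,), i) for a, record, i in branches]

for _ in range(3):
    branches = observe(branches)

for a, record, i in branches:
    print(record)                         # (0, 0, 0) and (1, 1, 1)
    assert all(j == i for j in record)    # each branch re-sees its own result
```

No branch ever records a mixed sequence like (0, 1, 0): that is the apparent "collapse", even though both branches persist with unchanged amplitudes.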
The Apparent Randomness of Measurement
Suppose we now have many such systems $S$, which we will denote $S_n$ where $n \in \mathbb{N}$. Consider $O$ from before, but with the modification that instead of repeatedly observing a single $S$, $O$ observes a different $S_n$ in each measurement, such that $\Psi_n$ is the joint system-observer wave function after measuring the $n$th system $S_n$.
As before, we will define the initial joint wave function $\Psi_0$ as
$$\Psi_0 = \psi \sum_{i_1, i_2, \ldots, i_n} a_{i_1, i_2, \ldots, i_n}\, \varphi_{i_1}(x_1)\, \varphi_{i_2}(x_2) \cdots \varphi_{i_n}(x_n)$$
where we are summing over all possible combinations of eigenfunctions for the different systems $S_n$, with arbitrary coefficients $a_{i_1, i_2, \ldots, i_n}$ for each combination.

Then, as before, we can use the principle of superposition to find $\Psi_1$ as
$$\Psi_1 = \sum_{i_1, i_2, \ldots, i_n} \psi_{i_1}\, a_{i_1, i_2, \ldots, i_n}\, \varphi_{i_1}(x_1)\, \varphi_{i_2}(x_2) \cdots \varphi_{i_n}(x_n)$$
since the first measurement will see the state $\varphi_{i_1}$ of $S_1$. More generally, we can write $\Psi_n$ as
$$\Psi_n = \sum_{i_1, i_2, \ldots, i_n} \psi_{i_1, i_2, \ldots, i_n}\, a_{i_1, i_2, \ldots, i_n}\, \varphi_{i_1}(x_1)\, \varphi_{i_2}(x_2) \cdots \varphi_{i_n}(x_n)$$
following the same principle, as each measurement of an $S_n$ will see the corresponding state $\varphi_{i_n}$.
Thus, when subsequent measurements of identical systems $S_n$ are performed, the resulting sequence of eigenfunctions observed by $O$ in each $\psi$ appears random (according to what distribution, we will show in the next subsection), since there is no structure to the sequences $i_1, i_2, \ldots, i_n$. This appearance of randomness holds even though the entire process is completely deterministic. If, alternatively, $O$ were to return to a previously-measured $S_n$, we would get a repeat of the first analysis, wherein $O$ would always see the same state as was previously measured.
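The branch structure after $n$ such measurements can be enumerated directly. A toy sketch (illustrative, with two-state systems and made-up amplitudes): one branch per sequence $i_1, \ldots, i_n$, each with amplitude $a_{i_1} \cdots a_{i_n}$.

```python
import itertools
import numpy as np

# n fresh two-state systems, each with the same amplitudes (a0, a1).
a = (np.sqrt(0.3), np.sqrt(0.7))
n = 4

# Enumerate the branches of Psi_n: one per sequence (i1, ..., in),
# with amplitude a_{i1} * ... * a_{in}.
branches = {seq: np.prod([a[i] for i in seq])
            for seq in itertools.product([0, 1], repeat=n)}

# Every possible observation sequence appears as some observer's record...
print(len(branches))  # 2**4 = 16
# ...and the squared amplitudes still sum to 1 (the evolution is unitary).
print(sum(amp ** 2 for amp in branches.values()))  # 1.0
```

Every sequence, structured or not, occurs in some branch; the Born-rule weights (derived next) are what make the typical observer's record look like i.i.d. samples.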
The Born Probability Rule
As before, consider a system $S$ in state $\sum_i a_i \varphi_i$. To be able to talk about a probability for an observer $O$ to see state $\varphi_i$, we need some function $P(a_i)$ that will serve as a measure of that probability.

Since we know that quantum mechanics is invariant up to an overall phase, we will impose the condition on $P$ that it must satisfy the equation
$$P(a_i) = P\left(\sqrt{a_i^* a_i}\right) = P(|a_i|)$$
Furthermore, by the linearity of quantum mechanics, we will impose the condition on $P$ that, for $a\varphi$ defined as $a\varphi = \sum_i a_i \varphi_i$, $P$ must satisfy the equation
$$P(a) = \sum_i P(a_i)$$
Together, these two conditions fully specify what function $P$ must be. Assuming $\varphi$ is normalized, such that $\sum_i \varphi_i^* \varphi_i = 1$, it must be that
$$a^* a = \sum_i a_i^* a_i$$
or equivalently
$$|a| = \sqrt{\sum_i |a_i|^2}$$
such that
$$P(|a|) = P\left(\sqrt{\sum_i |a_i|^2}\right)$$
which, using the phase invariance condition that $P(|a|) = P(a)$, gives
$$P(a) = P\left(\sqrt{\sum_i |a_i|^2}\right)$$
Then, from the linearity condition, we have $P(a) = \sum_i P(a_i)$, which, by the phase invariance condition, is equivalent to
$$P(a) = \sum_i P\left(\sqrt{|a_i|^2}\right)$$
Putting it all together, we get
$$P(a) = P\left(\sqrt{\sum_i |a_i|^2}\right) = \sum_i P\left(\sqrt{|a_i|^2}\right)$$
Then, defining a new function $g(x) = P(\sqrt{x})$ yields
$$g\left(\sum_i |a_i|^2\right) = \sum_i g(|a_i|^2)$$
which implies (given mild regularity assumptions on $P$, such as continuity) that $g$ must be a linear function, such that for some constant $c$,
$$g(x) = cx$$
Therefore, since $P(x) = g(x^2)$, we have
$$P(x) = cx^2$$
which, imposing the phase invariance condition, becomes
$$P(x) = c|x|^2$$
which, with $c$ normalized to $1$, is the Born rule.
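As a quick numerical sanity check (not part of the derivation), we can verify that $P(x) = |x|^2$ does satisfy both imposed conditions for a randomly chosen normalized amplitude vector:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random complex amplitudes a_i, normalized so the total amplitude |a| = 1.
a = rng.normal(size=5) + 1j * rng.normal(size=5)
a = a / np.linalg.norm(a)

P = lambda x: abs(x) ** 2  # the Born rule measure

# Phase invariance: P(a_i) = P(|a_i|) for every component.
assert all(np.isclose(P(ai), P(abs(ai))) for ai in a)

# Additivity over orthogonal components: P(|a|) = sum_i P(a_i).
assert np.isclose(P(np.linalg.norm(a)), sum(P(ai) for ai in a))
```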
The fact that this measure is a probability, beyond that it is the only measure that can be, is deserving of further proof. The concept of probability is notoriously hard to define, however, and without a definition of probability, it is just as meaningful to call P something as arbitrary as the “stallion” of the wave function as the “probability.”[2] Nevertheless, for nearly every reasonable probability theory that exists, such proofs have been provided. Everett provided a proof based on the standard frequentist definition of probability[7][8], David Deutsch (Oxford theoretical physicist) has provided a proof based on game theory[18], and David Wallace (USC theoretical physicist) has provided a proof based on decision theory[11]. For any reasonable definition of probability, wave mechanics is able to show that the above measure satisfies it in the limit without any additional postulates.[19][14][20]
Arguments For and Against the Everett Interpretation
David Deutsch, 1996
Falsifiability and Empiricism
Perhaps the most common criticism of the Everett interpretation is the claim that it is not falsifiable, and thus falls outside the realm of empirical science.[22] In fact, this claim is simply not true—many different methods for testing the Everett interpretation have been proposed, and a great deal of empirical data regarding the Everett interpretation is already available.
One such method we have already discussed: the Everett interpretation removes the Copenhagen interpretation’s postulate that the wave function must collapse at a particular length scale. Were it ever to be conclusively demonstrated that superposition was impossible past some point, the Everett interpretation would be disproved. Thus, every demonstration performed of superposition at larger and larger length scales—such as for Carbon 60 as was previously mentioned[15]—is a test of the Everett interpretation. Arguably, it is the Copenhagen interpretation which is unfalsifiable, since it makes no claim about where the boundary lies at which wave function collapse occurs, and thus proponents can respond to the evidence of larger superpositions simply by changing their theory and moving their proposed boundary up.
Another method of falsification regards the interaction between the Everett interpretation and quantum gravity. The Everett interpretation makes a definitive prediction that gravity must be quantized. Were gravity not quantized—not wrapped up in the wave function like all the other forces—and instead simply a background metric for the entire wave function, we would be able to detect the gravitational impact of the other states we were in a superposition with.[10][23] In 1957, Richard Feynman, who would later come to explicitly support the Everett interpretation[16] as well as become a Nobel laureate, presented an early version of the above argument as a reason to believe in quantum gravity, arguing, “There is a bare possibility (which I shouldn’t mention!) that quantum mechanics fails and becomes classical again when the amplification gets far enough [but] if you believe in quantum mechanics up to any level then you have to believe in gravitational quantization.”[24]
Another proposal concerns differing probabilities of finding ourselves in the universe we are in depending on whether the Everett interpretation holds or not. If the Everett interpretation is false, and the universe only has a single state, there is only one state for us to find ourselves in, and thus we would expect to find ourselves in an approximately random universe. On the other hand, if the Everett interpretation is true, and there are many different states that the universe is in, we could find ourselves in any of them, and thus we would expect to find ourselves in one which was more disposed than average towards the existence of life. Approximate calculations of the relative probability of the observed universe based on the Hartle-Hawking boundary condition strongly support the Everett interpretation.[10]
Finally, as we made a point of being clear about in the "The Everett Interpretation of Quantum Mechanics" section, the Everett interpretation is simply a consequence of taking the wave function seriously as a physical entity. Thus, it is somewhat unfair to ask the Everett interpretation to achieve falsifiability independently of the theory—quantum mechanics—which implies it.[22] If a new theory were proposed that said quantum mechanics stopped working outside of the future light cone of Earth, we would not treat it as a genuine physical controversy—we would say that, unless there is incredibly strong evidence otherwise, we should by default assume that the same laws of physics apply everywhere. The Everett interpretation is just that default—it is only by historical accident that it happened to be discovered after the Copenhagen interpretation. Thus, to the extent that one has confidence in the universal applicability of the principles of quantum mechanics, one should have equal confidence in the Everett interpretation, since it is a logical consequence. It is in fact all the more impressive—and a testament to its importance to quantum mechanics—that the Everett interpretation manages to achieve falsifiability and empirical support despite its primary virtue being simply the insistence that quantum mechanics be applied universally.
Simplicity
Another common objection to the Everett interpretation is that it “postulates too many universes,” which Sean Carroll, a Caltech cosmologist and supporter of the Everett interpretation, calls “the basic silly objection.”[25] At this point, it should be very clear why this objection is silly: the Everett interpretation postulates no such thing—the existence of “many universes” is an implication, not a postulate, of the theory. Opponents of the Everett interpretation, however, have accused it of a lack of simplicity on the grounds that adding in all those additional universes is unnecessary added complexity, and since by the principle of Occam’s razor the simplest explanation is probably correct, the Everett interpretation can be rejected.[26]
In fact, Occam’s razor is an incredibly strong argument in favor of the Everett interpretation. To explain this, we will first need to formalize what we mean by Occam’s razor, which will require some theoretical computer science. Specifically, we will make use of Solomonoff’s theory of inductive inference: the best, most general framework we have for comparing the probability of empirically indistinguishable physical theories.[27][28][29][3] To use Solomonoff’s formalism, only one assumption is required of us: that, under some encoding scheme, competing theories of the universe can be modeled as programs. This assumption does not imply that the universe must be computable, only that it can be computably described—a condition that any physical theory capable of being written down satisfies. From this assumption, and the axioms of probability theory, Solomonoff induction can be derived.[27]
Solomonoff induction tells us that, if we have a set of programs[4] $\{T_i\}$ which encode for empirically indistinguishable physical theories, the probability $P$ of the theory described by a given program $T_i$ with length in bits (0s and 1s) $|T_i|$ is given by

$$P(T_i) \sim 2^{-|T_i|}$$

up to a constant normalization factor calculated across all the $\{T_i\}$ to make the probabilities sum to 1.[27] We can see how this makes intuitive sense: since we are predicting an arbitrary system, and thus have no information about the correctness of a program implementing a theory other than its length in bits, we are forced to assign equal probability to each of the two options for each bit, 0 and 1, and thus each additional bit multiplies the total probability of the program by a factor of $\frac{1}{2}$. Furthermore, we can see how Solomonoff induction serves as a formalization of Occam's razor, since it gives us a way of calculating how much to discount longer, more complex theories in favor of shorter, simpler ones.
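To make this concrete, here is a minimal numerical sketch of the normalized Solomonoff prior. The theory names and bit-lengths below are purely hypothetical, chosen only to show how each extra bit halves a theory's probability:

```python
# Toy sketch of the Solomonoff prior: each program of length |T_i| bits gets
# weight 2^(-|T_i|), normalized so the probabilities sum to 1. The lengths
# used here are made up purely for illustration.

def solomonoff_prior(lengths):
    """Map program lengths (in bits) to normalized probabilities ~ 2^-|T_i|."""
    weights = {name: 2.0 ** -bits for name, bits in lengths.items()}
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Hypothetical theories: B's program is 10 bits longer than A's.
prior = solomonoff_prior({"A": 100, "B": 110})
ratio = prior["A"] / prior["B"]

print(prior)
print(ratio)  # 2^10 = 1024: ten extra bits cost three orders of magnitude
```

The normalization constant cancels in the ratio, which is why only the length difference between competing theories matters.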
Now, we will attempt to apply this formalism to assign probabilities to competing interpretations of quantum mechanics, which we will represent as elements of the set $\{T_i\}$. Let $W$ be the shortest program which computes the wave equation. Since the wave equation is a component of all quantum theories, it must be that $|W| \le |T_i|$. Thus, the smallest that any $T_i$ could possibly be is $|W|$, such that any $T_i$ of length $|W|$ is at least twice as probable as a $T_i$ of any other length. The Everett interpretation is such a $T_i$, since it requires nothing else beyond wave mechanics, and follows directly from it. Therefore, from the perspective of Solomonoff induction, the Everett interpretation is provably optimal in terms of program length, and thus also in terms of probability.
To get a sense of the magnitude of these effects, we will attempt to approximate how much less probable the Copenhagen interpretation is than the Everett interpretation. We will represent the Copenhagen interpretation $C$ as made of three parts: $W$, wave mechanics; $O$, a machine which determines when to collapse the wave function; and $L$, classical mechanics. Then, where the Everett interpretation $E$ is just $W$, we can write their relative probabilities as
$$\frac{P(C)}{P(E)} = \frac{2^{-|W|-|O|-|L|}}{2^{-|W|}} = 2^{-|O|-|L|}$$
How large are $O$ and $L$? As a quick Fermi estimate for $L$, we will take Newton’s three laws of motion, Einstein’s general relativistic field equation, and Maxwell’s four equations of electromagnetism as the principles of classical mechanics, for a total of 8 fundamental equations. Assume the minimal implementation of each one averages 100 bits—a very modest estimate, considering the smallest chess program ever written is 3896 bits long.[30] Then, the relative probability is at most
$$\frac{P(C)}{P(E)} = 2^{-|O|-|L|} < 2^{-|L|} \approx 2^{-800} \approx 2 \cdot 10^{-241}$$
which is about the probability of picking four random atoms in the universe and getting the same one each time, and is thus so small as to be trivially dismissible.
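As a sanity check on the magnitudes involved, both the bound and the atom-picking comparison can be computed directly, using the same rough 8-equations-at-100-bits assumption as the text:

```python
# Fermi estimate from the text: 8 classical equations at ~100 bits each,
# so |L| is roughly 800 bits, bounding P(C)/P(E) by 2^-800.
L_bits = 8 * 100
ratio_upper_bound = 2.0 ** -L_bits
print(ratio_upper_bound)        # ~1.5e-241

# Comparison: pick four random atoms from a universe of ~10^80 atoms and
# have all four come out the same. Three further draws must match the first.
atoms = 1e80
p_same_atom = (1 / atoms) ** 3
print(p_same_atom)              # ~1e-240, the same order of magnitude
```

The two numbers agree to within an order of magnitude, which is all a Fermi estimate of this kind is meant to establish.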
The Arrow of Time
Another objection to the Everett interpretation is that it is time-symmetric. Since the Everett interpretation is just the wave equation, its time symmetry follows from the fact that the Schrodinger equation is time-reversal invariant, or more technically, charge-parity-time-reversal (CPT) invariant. The Copenhagen interpretation, however, is not, since wave function collapse is a fundamentally irreversible event.[31] In fact, CPT symmetry is not the only natural property that wave function collapse lacks that the Schrodinger equation has—wave function collapse breaks linearity, unitarity, differentiability, locality, and determinism.[13][12][16][32] The Everett interpretation, by virtue of consisting of nothing but the Schrodinger equation, preserves all of these properties. This is an argument in favor of the Everett interpretation, since there are strong theoretical and empirical reasons to believe that such symmetries are properties of the universe.[33][34][35][5]
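The reversibility at stake here can be illustrated with a minimal numerical sketch: evolving a state under a unitary $U = e^{-iHt}$ and then applying $U^\dagger$ recovers the initial state exactly, something no irreversible collapse process permits. The two-level Hamiltonian below is an arbitrary choice made purely for illustration:

```python
import numpy as np

# Minimal sketch of the reversibility of unitary (Schrodinger) evolution.
# Toy Hamiltonian H = sigma_x; since sigma_x^2 = I, the propagator has the
# closed form exp(-i t sigma_x) = cos(t) I - i sin(t) sigma_x.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
t = 0.7
U = np.cos(t) * np.eye(2) - 1j * np.sin(t) * sx

psi0 = np.array([1, 0], dtype=complex)   # initial state |0>
psi_t = U @ psi0                         # forward evolution: a superposition
psi_back = U.conj().T @ psi_t            # apply U^dagger: run time backwards

print(np.allclose(psi_back, psi0))       # True: no information is lost
```

A collapse map, by contrast, sends many different initial states to the same post-measurement state, so no inverse map can exist—this is exactly the irreversibility the text describes.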
Nevertheless, as mentioned above, it has been argued that the Copenhagen interpretation’s breaking of CPT symmetry is actually a point in its favor, since it supposedly explains the arrow of time, the idea that time does not behave symmetrically in our everyday experience.[31] Unfortunately for the Copenhagen interpretation, wave function collapse does not actually imply any of the desired thermodynamic properties of the arrow of time.[31] Furthermore, under the Everett interpretation, the arrow of time can be explained using the standard thermodynamic explanation that the universe started in a very low-entropy state.[36]
In fact, accepting the Everett interpretation gets rid of the need for the current state of the universe to be dependent on subtle initial variations in that low-entropy state.[36] Instead, the current state of the universe is simply one of the many different components of the wave function that evolved deterministically from that initial state. Thus, the Everett interpretation is even simpler—from a Solomonoff perspective—than was shown in the "Simplicity" section, since it forgoes the need for its program to specify a complex initial condition for the universe with many subtle variations.
Other Interpretations of Quantum Mechanics
Decoherence
It is sometimes proposed that wave mechanics alone is sufficient to explain the apparent phenomenon of wave function collapse without the need for the Everett interpretation’s multiple worlds. The justification for this assertion is usually based on the idea of decoherence. Decoherence is the mathematical result, following from the wave equation, that tightly-interacting superpositions tend to evolve into non-interacting superpositions.[37][38] Importantly, decoherence does not destroy the superposition—it merely “diagonalizes” it, which is to say, it removes the interference terms.[37] After decoherence, one is always still left with a superposition of multiple states.[39][40] The only way to remove the resulting superposition is to assume wave function collapse, which every statistical theory claiming to do away with multiple worlds has been shown to implicitly assume.[41][19] There is no escaping the logic presented in the "The Apparent Collapse of The Wave Function" section—if one accepts the universal applicability of the wave function, one must accept the multiple worlds it implies.
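A toy density-matrix calculation makes the distinction vivid. Below, phase damping (an assumed, standard exponential model, not derived from any particular environment) suppresses the interference terms of an equal superposition while leaving the diagonal probabilities—the residual multiple branches—untouched:

```python
import numpy as np

# Toy model of decoherence acting on the equal superposition (|0> + |1>)/sqrt(2),
# whose density matrix has all entries equal to 0.5. Entanglement with the
# environment damps the off-diagonal interference terms by exp(-t/tau)
# (an assumed phase-damping form), leaving the diagonal untouched.
def decohere(rho, t, tau=1.0):
    damping = np.exp(-t / tau)
    out = rho.copy()
    out[0, 1] *= damping
    out[1, 0] *= damping
    return out

rho0 = np.full((2, 2), 0.5, dtype=complex)   # pure superposition state
rho_late = decohere(rho0, t=50.0)            # long after decoherence sets in

print(np.round(rho_late.real, 3))
# Diagonal entries remain 0.5 each: the superposition is not destroyed, it is
# merely "diagonalized" into two non-interfering branches.
```

Nothing in this evolution ever picks one branch over the other, which is precisely why decoherence alone cannot substitute for a collapse postulate.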
That is not to say that decoherence is not an incredibly valuable, useful concept for the interpretation of quantum mechanics, however. In the Everett interpretation, decoherence serves the very important role of ensuring that macroscopic superpositions—the multiple worlds of the Everett interpretation—are non-interacting, and that each one thus behaves approximately classically.[41][40] Thus, the simplest decoherence-based interpretation of quantum mechanics is in fact the Everett interpretation. From the Stanford Encyclopedia of Philosophy, “Decoherence as such does not provide a solution to the measurement problem, at least not unless it is combined with an appropriate interpretation of the theory [and it has been suggested that] decoherence is most naturally understood in terms of Everett-like interpretations.”[39] The discoverer of decoherence himself, German theoretical physicist Heinz-Dieter Zeh, is an ardent proponent of the Everett interpretation.[42][36]
Furthermore, we have given general arguments in favor of the existence of the multiple worlds implied by the Everett interpretation, which are all reasons to favor the Everett interpretation over any single-world theory. Specifically, calculations of the probability of the current state of the universe support the Everett interpretation[10], as does the fact that the Everett interpretation allows for the initial state of the universe to be simpler[36].
Consistent Histories
The consistent histories interpretation of quantum mechanics, owing primarily to Prof. Robert Griffiths, eschews probabilities over “measurement” in favor of probabilities over “histories,” which are defined as arbitrary sequences of events.[43] Consistent histories provides a way of formalizing what classical probabilistic questions make sense in a quantum domain and which do not—that is, which are consistent. Its explanation for why this consistency always appears at large length scales is based on the idea of decoherence, as discussed above.[43][44] In this context, consistent histories is a very useful tool for reasoning about probabilities in the context of quantum mechanics, and for providing yet another proof of the natural origin of the Born rule.
Proponents of consistent histories claim that it does not imply the multiple worlds of the Everett interpretation.[43] However, since the theory is based on decoherence, there are always multiple different consistent histories, which cannot be removed via any natural history selection criterion.[45][44] Thus, just as the wave equation implies the Everett interpretation, so too does consistent histories. To see this, consider Feynman’s observation, on which consistent histories rests, that the amplitude of any given final state can be calculated as the sum of the amplitudes along all the possible paths to that state.[44][46] Importantly, we know that two different histories—for example, the different branches of a Mach-Zehnder interferometer—can diverge and then later merge back together and interfere with each other. Thus, it is not in general possible to describe the state of the universe as a single history, since other, parallel histories can interfere and change how that state will later evolve. A history is great for describing how a state came to be, but not very useful for describing how it might evolve in the future. For that, including the other parallel histories—the full superposition—is necessary.
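The point can be made quantitative with a minimal amplitude calculation for a balanced Mach-Zehnder interferometer (using the standard 50/50 beam splitter convention). Summing the amplitudes of both histories predicts perfect interference at the output, which no single history can reproduce:

```python
import numpy as np

# Amplitude sketch of a balanced Mach-Zehnder interferometer: the first 50/50
# beam splitter puts the photon into a superposition of two paths (two
# histories), and the second recombines them. Summing amplitudes over *both*
# paths predicts perfect interference: every photon exits one port.
BS = (1 / np.sqrt(2)) * np.array([[1, 1j], [1j, 1]])  # 50/50 beam splitter

photon_in = np.array([1, 0], dtype=complex)   # photon enters through port 0
photon_out = BS @ BS @ photon_in              # split, then recombine
probs = np.abs(photon_out) ** 2

print(np.round(probs, 6))  # [0. 1.]: all photons exit port 1
# Tracking only a single history (one path taken with probability 1/2) would
# instead predict a 50/50 split at the output, contradicting observation.
```

The cross terms between the two paths are exactly what a single-history description throws away, and they are what cancel the amplitude at port 0.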
Once one accepts that the existence of multiple histories is necessary on a microscopic level, their existence on a macroscopic level follows—excluding them would require an extra postulate, which would make consistent histories equivalent to the Copenhagen interpretation. If such an extra postulate is not made, then the result is macroscopic superposition, which is to say, the Everett interpretation. This formulation of consistent histories without any extra postulates has been called the theory of “the universal path integral,” exactly mirroring Everett’s theory of the universal wave function.[46] The theory of the universal wave function—the Everett interpretation—is to the theory of the universal path integral as wave mechanics is to the sum-over-paths approach, which is to say that they are both equivalent formalisms with the same implications.
Pilot Wave Theory
The pilot wave interpretation, otherwise known as the de Broglie-Bohm interpretation, postulates that the wave function, rather than constituting physical reality on its own, is a background which “guides” otherwise classical particles.[47] As we saw with the Copenhagen interpretation, the obvious question to ask of the pilot wave interpretation is whether its extra postulate—in this case adding in classical particles—is necessary or useful in any way. The answer to this question is a definitive no. Heinz-Dieter Zeh says of the pilot wave interpretation, “Bohm’s pilot wave theory is successful only because it keeps Schrodinger’s (exact) wave mechanics unchanged, while the rest of it is observationally meaningless and solely based on classical prejudice.”[42] As we have previously shown in the "The Mathematics of the Everett Interpretation" section, wave mechanics is capable of solving all supposed problems of measurement without the need for any additional postulates. While it is true that pilot wave theory solves all these problems as well, it does so not by virtue of its classical add-ons, but simply by virtue of including the entirety of wave mechanics.[42][48]
Furthermore, since pilot wave theory has no collapse postulate, it does not even get rid of the existence of multiple worlds. If the universe computes the entirety of the wave function, including all of its multiple worlds, then all of the observers in those worlds should experience physical reality by virtue of being computed—it is not at all clear how the classical particles could have physical reality and the rest of the wave function not.[21][42] In the words of David Deutsch, “pilot-wave theories are parallel-universes theories in a state of chronic denial. This is no coincidence. Pilot-wave theories assume that the quantum formalism describes reality. The multiplicity of reality is a direct consequence of any such theory.”[21]
However, since the extra classical particles only exist in one of these worlds, the pilot wave interpretation also does not resolve the problem of the low likelihood of the observed state of the universe[10] or the complexity of the required initial condition[36]. Thus, the pilot wave interpretation, despite being strictly more complicated than the Everett interpretation—both in terms of its extra postulate and the concerns above—produces exactly no additional explanatory power. Therefore, we can safely dismiss the pilot wave interpretation on the grounds of the same simplicity argument used against the Copenhagen interpretation in the "Simplicity" section.
Conclusion
Harvard theoretical physicist Sidney Coleman uses the following parable from Wittgenstein as an analogy for the interpretation of quantum mechanics: “‘Tell me,’ Wittgenstein asked a friend, ‘why do people always say, it was natural for man to assume that the sun went round the Earth rather than that the Earth was rotating?’ His friend replied, ‘Well, obviously because it just looks as though the Sun is going round the Earth.’ Wittgenstein replied, ‘Well, what would it have looked like if it had looked as though the Earth was rotating?’”[49] Of course, the answer is that it would have looked exactly as it actually does! To our fallible human intuition, it seems as if we are seeing the Sun rotating around the Earth, despite the fact that what we are actually seeing is a heliocentric solar system. Similarly, it seems as if we are seeing the wave function randomly collapsing around us, despite the fact that this phenomenon is entirely explained just from the wave equation, which we already know empirically is a law of nature.
It is perhaps unfortunate that the Everett interpretation ended up implying the existence of multiple worlds, since this fact has led to many incorrectly viewing the Everett interpretation as a fanciful theory of alternative realities, rather than the best, simplest theory we have as of yet for explaining measurement in quantum mechanics. The Everett interpretation’s greatest virtue is the fact that it is barely even an interpretation of quantum mechanics, holding as its most fundamental principle that the wave equation can interpret itself. In the words of David Wallace: “If I were to pick one theme as central to the tangled development of the Everett interpretation of quantum mechanics, it would probably be: the formalism is to be left alone. What distinguished Everett’s original paper both from the Dirac-von Neumann collapse-of-the-wavefunction orthodoxy and from contemporary rivals such as the de Broglie-Bohm theory was its insistence that unitary quantum mechanics need not be supplemented in any way (whether by hidden variables, by new dynamical processes, or whatever).”[11]
There is a tendency among many physicists to describe the Everett interpretation simply as one possible answer to the measurement problem. It should hopefully be clear at this point why that view should be rejected—the Everett interpretation is not just yet another solution to the measurement problem, but rather a straightforward consequence of quantum mechanics itself, one that shows the measurement problem should never have been a problem in the first place. Without the Everett interpretation, one is forced to needlessly introduce complex, symmetry-breaking, empirically unjustifiable postulates—either wave function collapse or pilot wave theory—just to explain what was already explicable under basic wave mechanics. The Everett interpretation is not just another possible way of interpreting quantum mechanics, but a necessary component of any quantum theory that wishes to explain the phenomenon of measurement in a natural way. In the words of John Wheeler, Everett’s thesis advisor, “No escape seems possible from [Everett's] relative state formulation if one wants to have a complete mathematical model for the quantum mechanics that is internal to an isolated system. Apart from Everett’s concept of relative states, no self-consistent system of ideas [fully explains the universe].”[6]
References
[1] Heisenberg, W. (1927). The actual content of quantum theoretical kinematics and mechanics. Zeitschrift für Physik.
[2] Anon. (1927). The Solvay Conference, probably the most intelligent picture ever taken.
[3] Einstein, A., Podolsky, B. and Rosen, N. (1935). Can quantum-mechanical description of physical reality be considered complete? Physical Review.
[4] Greenberger, D. M. (1990). Bell’s theorem without inequalities. American Journal of Physics.
[5] Townsend, J. (2010). Quantum physics: A fundamental approach to modern physics. University Science Books.
[6] Wheeler, J. A. (1957). Assessment of Everett’s “relative state” formulation of quantum theory. Reviews of Modern Physics.
[7] Everett, H. (1957). The theory of the universal wave function. Princeton University Press.
[8] Everett, H. (1957). “Relative state” formulation of quantum mechanics. Reviews of Modern Physics.
[9] DeWitt, B. S. (1970). Quantum mechanics and reality. Physics Today.
[10] Barrau, A. (2015). Testing the Everett interpretation of quantum mechanics with cosmology.
[11] Wallace, D. (2007). Quantum probability from subjective likelihood: Improving on Deutsch’s proof of the probability rule. Studies in History and Philosophy of Science.
[12] Saunders, S., Barrett, J., Kent, A. and Wallace, D. (2010). Many worlds?: Everett, quantum theory, & reality. Oxford University Press.
[13] Wallace, D. (2014). The emergent multiverse. Oxford University Press.
[14] Wallace, D. (2006). Epistemology quantized: Circumstances in which we should come to believe in the Everett interpretation. The British Journal for the Philosophy of Science.
[15] Arndt, M., Nairz, O., Vos-Andreae, J., Keller, C., van der Zouw, G. and Zeilinger, A. (1999). Wave-particle duality of C60 molecules. Nature.
[16] Price, M. C. (1995). The Everett FAQ.
[17] Hawking, S. W. (1975). Black holes and thermodynamics. Physical Review D.
[18] Deutsch, D. (1999). Quantum theory of probability and decisions. Proceedings of the Royal Society of London.
[19] Wallace, D. (2003). Everettian rationality: Defending Deutsch’s approach to probability in the Everett interpretation. Studies in History and Philosophy of Science.
[20] Clark, C. (2010). A theoretical introduction to wave mechanics.
[21] Deutsch, D. (1996). Comment on Lockwood. The British Journal for the Philosophy of Science.
[22] Carroll, S. (2015). The wrong objections to the many-worlds interpretation of quantum mechanics.
[23] Hartle, J. B. (2014). Spacetime quantum mechanics and the quantum mechanics of spacetime.
[24] Zeh, H. D. (2011). Feynman’s interpretation of quantum theory. The European Physical Journal.
[25] Carroll, S. (2014). Why the many-worlds formulation of quantum mechanics is probably correct.
[26] Rae, A. I. M. (2009). Everett and the Born rule. Studies in History and Philosophy of Science.
[27] Solomonoff, R. J. (1960). A preliminary report on a general theory of inductive inference.
[28] Soklakov, A. N. (2001). Occam’s razor as a formal basis for a physical theory.
[29] Altair, A. (2012). An intuitive explanation of Solomonoff induction.
[30] Kelion, L. (2015). Coder creates smallest chess game for computers.
[31] Bitbol, M. (1988). The concept of measurement and time symmetry in quantum mechanics. Philosophy of Science.
[32] Yudkowsky, E. (2008). The quantum physics sequence: Collapse postulates.
[33] Ellis, J. and Hagelin, J. S. (1984). Search for violations of quantum mechanics. Nuclear Physics.
[34] Ellis, J., Lopez, J. L., Mavromatos, N. E. and Nanopoulos, D. V. (1996). Precision tests of CPT symmetry and quantum mechanics in the neutral-kaon system. Physical Review D.
[35] Agrawal, M. (2003). Linearity in quantum mechanics.
[36] Zeh, H. D. (1988). Measurement in Bohm’s versus Everett’s quantum theory. Foundations of Physics.
[37] Zurek, W. H. (2002). Decoherence and the transition from quantum to classical—revisited. Los Alamos Science.
[38] Schlosshauer, M. (2005). Decoherence, the measurement problem, and interpretations of quantum mechanics.
[39] Bacciagaluppi, G. (2012). The role of decoherence in quantum mechanics. Stanford Encyclopedia of Philosophy.
[40] Wallace, D. (2003). Everett and structure. Studies in History and Philosophy of Science.
[41] Zeh, H. D. (1970). On the interpretation of measurement in quantum theory. Foundations of Physics.
[42] Zeh, H. D. (1999). Why Bohm’s quantum theory? Foundations of Physics Letters.
[43] Griffiths, R. B. (1984). Consistent histories and the interpretation of quantum mechanics. Journal of Statistical Physics.
[44] Gell-Mann, M. and Hartle, J. B. (1989). Quantum mechanics in the light of quantum cosmology. Int. Symp. Foundations of Quantum Mechanics.
[45] Wallden, P. (2014). Contrary inferences in consistent histories and a set selection criterion.
[46] Lloyd, S. and Dreyer, O. (2015). The universal path integral. Quantum Information Processing.
[47] Bohm, D. J. and Hiley, B. J. (1982). The de Broglie pilot wave theory and the further development of new insights arising out of it. Foundations of Physics.
[48] Brown, H. R. and Wallace, D. (2005). Solving the measurement problem: de Broglie-Bohm loses out to Everett. Foundations of Physics.
[49] Coleman, S. (1994). Quantum mechanics in your face.
The relativistic variant, to be precise. ↩︎
Fun fact: this paper was part of a paper contest that all undergraduate physics students at Harvey Mudd College participate in (which this paper won) for which there's a longstanding tradition (perpetuated by the students) that each student get a random word and be challenged to include it in their paper. My word was “stallion.” ↩︎
In some of these sources, the equivalent formalism of Kolmogorov complexity is used instead. ↩︎
To be precise, these should be universal Turing machine programs. ↩︎