
Many Worlds, One Best Guess

Post author: Eliezer_Yudkowsky, 11 May 2008 08:32AM

Previously in series: Collapse Postulates
Followup to: Bell's Theorem, Spooky Action at a Distance, Quantum Non-Realism, Decoherence is Simple, Falsifiable and Testable

If you look at many microscopic physical phenomena—a photon, an electron, a hydrogen atom, a laser—and a million other known experimental setups—it is possible to come up with simple laws that seem to govern all small things (so long as you don't ask about gravity).  These laws govern the evolution of a highly abstract and mathematical object that I've been calling the "amplitude distribution", but which is more widely referred to as the "wavefunction".

Now there are gruesome questions about the proper generalization that covers all these tiny cases.  Call an object 'grue' if it appears green before January 1, 2020 and appears blue thereafter.  If all emeralds examined so far have appeared green, is the proper generalization, "Emeralds are green" or "Emeralds are grue"?

The answer is that the proper generalization is "Emeralds are green".  I'm not going to go into the arguments at the moment.  It is not the subject of this post, and the obvious answer in this case happens to be correct.  The true Way is not stupid: however clever you may be with your logic, it should finally arrive at the right answer rather than a wrong one.

In a similar sense, the simplest generalizations that would cover observed microscopic phenomena alone, take the form of "All electrons have spin 1/2" and not "All electrons have spin 1/2 before January 1, 2020" or "All electrons have spin 1/2 unless they are part of an entangled system that weighs more than 1 gram."

When we turn our attention to macroscopic phenomena, our sight is obscured.  We cannot experiment on the wavefunction of a human in the way that we can experiment on the wavefunction of a hydrogen atom.  In no case can you actually read off the wavefunction with a little quantum scanner.  But in the case of, say, a human, the size of the entire organism defeats our ability to perform precise calculations or precise experiments—we cannot confirm that the quantum equations are being obeyed in precise detail.

We know that phenomena commonly thought of as "quantum" do not just disappear when many microscopic objects are aggregated.  Lasers put out a flood of coherent photons, rather than, say, doing something completely different.  Atoms have the chemical characteristics that quantum theory says they should, enabling them to aggregate into the stable molecules making up a human.

So in one sense, we have a great deal of evidence that quantum laws are aggregating to the macroscopic level without too much difference.  Bulk chemistry still works.

But we cannot directly verify that the particles making up a human have an aggregate wavefunction that behaves exactly the way the simplest quantum laws say.  Oh, we know that molecules and atoms don't disintegrate, we know that macroscopic mirrors still reflect from the middle.  We can get many high-level predictions from the assumption that the microscopic and the macroscopic are governed by the same laws, and every prediction tested has come true.

But if someone were to claim that the macroscopic quantum picture differs from the microscopic one in some as-yet-untestable detail—something that only shows up at the unmeasurable 20th decimal place of microscopic interactions, but aggregates into something bigger for macroscopic interactions—well, we can't prove they're wrong.  It is Occam's Razor that says, "There are zillions of new fundamental laws you could postulate in the 20th decimal place; why are you even thinking about this one?"

If we calculate using the simplest laws which govern all known cases, we find that humans end up in states of quantum superposition, just like photons in a superposition of reflecting from and passing through a half-silvered mirror.  In the Schrödinger's Cat setup, an unstable atom goes into a superposition of disintegrating, and not-disintegrating.  A sensor, tuned to the atom, goes into a superposition of triggering and not-triggering.  (Actually, the superposition is now a joint state of [atom-disintegrated * sensor-triggered] + [atom-stable * sensor-not-triggered].)  A charge of explosives, hooked up to the sensor, goes into a superposition of exploding and not exploding; a cat in the box goes into a superposition of being dead and alive; and a human, looking inside the box, goes into a superposition of throwing up and being calm.  The same law at all levels.
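For readers who want the arithmetic, the chained superposition can be sketched as a toy two-state computation (a drastically simplified sketch: the atom and sensor are each reduced to two basis states, and decoherence into the environment is omitted):

```python
from math import sqrt

# Basis ordering for the joint (atom, sensor) state:
#   index 0: stable  * not-triggered
#   index 1: stable  * triggered
#   index 2: decayed * not-triggered
#   index 3: decayed * triggered
atom = [1 / sqrt(2), 1 / sqrt(2)]  # equal superposition of stable / decayed
sensor = [1.0, 0.0]                # sensor starts in "not-triggered"

# The joint state is the tensor product of the two subsystems.
joint = [a * s for a in atom for s in sensor]

def interact(state):
    """The sensor 'measures' the atom: flip the sensor exactly in the
    decayed subspace (a CNOT-like unitary)."""
    out = list(state)
    out[2], out[3] = state[3], state[2]
    return out

entangled = interact(joint)
# Only [stable * not-triggered] and [decayed * triggered] survive:
# the superposition now lives in the joint state, not in either part alone.
print(entangled)
```

Each further link in the chain—explosives, cat, human—just extends the same tensor product; no step in the calculation ever invokes a different law for big things.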

Human beings who interact with superposed systems will themselves evolve into superpositions.  But the brain that sees the exploded cat, and the brain that sees the living cat, will have many neurons firing differently, and hence many many particles in different positions.  They are very distant in the configuration space, and will communicate to an exponentially infinitesimal degree.  Not the 30th decimal place, but the 10^30th decimal place.  No particular mind, no particular cognitive causal process, sees a blurry superposition of cats.

The fact that "you" only seem to see the cat alive, or the cat dead, is exactly what the simplest quantum laws predict.  So we have no reason to believe, from our experience so far, that the quantum laws are in any way different at the macroscopic level than the microscopic level.

And physicists have verified superposition at steadily larger levels. Apparently an effort is currently underway to test superposition in a 50-micron object, larger than most neurons.

The existence of other versions of ourselves, and indeed other Earths, is not supposed additionally.  We are simply supposing that the same laws govern at all levels, having no reason to suppose differently, and all experimental tests having succeeded so far.  The existence of other decoherent Earths is a logical consequence of the simplest generalization that fits all known facts.  If you think that Occam's Razor says that the other worlds are "unnecessary entities" being multiplied, then you should check the probability-theoretic math; that is just not how Occam's Razor works.

Yet there is one particular puzzle that seems odd, in trying to extend microscopic laws universally, including to superposed humans:

If we try to get probabilities by counting the number of distinct observers, then there is no obvious reason why the integrated squared modulus of the wavefunction should correlate with statistical experimental results.  There is no known reason for the Born probabilities, and it even seems that, a priori, we would expect a 50/50 probability of any binary quantum experiment going both ways, if we just counted observers.
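The conflict can be made numerically explicit (a toy sketch with arbitrarily chosen amplitudes, not from the original text):

```python
# A toy binary quantum experiment with arbitrarily chosen, unequal amplitudes.
amp_up = (1 / 3) ** 0.5    # amplitude of the "up" branch
amp_down = (2 / 3) ** 0.5  # amplitude of the "down" branch

# Born rule: probability = squared modulus of the amplitude.
born_up = abs(amp_up) ** 2      # 1/3
born_down = abs(amp_down) ** 2  # 2/3

# Naive observer-counting: one observer per branch, so each gets 1/2,
# regardless of the amplitudes.
count_up = count_down = 1 / 2

print(born_up, born_down)    # what experimental statistics actually show
print(count_up, count_down)  # what counting observers would predict
```

Experiment sides with the squared modulus, and the puzzle is why amplitude-weight, rather than branch-count, is what shows up in our statistics.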

Robin Hanson suggests that if exponentially tinier-than-average decoherent blobs of amplitude ("worlds") are interfered with by exponentially tiny leakages from larger blobs, we will get the Born probabilities back out.  I consider this an interesting possibility, because it is so normal.

(I myself have had recent thoughts along a different track:  If I try to count observers the obvious way, I get strange-seeming results in general, not just in the case of quantum physics.  If, for example, I split my brain into a trillion similar parts, conditional on winning the lottery while anesthetized; allow my selves to wake up and perhaps differ to small degrees from each other; and then merge them all into one self again; then counting observers the obvious way says I should be able to make myself win the lottery (if I can split my brain and merge it, as an uploaded mind might be able to do).

In this connection, I find it very interesting that the Born rule does not have a split-remerge problem.  Given unitary quantum physics, Born's rule is the unique rule that prevents "observers" from having psychic powers—which doesn't explain Born's rule, but is certainly an interesting fact.  Given Born's rule, even splitting and remerging worlds would still lead to consistent probabilities.  Maybe physics uses better anthropics than I do!

Perhaps I should take my cues from physics, instead of trying to reason it out a priori, and see where that leads me?  But I have not been led anywhere yet, so this is hardly an "answer".)
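The split-remerge anomaly described in the parenthetical can be made concrete with a toy calculation (the specific numbers here are illustrative, not from the original text):

```python
# Toy quantum lottery: win with (Born) probability one in a million.
p_win = 1e-6
n_split = 10 ** 12  # split the winning self into a trillion similar copies

# Counting observers the obvious way: after the split, almost all of the
# equally-weighted observer-moments are winners.
winners = p_win * n_split
losers = (1 - p_win) * 1
naive_p = winners / (winners + losers)

# Born weighting: splitting a branch divides its squared-amplitude weight
# among the copies, so the total weight on "win" is unchanged.
born_p = p_win

print(naive_p)  # close to 1: the "psychic" lottery trick
print(born_p)   # still one in a million
```

Under naive counting, splitting conditional on a win buys you the lottery; under Born weighting, no amount of splitting and remerging moves the probability at all.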

Wallace, Deutsch, and others try to derive Born's Rule from decision theory.  I am rather suspicious of this, because it seems like there is a component of "What happens to me?" that I cannot alter by modifying my utility function.  Even if I didn't care at all about worlds where I didn't win a quantum lottery, it still seems to me that there is a sense in which I would "mostly" wake up in a world where I didn't win the lottery.  It is this that I think needs explaining.

The point is that many hypotheses about the Born probabilities have been proposed.  Not as many as there should be, because the mystery was falsely marked "solved" for a long time.  But still, there have been many proposals.

There is legitimate hope of a solution to the Born puzzle without new fundamental laws.  Your world does not split into exactly two new subprocesses on the exact occasion when you see "ABSORBED" or "TRANSMITTED" on the LCD screen of a photon sensor.  We are constantly being superposed and decohered, all the time, sometimes along continuous dimensions—though brains are digital and involve whole neurons firing, and fire/not-fire would be an extremely decoherent state even of a single neuron...  There would seem to be room for something unexpected to account for the Born statistics—a better understanding of the anthropic weight of observers, or a better understanding of the brain's superpositions—without new fundamentals.

We cannot rule out, though, the possibility that a new fundamental law is involved in the Born statistics.

As Jess Riedel puts it:

If there's one lesson we can take from the history of physics, it's that every time new experimental "regimes" are probed (e.g. large velocities, small sizes, large mass densities, large energies), phenomena are observed which lead to new theories (special relativity, quantum mechanics, general relativity, and the standard model, respectively).

"Every time" is too strong.  A nitpick, yes, but also an important point: you can't just assume that any particular law will fail in a new regime.  But it's possible that a new fundamental law is involved in the Born statistics, and that this law manifests only in the 20th decimal place at microscopic levels (hence being undetectable so far) while aggregating to have substantial effects at macroscopic levels.

Could there be some law, as yet undiscovered, that causes there to be only one world?

This is a shocking notion; it implies that all our twins in the other worlds—all the different versions of ourselves that are constantly split off, not just by human researchers doing quantum measurements, but by ordinary entropic processes—are actually gone, leaving us alone!  This version of Earth would be the only version that exists in local space!  If the inflationary scenario in cosmology turns out to be wrong, and the topology of the universe is both finite and relatively small—so that Earth does not have the distant duplicates that would be implied by an exponentially vast universe—then this Earth could be the only Earth that exists anywhere, a rather unnerving thought!

But it is dangerous to focus too much on specific hypotheses that you have no specific reason to think about.  This is the same root error of the Intelligent Design folk, who pick any random puzzle in modern genetics, and say, "See, God must have done it!"  Why 'God', rather than a zillion other possible explanations?—which you would have thought of long before you postulated divine intervention, if not for the fact that you secretly started out already knowing the answer you wanted to find.

You shouldn't even ask, "Might there only be one world?" but instead just go ahead and do physics, and raise that particular issue only if new evidence demands it.

Could there be some as-yet-unknown fundamental law, that gives the universe a privileged center, which happens to coincide with Earth—thus proving that Copernicus was wrong all along, and the Bible right?

Asking that particular question—rather than a zillion other questions in which the center of the universe is Proxima Centauri, or the universe turns out to have a favorite pizza topping and it is pepperoni—betrays your hidden agenda.  And though an unenlightened one might not realize it, giving the universe a privileged center that follows Earth around through space would be rather difficult to do with any mathematically simple fundamental law.

So too with asking whether there might be only one world.  It betrays a sentimental attachment to human intuitions already proven wrong.  The wheel of science turns, but it doesn't turn backward.

We have specific reasons to be highly suspicious of the notion of only one world.  The notion of "one world" exists on a higher level of organization, like the location of Earth in space; on the quantum level there are no firm boundaries (though brains that differ by entire neurons firing are certainly decoherent).  How would a fundamental physical law identify one high-level world?

Much worse, any physical scenario in which there was a single surviving world, so that any measurement had only a single outcome, would violate Special Relativity.

If the same laws are true at all levels—i.e., if many-worlds is correct—then when you measure one of a pair of entangled polarized photons, you end up in a world in which the photon is polarized, say, up-down, and alternate versions of you end up in worlds where the photon is polarized left-right.  From your perspective before doing the measurement, the probabilities are 50/50.  Light-years away, someone measures the other photon at a 20° angle to your own basis.  From their perspective, too, the probability of getting either immediate result is 50/50—they maintain an invariant state of generalized entanglement with your faraway location, no matter what you do.  But when the two of you meet, years later, your probability of meeting a friend who got the same result is 11.6%, rather than 50%.

If there is only one global world, then there is only a single outcome of any quantum measurement.  Either you measure the photon polarized up-down, or left-right, but not both.  Light-years away, someone else's probability of measuring the photon polarized similarly in a 20° rotated basis, actually changes from 50/50 to 11.6%.
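The 11.6% figure is not arbitrary: for anticorrelated polarization-entangled photons measured in bases rotated by theta relative to each other, quantum mechanics predicts P(same result) = sin^2(theta), which at 20 degrees comes out to roughly 0.117 (a standard textbook calculation, sketched here, not anything specific to this post's setup):

```python
from math import sin, radians

# Anticorrelated polarization-entangled photons, measured in bases
# rotated by theta relative to each other:
#   P(same result) = sin^2(theta)
theta = radians(20)
p_same = sin(theta) ** 2

print(p_same)  # about 0.117, the roughly-11.6% figure in the text
```

(At theta = 0 the two results always disagree, recovering perfect anticorrelation; at 45 degrees the correlation washes out to 50/50.)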

You cannot possibly interpret this as a case of merely revealing properties that were already there; this is ruled out by Bell's Theorem.  There does not seem to be any possible consistent view of the universe in which both quantum measurements have a single outcome, and yet both measurements are predetermined, neither influencing the other.  Something has to actually change, faster than light.

And this would appear to be a fully general objection, not just to collapse theories, but to any possible theory that gives us one global world!  There is no consistent view in which measurements have single outcomes, but are locally determined (even locally randomly determined).  Some mysterious influence has to cross a spacelike gap.

This is not a trivial matter.  You cannot save yourself by waving your hands and saying, "the influence travels backward in time to the entangled photons' creation, then forward in time to the other photon, so it never actually crosses a spacelike gap".  (This view has been seriously put forth, which gives you some idea of the magnitude of the paradox implied by one global world!)  One measurement has to change the other, so which measurement happens first?  Is there a global space of simultaneity?  You can't have both measurements happen "first" because under Bell's Theorem, there's no way local information could account for observed results, etc.
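The impossibility being invoked is Bell's Theorem, and the local-hidden-variable half of it can be checked by brute force: enumerate every deterministic local strategy in the standard CHSH version of the experiment and confirm that none reaches the quantum value (a textbook computation, sketched here using the singlet-state spin correlation E(a, b) = -cos(a - b)):

```python
from itertools import product
from math import cos, sqrt, pi

# CHSH experiment: Alice uses setting a1 or a2, Bob uses b1 or b2;
# each outputs +1 or -1.  A deterministic local theory fixes all four
# answers in advance, independent of the other side's setting.
best_local = max(
    a1 * b1 + a1 * b2 + a2 * b1 - a2 * b2
    for a1, a2, b1, b2 in product([1, -1], repeat=4)
)

# Quantum mechanics on a spin singlet gives correlations
# E(a, b) = -cos(a - b); with the standard optimal settings the CHSH
# quantity reaches 2*sqrt(2).
def E(a, b):
    return -cos(a - b)

a1, a2, b1, b2 = 0.0, pi / 2, pi / 4, -pi / 4
quantum = abs(E(a1, b1) + E(a1, b2) + E(a2, b1) - E(a2, b2))

print(best_local)  # 2: the CHSH bound for any local deterministic theory
print(quantum)     # about 2.828: what entangled particles actually achieve
```

(Probabilistic mixtures of local strategies can't exceed the deterministic maximum, which is why the enumeration suffices; experiment matches the quantum prediction.)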

Incidentally, this experiment has already been performed, and if there is a mysterious influence it would have to travel six million times as fast as light in the reference frame of the Swiss Alps.  Also, the mysterious influence has been experimentally shown not to care if the two photons are measured in reference frames which would cause each measurement to occur "before the other".

Special Relativity seems counterintuitive to us humans—like an arbitrary speed limit, which you could get around by going backward in time, and then forward again.  A law you could escape prosecution for violating, if you managed to hide your crime from the authorities.

But what Special Relativity really says is that human intuitions about space and time are simply wrong.  There is no global "now", there is no "before" or "after" across spacelike gaps.  The ability to visualize a single global world, even in principle, comes from not getting Special Relativity on a gut level.  Otherwise it would be obvious that physics proceeds locally with invariant states of distant entanglement, and the requisite information is simply not locally present to support a globally single world.

It might be that this seemingly impeccable logic is flawed—that my application of Bell's Theorem and relativity to rule out any single global world, contains some hidden assumption of which I am unaware -

- but consider the burden that a single-world theory must now shoulder!  There is absolutely no reason in the first place to suspect a global single world; this is just not what current physics says!  The global single world is an ancient human intuition that was disproved, like the idea of a universal absolute time.  The superposition principle is visible even in half-silvered mirrors; experiments are verifying the disproof at steadily larger levels of superposition—but above all there is no longer any reason to privilege the hypothesis of a global single world.  The ladder has been yanked out from underneath that human intuition.

There is no experimental evidence that the macroscopic world is single (we already know the microscopic world is superposed).  And the prospect necessarily either violates Special Relativity, or takes an even more miraculous-seeming leap and violates seemingly impeccable logic.  The latter, of course, being much more plausible in practice.  But it isn't really that plausible in an absolute sense.  Without experimental evidence, it is generally a bad sign to have to postulate arbitrary logical miracles.

As for quantum non-realism, it appears to me to be nothing more than a Get-Out-Of-Jail-Free card.  "It's okay to violate Special Relativity because none of this is real anyway!"  The equations cannot reasonably be hypothesized to deliver such excellent predictions for literally no reason.  Bell's Theorem rules out the obvious possibility that quantum theory represents imperfect knowledge of something locally deterministic.

Furthermore, macroscopic decoherence gives us a perfectly realistic understanding of what is going on, in which the equations deliver such good predictions because they mirror reality.  And so the idea that the quantum equations are just "meaningless", and therefore, it is okay to violate Special Relativity, so we can have one global world after all, is not necessary.  To me, quantum non-realism appears to be a huge bluff built around semantic stopsigns like "Meaningless!" 

It is not quite safe to say that the existence of multiple Earths is as well-established as any other truth of science.  The existence of quantum other worlds is not so well-established as the existence of trees, which most of us can personally observe.

Maybe there is something in that 20th decimal place, which aggregates to something bigger in macroscopic events.  Maybe there's a loophole in the seemingly iron logic which says that any single global world must violate Special Relativity, because the information to support a single global world is not locally available.  And maybe the Flying Spaghetti Monster is just messing with us, and the world we know is a lie.

So all we can say about the existence of multiple Earths, is that it is as rationally probable as e.g. the statement that spinning black holes do not violate conservation of angular momentum.  We have extremely fundamental reasons, having to do with the rotational symmetry of space, to suspect that conservation of angular momentum is built into the underlying nature of physics.  And we have no specific reason to suspect this particular violation of our old generalizations in a higher-energy regime.

But we haven't actually checked conservation of angular momentum for rotating black holes—so far as I know.  (And as I am talking here about rational guesses in states of partial knowledge, the point is exactly the same if the observation has been made and I do not know it yet.)  And black holes are a more massive regime.  So the obedience of black holes is not quite as assured as that my toilet conserves angular momentum while flushing, which, come to think of it, I haven't checked either...

Yet if you make the mistake of thinking too hard about this one particular possibility, instead of zillions of other possibilities—and especially if you don't understand the fundamental reason why angular momentum is conserved—then it may start seeming more and more plausible that "spinning black holes violate conservation of angular momentum", as you think of more and more vaguely plausible-sounding reasons it could be true.

But the rational probability is pretty damned small.

Likewise the rational probability that there is only one Earth.

I mention this to explain my habit of talking as if many-worlds is an obvious fact.  Many-worlds is an obvious fact, if you have all your marbles lined up correctly (understand very basic quantum physics, know the formal probability theory of Occam's Razor, understand Special Relativity, etc.).  It is in fact considerably more obvious to me than the proposition that spinning black holes should obey conservation of angular momentum.

The only reason why many-worlds is not universally acknowledged as a direct prediction of physics which requires magic to violate, is that a contingent accident of our Earth's scientific history gave an entrenched academic position to a phlogiston-like theory which had an unobservable faster-than-light magical "collapse" devouring all other worlds.  And many academic physicists do not have a mathematical grasp of Occam's Razor, which is the usual method for ridding physics of invisible angels.  So when they encounter many-worlds and it conflicts with their (undermined) intuition that only one world exists, they say, "Oh, that's multiplying entities"—which is just flatly wrong as probability theory—and go on about their daily lives.

I am not in academia.  I am not constrained to bow and scrape to some senior physicist who hasn't grasped the obvious, but who will be reviewing my journal articles.  I need have no fear that I will be rejected for tenure on account of scaring my students with "science-fiction tales of other Earths".  If I can't speak plainly, who can?

So let me state then, very clearly, on behalf of any and all physicists out there who dare not say it themselves:  Many-worlds wins outright given our current state of evidence.  There is no more reason to postulate a single Earth, than there is to postulate that two colliding top quarks would decay in a way that violates conservation of energy.  It takes more than an unknown fundamental law; it takes magic.

The debate should already be over.  It should have been over fifty years ago.  The state of evidence is too lopsided to justify further argument.  There is no balance in this issue.  There is no rational controversy to teach.  The laws of probability theory are laws, not suggestions; there is no flexibility in the best guess given this evidence.  Our children will look back at the fact that we were STILL ARGUING about this in the early 21st century, and correctly deduce that we were nuts.

We have embarrassed our Earth long enough by failing to see the obvious.  So for the honor of my Earth, I write as if the existence of many-worlds were an established fact, because it is.  The only question now is how long it will take for the people of this world to update.

 

Part of The Quantum Physics Sequence

Next post: "Living in Many Worlds"

Previous post: "If Many-Worlds Had Come First"

Comments (74)

Comment author: Roland2 11 May 2008 09:12:18AM 0 points

Correction: Eliezer, you wrote "Jess Reidel" but correct is "Jess RIEDEL".

Comment author: mitchell_porter2 11 May 2008 11:40:22AM 2 points

False for three reasons.

First: The Born probabilities. That is where all the predictive power of quantum theory is located. If you don't have those, you just have a qualitative world-picture, one of many possibilities.

Second: There is no continuity of identity in time of a world, as I suppose we shall see in the Julian Barbour instalment; nothing to relate the worlds extracted from the wavefunction in one moment to those extracted in the next, nothing to say 'this world is the continuation of that one'. The denial of continuity in time is a radical step and should be recognized as such.

Third: If you favor the position basis, then as things stand, you have to talk about instantaneous spacelike states of the whole universe, i.e. there is a conceptually (though not dynamically) special reference frame. You are free to say 'maybe we can do it differently, in a way that's more relativistic', but for now that's just a hope.

For all these reasons, many worlds is not obviously the leading contender.

Comment author: mitchell_porter2 11 May 2008 11:53:07AM 3 points

I suppose the basic intuition here is, "Superposition is real for small things, we have no evidence that it breaks down for large things, and superposition means multiple instances of the thing superposed; therefore, many worlds, not just many electrons."

But is it clear that superposition means multiple instances of the thing superposed? Consider the temporal zigzag interpretations. There it is supposed that there is only one history between first and final observed event, and that the amplitudes are just the appropriate form of probabilities, not signs of multiple coexisting actualities. The temporal zigzag theorists cannot yet rigorously show that this is so; but the many worlds people cannot show that they get the right probabilities either. Therefore, even at the level of the individual quantum process, there is no evidence to favor the interpretation of superposition as denoting multiple actuality rather than multiple possibility.

Comment author: whowhowho 25 January 2013 05:07:48PM 0 points

> But is it clear that superposition means multiple instances of the thing superposed? Consider the temporal zigzag interpretations.

Consider also that superposition is observer-relative.

Comment author: mitchell_porter2 11 May 2008 12:28:34PM 1 point

Eliezer asked (of zigzag theories): "One measurement has to change the other, so which measurement happens first?"

It doesn't have to be that way. Events can be determined through a combination of local causality and global consistency; see the work on attempts to create time travel paradoxes using wormholes. For example, you may set things up so that a sphere, sent into one end of a wormhole, should emerge from the other in such a way as to collide with itself on the way in, thereby preventing its entry. It sounds like a grandfather paradox: what's the answer? The answer is that only nonparadoxical histories are even possible; such as those in which the sphere emerges and perturbs its prior course, but not by so much as to prevent its entry into the wormhole.

The harmony of distant outcomes in an EPR experiment may similarly be due to a global consistency.

Ideally, in order to apply the description-length version of Occam's razor to competing and wildly dissimilar theories, such as we have in these attempts to explain quantum mechanics, one would first take the rival theories, embed them in a common superfamily of possible theories, deploy some prior across that superfamily, and then condition on experimental results. However, neither many worlds nor temporal zigzag is even capable of reproducing experimental results, so long as they cannot derive the Born probabilities. There are two types of realist theories which are experimentally adequate: stochastic objective collapse theories (e.g. Ghirardi-Rimini-Weber), and deterministic nonlocal hidden-variable theories (e.g. Bohm). In theory, if we're trying to figure out our best current guess, we have to choose between those two! In practice, it seems obvious that theoretical pluralism is still called for, and that much more work needs to be done by the advocates of interpretations which remain qualitative but could become quantitative.

Comment author: Jason3 11 May 2008 01:20:02PM 1 point

Have you considered nonlocal hidden variables (Bohm's version in particular)? The "pilot-wave" model does away with many worlds and the problems that you see many worlds addressing as far as I can tell.

Comment author: Recovering_irrationalist 11 May 2008 03:18:22PM 2 points

Eliezer, continued compliments on your series. As a wise man once said, it's remarkable how clear explanations can become when an expert's trying to persuade you of something, instead of just explaining it. But are you sure you're giving appropriate attention to rationally stronger alternatives to MWI, rather than academically popular but daft ones?

Comment author: Günther_Greindl 11 May 2008 03:34:45PM 2 points

Mitchell,

there is another argument speaking for many-worlds (indeed, even for all possible worlds - which raises new interesting questions of what is possible of course - certainly not everything that is imaginable): that to specify one universe with many random events requires lots of information, while if _everything_ exists the information content is zero - which fits nicely with ex nihilo nihil fit :-)

Structure and concreteness only emerges from the inside view, which gives the picture of a single world. Max Tegmark has paraphrased this idea nicely with the quip "many words or many worlds" (words standing for high information content).

Max's paper is quite illuminating: Tegmark, Max. 2007. The Mathematical Universe http://arxiv.org/abs/0704.0646

So we could say that there are good metaphysical reasons for preferring MWI to GRW or Bohm.

Comment author: naasking 13 May 2013 07:00:05PM 0 points

there is another argument speaking for many-worlds (indeed, even for all possible worlds - which raises new interesting questions of what is possible of course - certainly not everything that is imaginable): that to specify one universe with many random events requires lots of information, while if everything exists the information content is zero - which fits nicely with ex nihilo nihil fit

Now THAT's an interesting argument for MWI. It's not a final nail in the coffin for de Broglie-Bohm, but the naturalness of this property is certainly compelling.

Comment author: RobbBB 23 September 2013 03:42:55AM 2 points

Although Tegmark incidentally endorses MWI, Tegmark's MUH does not entail MWI. Yes, if there's a model of MWI, then some world follows MWI; but our world can be a part of a MUH ensemble without being in an MWI-bound region of the ensemble. We may be in a Bohmian portion of the ensemble.

Tegmark does seem to think MWI provides some evidence for MUH (which would mean that MUH predicts MWI over BM), but I think the evidence is negligible at best. The reasons to think MWI is true barely overlap at all with the reasons to think MUH is. In fact, the failure of Ockham to resolve BM v. MW could well provide evidence against MUH; if MWI (say) turned out to be substantially more complex (in a way that gives it fewer models) and yet true, that would give strong anthropic evidence against MUH. MUH is more plausible if we live in the kind of world that should predominate in the habitable zone of an ensemble.

Comment author: RobbBB 23 September 2013 03:31:49AM 2 points [-]

But MWI is not the doctrine 'everything exists'. This is a change of topic. Yes, if we live in a Tegmark universe and MWI is the simplest theory, then it's likely we live in one of the MWI-following parts of the universe. But if we don't live in a Tegmark universe and MWI is the simplest theory, then it's still likely we live in one of the MWI-following possible worlds. It seems to me that all the work is being done by Ockham, not by Tegmark.

Comment author: EHeller 23 September 2013 03:41:50AM 0 points [-]

Max Tegmark has paraphrased this idea nicely with the quip "many words or many worlds"

Sure, but why is the information content of the current state of the universe something that we would want to minimize? In both many-worlds and alternatives, the complexity of the ALGORITHM is roughly the same.

Comment author: billswift 11 May 2008 03:50:37PM 4 points [-]

"If the same laws are true at all levels - i.e., if many-worlds is correct - then when you measure one of a pair of entangled polarized photons, you end up in a world in which the photon is polarized, say, up-down, and alternate versions of you end up in worlds where the photon is polarized left-right. From your perspective before doing the measurement, the probabilities are 50/50. Light-years away, someone measures the other photon at a 20° angle to your own basis. From their perspective, too, the probability of getting either immediate result is 50/50 - they maintain an invariant state of generalized entanglement with your faraway location, no matter what you do. But when the two of you meet, years later, your probability of meeting a friend who got the same result is 11.6%, rather than 50%.

"If there is only one global world, then there is only a single outcome of any quantum measurement. Either you measure the photon polarized up-down, or left-right, but not both. Light-years away, someone else's probability of measuring the photon polarized similarly in a 20° rotated basis, actually changes from 50/50 to 11.6%."
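As an aside, the 11.6% figure in the quoted passage matches the standard quantum prediction for anticorrelated polarization-entangled photons measured in bases rotated by θ, namely P(same result) = sin²θ. A quick check, assuming that is the setup being described:

```python
import math

def p_same_result(theta_deg):
    """P(both observers record the same outcome) for anticorrelated
    polarization-entangled photons measured at relative angle theta,
    using the standard prediction P(same) = sin^2(theta)."""
    theta = math.radians(theta_deg)
    return math.sin(theta) ** 2

print(p_same_result(20.0))  # ~0.117, the ~11.6% figure quoted above
print(p_same_result(0.0))   # aligned bases: the results never agree
```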

I don't see how you claim many-worlds gets you around the special relativity problem: the measurements can only be compared within one world. How would postulating other non-interacting (after the split) worlds help?

Also I have been having trouble following your posts. Your writing here has the same problem many weirdos (IDers, perpetual-motion-machine makers, etc.) have. Any facts and arguments are getting lost in your wordiness. You might want to try to post brief explanations of what **specifically** your claims are in each post (maybe as occasional summing-up posts).

Comment author: athmwiji 11 May 2008 04:17:59PM 0 points [-]

Brains, as far as we currently understand them, are not digital. For a neuron, fire/not-fire is digital, but there is a lot of information involved in determining whether or not a neuron fires. A leaky integrator is a reasonable rough approximation to a neuron, and it is continuous.

Comment author: Richard_Hollerith2 11 May 2008 04:33:42PM 0 points [-]

Your writing here has the same problem many weirdos . . . have. Any facts and arguments are getting lost in your wordiness.

Unfair. Eliezer has been trying to keep the series accessible to nonspecialists, and of course that means that the specialists are going to wade through more words than they would have preferred to wade through. Boo hoo.

Comment author: Eliezer_Yudkowsky 11 May 2008 04:35:47PM 2 points [-]

Brains, as far as we currently understand them, are not digital. For a neuron, fire/not-fire is digital, but there is a lot of information involved in determining whether or not a neuron fires. A leaky integrator is a reasonable rough approximation to a neuron, and it is continuous.

The point is that by the time two brains differ by a whole neuron firing, they are decoherent - far too many particles in different positions. That's why you can't feel the subtle influence of someone trying to think a little differently from you - by the time a single neuron fires differently, the influence has diminished down to an exponentially tiny infinitesimal. Even a single neurotransmitter in a different place prevents two configurations from being identical.

@Billswift: The point is that nothing happens differently as a result of distant events - no local evolution, no probabilistic chance, no experience, no "non-signaling influence", nothing changes - until the two parties meet, slower than light. You can (I think) split it up and view it in terms of strictly local events with invariant states of distant entanglement.

@Recovering irrationalist: I haven't encountered any stronger arguments for the untestable SR-violating single-world theory. Sure, no one knows what science doesn't know. But given that I believe single-worlds is false, I should not expect to encounter unknown strong arguments for it. Do you have a particular stronger argument in mind?

@Jason: Bohm's particles are epiphenomena. The pilot-wave must be real to guide the particles; the particles themselves have no effect. If the pilot-wave is real, the amplitude distribution we know is real, and it will have conscious observers in it if it performs computations, etc. And there is simply no reason to suppose it.

@Mitchell: Of Born I have already extensively spoken (your 1), and postulating a single world doesn't help you at all; it is strictly simpler to say "The Born probabilities exist" than to say "The Born probabilities exist and control a magical FTL collapse" or "The Born probabilities exist and pilot epiphenomenal points [also FTL]." On your 2, it so happens that I don't deny causal continuity, and plan to speak of this later. And regarding (3) quantum physics describes a covariant, local process so it seems like a good guess that there exists a covariant, local representation; but regardless the essence of Special Relativity is in the covariance and locality, whether we can find a representation that reveals it, or not.

Comment author: aaronsw 04 August 2012 10:23:34AM *  2 points [-]

"it will have conscious observers in it if it performs computations"

So your argument against Bohm depends on information functionalism?

Comment author: Dynamically_Linked 11 May 2008 05:09:28PM 1 point [-]

Robin Hanson suggests that if exponentially tinier-than-average decoherent blobs of amplitude ("worlds") are interfered with by exponentially tiny leakages from larger blobs, we will get the Born probabilities back out.

Shouldn't it be possible for a tinier-than-average decoherent blob of amplitude to deliberately become less vulnerable to interference from leakages from larger blobs, by evolving itself to an isolated location in configuration space (i.e., a point in configuration space with no larger blobs nearby)? For example, it seems that we should be able to test the mangled worlds idea by doing the following experiment:

1. Set up a biased quantum coin, so that there is a 1/4 Born probability of getting an outcome of 0, and 3/4 of getting 1.
2. After observing each outcome of the quantum coin toss, broadcast the outcome to a large number of secure storage facilities. Don't start the next toss until all of these facilities have confirmed that they've received and stored the previous outcome.
3. Repeat 100 times.

Now consider a "world" that has observed an almost equal number of 0s and 1s at the end, in violation of Born's rule. I don't see how it can get mangled. (What larger blob will be able to interfere with it?) So if mangled worlds is right, then we should expect a violation of Born's rule in this experiment. Since I doubt that will be the case, I don't think mangled worlds can be right.
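The "exponentially tiny" intuition behind this experiment can be checked with ordinary binomial arithmetic (a sketch; it assumes the total Born weight of the branches with k ones in n tosses is just the binomial term, which is what the setup implies):

```python
from math import comb

def born_weight(n, k_lo, k_hi, p=0.75):
    """Total Born probability of branches whose count of 1s falls in
    [k_lo, k_hi], for n tosses of a quantum coin with P(1) = p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_lo, k_hi + 1))

# Branches with a roughly equal number of 0s and 1s (45-55 ones out of 100):
w = born_weight(100, 45, 55)
print(w)  # on the order of 1e-5: tiny, but not zero
```

So the Born's-rule-violating branches described here really do carry exponentially little squared amplitude; the open question in the thread is whether that by itself suffices for mangling.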

Comment author: Goplat 11 May 2008 05:16:54PM 0 points [-]

So the Bohm interpretation takes the same amplitude distribution as many-worlds and builds something on top of that. So what? That amplitude distribution is just a mathematical object, but it having a physical existence certainly doesn't change the truth or falsehood of any mathematical statements, so I could just as easily say that the amplitude distribution itself is an "epiphenomenon" (and therefore can't exist).

Comment author: RobinHanson 11 May 2008 05:41:53PM 0 points [-]

Dynamically, "secure storage facilities" are not at all secure against world mangling. Perhaps quantum error correction could do better.

Comment author: Dynamically_Linked 11 May 2008 06:14:27PM 0 points [-]

Robin, can you offer some intuitive explanation as to why defense against world mangling would be difficult? From what I understand, a larger blob of amplitude (world) can mangle a smaller blob of amplitude only if they are close together in configuration space. Is that incorrect? If those "secure storage facilities" simply write the quantum coin toss outcomes in big letters on some blackboards, which worlds will be close enough to be able to mangle the worlds that violate Born's rule?

Comment author: Eliezer_Yudkowsky 11 May 2008 08:50:35PM 0 points [-]

Dynamically, I think the problem is that for everything you try that would render your world "distant" in the configuration space, it naturally tends to make your world smaller and more vulnerable, too. The worlds mangling yours aren't close, it's just that, collectively, they're so much larger than yours, that even very tiny stray amplitude flows from them can mangle you.

@Goplat: In Bohm's theory, the amplitude distribution has to be real because it affects the course of the particles. But the amplitude distribution itself is not affected by the particles. So any people encoded in the amplitude distribution - which can certainly compute things - would have no way of knowing the particles existed.

Comment author: drnickbone 27 June 2012 08:29:10PM *  2 points [-]

Rather a late comment... but this response to Goplat reminds me of one of David Lewis's arguments for modal realism. Namely, he argues that "merely possible" people have exactly the same evidence that they are "real" as we do (it all looks real to them), and hence we ourselves have no evidence that we are "real" rather than merely possible.

An objection to this is "No! Merely possible people DON'T have evidence that they are real, because they don't exist. They don't have any evidence at all. They WOULD have the same evidence that we do if they DID exist, but then of course they WOULD be real."

A similar objection is that the wave function amplitudes can't do any real computation (as opposed to possible computation) unless they have real particles to compute with. So any people who find themselves existing can infer (correctly) that they are made out of real particles and not mere amplitudes.

It always amuses me that the particle motions in Bohm's theory are described as "hidden variables". Rather to the contrary, they are the ONLY things in the theory which are NOT hidden (whereas the wave function pushing the particles around is...)

Comment author: Jason3 11 May 2008 09:16:31PM 0 points [-]

"it will have conscious observers in it if it performs computations" I'm at a loss for what this means.

"In Bohm's theory, the amplitude distribution has to be real because it affects the course of the particles. But the amplitude distribution itself is not affected by the particles. So any people encoded in the amplitude distribution - which can certainly compute things - would have no way of knowing the particles existed." How is not being able to know where the particular particles are in a particular amplitude distribution an argument against it?

Comment author: Robin_Z 11 May 2008 09:31:12PM 1 point [-]

Oh, that's subtle.

Check me if I'm wrong: according to the MWI, the evolving waveform itself can include instantiations of human beings, just as an evolving Conway's Life grid can include gliders. Thus, if we're proposing that humans exist (a reasonable hypothesis), they exist in the waveform, and if the Bohmian particles do not influence the evolution of the waveform, they exist in the waveform the same way whether or not Bohm's particles are there. And, in fact, if they do not influence the amplitude distribution, they're epiphenomenal in the same sense that people like Chalmers claim consciousness is.

If the particles do influence the evolution of the amplitude distribution, everything changes (of course). But that remains to be shown.
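The glider analogy above can be made concrete in a few lines (an illustrative sketch, not anything from the thread): a glider is a pattern that persists and moves within the evolving grid even though no individual cell "is" the glider, just as an instantiated observer would persist within the evolving waveform:

```python
from collections import Counter

def life_step(alive):
    """One generation of Conway's Life on an unbounded grid;
    `alive` is a set of (row, col) cells."""
    counts = Counter((r + dr, c + dc)
                     for (r, c) in alive
                     for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                     if (dr, dc) != (0, 0))
    # A cell is live next generation with exactly 3 neighbors,
    # or with 2 neighbors if it is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in alive)}

glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = life_step(state)
# After four generations the identical pattern reappears, shifted diagonally:
assert state == {(r + 1, c + 1) for (r, c) in glider}
```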

Comment author: Dynamically_Linked 11 May 2008 10:27:09PM 0 points [-]

Eliezer, I think your (and Robin's) intuition is off here. Configuration space is so vast, it should be pretty easy for a small blob of amplitude to find a hiding place that is safe from random stray flows from larger blobs of amplitude.

Consider a small blob in my proposed experiment where the number of 0s and 1s are roughly equal. Writing the outcomes on blackboards does not reduce the integrated squared modulus of this blob, but does move it further into "virgin territory", away from any other existing blobs. In order for it to be mangled by stray flows from larger blobs, those stray flows would somehow have to reach the same neighborhood as the small blob. But how? Remember that in this neighborhood of configuration space, the blackboards have a roughly equal number of 0s and 1s. What is the mechanism that can allow a stray piece of a larger blob to reach this neighborhood and mangle the smaller blob? It can't be random quantum fluctuations, because the Born probability of the same sequence of 0s and 1s spontaneously appearing on multiple blackboards is much less than the integrated squared modulus of the small blob. To put it another way, by the time a stray flow from a larger blob reaches the small blob, its amplitude would be spread much too thin to mangle the small blob.

Comment author: Wiseman 11 May 2008 10:29:02PM 0 points [-]

Question: how does MWI not violate SR/no-faster-than-light-travel itself?

That is, if decoherence happens with a particle/amplitude, requiring at that point a split universe in order to process everything so both possibilities actually happen, how do all particles across the entire universe know that at that point they must duplicate/superposition/whatever, in order to maintain the integrity of two worlds where both possibilities happen?

Comment author: Recovering_irrationalist 12 May 2008 12:04:51AM 2 points [-]

Eliezer: But given that I believe single-worlds is false, I should not expect to encounter unknown strong arguments for it.

Indeed. And in light of your QM explanation, which to me sounds perfectly logical, it seems obvious and normal that many worlds is overwhelmingly likely. It just seems almost too good to be true that I now get what plenty of genius quantum physicists still can't.

The mental models/neural categories we form strongly influence our beliefs. The ones that now dominate my thinking about QM are learned from one who believes overwhelmingly in MWI. The commenters who already had non-MWI-supporting mental representations that made sense to them seem less convinced by your arguments.

Sure I can explain all that away, and I still think you're right, I'm just suspicious of myself for believing the first believable explanation I met.

Comment author: Patrick_(orthonormal) 12 May 2008 02:32:38AM 1 point [-]

Well, now I think I understand why you chose to do the QM series on OB. As it stands, the series is a long explication of one of the most subtle anthropocentric biases out there— the bias in favor of a single world with a single past and future, based on our subjective perception of a single continuous conscious experience. It takes a great deal of effort before most of us are even willing to recognize that assumption as potentially problematic.

Oh, and one doesn't even have to assume the MWI is true to note this; the single-world bias is irrationally strong in us even if it turns out to correspond to reality.

Comment author: mitchell_porter2 12 May 2008 05:25:41AM 1 point [-]

Günther, I am aware of that argument, but it has very little to do with favoring many worlds in the sense of Everett. See Tegmark's distinction between Level III and Level IV. The worlds of an Everett multiverse are supposed to be connected facets of a single entity, not disjoint Level IV entities.

This allows me to highlight another aspect of many worlds, which is the thorough confusion regarding causality. What are the basic cause-and-effect relationships, according to many worlds? What are the entities that enter into them? Do worlds have causal power, or are they purely epiphenomenal? Remember, that-which-exists at any moment does not just consist of a set of worlds, but a set of worlds each with a complex number attached. And that-which-exists in the next moment is - the same set of worlds, but now with different complex numbers attached. The more I think about it, the less sense it makes, but people have been seduced by the simple-sounding rhetoric of worlds splitting and recombining.

To say it all again: what are we being offered by this account? On the one hand, a qualitative picture: the reality we see is just one sheet in a sheaf of worlds, which split and merge as they become dissimilar and then become similar again. On the other hand, a quantitative promise: the picture isn't quite complete, but we hope to get the exact probabilities back somehow.

Now what is the reality of quantum mechanics, applied to the whole universe? (If we adopt the configuration-centric approach.) There is a space of classical-looking "configurations" - each an arrangement of particles in space, or a frozen sea of waves in fundamental fields. Then, there is a complex number, a "probability amplitude", associated with each configuration. Finally, we have an equation, the Schrödinger equation, which describes how the complex numbers change with time. That's it.

If we just look at the configuration space, and ignore the complex numbers, there is no splitting and merging, nothing changes. We have a set of instantaneous world-states, just sitting there.

If we try to bring the complex numbers into the picture, there are two obvious options. One is to identify a world with a particular static configuration. Then nothing ever actually moves in any world, all that changes is the mysterious complex number globally associated with it. That's one way to break down a universal wavefunction into "many worlds", but it seems meaningless.

The other way is to break down the wavefunction at any moment in that fashion, but to deny any relationship between the worlds of one moment and the world of the next, as I described it up in my second paragraph. So once again, reality ends up consisting of a set of static spatial configurations, each with a complex number magically attached, but there is no continuity of existence.

There is actually a third option, however - an alternative way to assert continuity of existence, between the worlds of one moment and the world of the next. Basically, you go against the gradient in configuration space of the angles of the complex numbers, in order to decide which world-moments later continue the world-moments of now. That defines a family of nonintersecting trajectories, each of which resembles a perturbed version of a classical history. In fact, we've just reinvented a form of Bohmian mechanics.
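For reference, this "third option" is essentially the de Broglie-Bohm guidance equation (the standard textbook form, supplied here for concreteness rather than quoted from the comment). Writing the wavefunction in polar form, each trajectory follows the phase gradient:

```latex
\psi = R\, e^{iS/\hbar}, \qquad
\frac{d\mathbf{x}_k}{dt} = \frac{\nabla_k S}{m_k}
= \frac{\hbar}{m_k}\,\operatorname{Im}\frac{\nabla_k \psi}{\psi}
```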

But enough. I hope someone grasps that we have simply not been given a picture which can sustain the rhetoric of worlds splitting. Either the worlds sit there unchanging, or they only exist for a moment, or they are self-sufficient Bohmian worlds which neither split nor join. If I try to understand mangled worlds in this way, it seems to say, "ignore those configurations where the amplitude is very small". But they're either there or not; and if they are literally not, we're no longer using the Schrödinger equation.

Comment author: Eliezer_Yudkowsky 12 May 2008 06:38:32AM 0 points [-]

Mitchell, you already know about Barbour, so why are you asking this?

Comment author: constant3 12 May 2008 06:50:05AM 0 points [-]

Remember, that-which-exists at any moment does not just consist of a set of worlds, but a set of worlds each with a complex number attached. And that-which-exists in the next moment is - the same set of worlds, but now with different complex numbers attached.

You seem to be talking about the wavefunction, which is a complex function defined over the configuration space (a set of configurations each with a complex number attached). But in that case you seem to be confusing a world with a configuration. A configuration defines only position. (Assuming we're talking about positional configuration space.)

It seems I can save myself some trouble explaining by quoting Eliezer:

A point mass of amplitude, concentrated into a single exact position in configuration space, does not correspond to a precisely known state of the universe. It is physical nonsense.

It's like asking, in Conway's Game of Life: "What is the future state of this one cell, regardless of the cells around it?" The immediate future of the cell depends on its immediate neighbors; its distant future may depend on distant neighbors.

If Conway's Game of Life managed to support a multiverse, then a single universe in this multiverse would not correspond to a cell. It would correspond to some section of the whole pattern quite a bit larger than a single cell - a section which was for the most part causally separated from the rest of the pattern. And this section might move around over Conway's gameboard (or whatever it's called), just as a glider can move across Conway's gameboard.

Comment author: mitchell_porter2 12 May 2008 07:00:47AM 1 point [-]

So far, we're still implicitly in a framework where there's time evolution, so I have described ways of implementing the many worlds vision in that framework. I am a little hesitant to preempt your next step (after all, I don't know what idiosyncratic spin you may put on things), but nonetheless: Suppose we adopt the "timeless" perspective. The wavefunction of the universe is a standing wave in configuration space; it does not undergo time evolution. My first option means nothing, because now we just have a static association of amplitudes with configurations. The second option is Barbour's - disconnected "time capsules" - only now there isn't even a question of linking up the time capsules of one moment with the time capsules of the next, because there's only one timeless moment throughout configuration space. I don't know if the third option is still viable or not; you can still compute the phase gradients of the standing wave according to the Bohmian law of motion, but I don't know about the properties of the resulting trajectories.

There may be a problem for mangled worlds peculiar to Barbour's model; there are no dynamics, therefore no mangling in any dynamical sense. You will have to come up with a nondynamical notion of decoherence too.

Comment author: mitchell_porter2 12 May 2008 07:10:09AM 1 point [-]

(Previous comment was in response to Eliezer's 02:38 AM.)

constant, part of my objective is to highlight the vagueness of the concept of "world" as used by many-worlds advocates, and the problems peculiar to the various ways of making it exact, having previously argued that leaving it vague is not an option. I have certainly seen many-worlds people talk as if worlds were "wave packets" or other extended substructures within the total wavefunction. But I await a precise statement of what that means.

Comment author: constant3 12 May 2008 07:42:40AM 0 points [-]

mitchell,

I think Eliezer recognizes the vagueness of "world" but sees it as a problem for single-worlders. This is what he seems to be saying here:

We have specific reasons to be highly suspicious of the notion of only one world. The notion of "one world" exists on a higher level of organization, like the location of Earth in space; on the quantum level there are no firm boundaries (though brains that differ by entire neurons firing are certainly decoherent). How would a fundamental physical law identify one high-level world?

Comment author: Eliezer_Yudkowsky 12 May 2008 07:56:13AM 2 points [-]

What flows is not time, but causality. As you guessed, I shall expand on that later. I think Barbour's time capsules reflect his lack of cog-sci-phil background - a static disk drive should never contain any observers; something has to be processed. You cannot identify observer-moments with individual configurations, which seems to be what Barbour is trying to do.

From the perspective outside time, nothing changes, but things are nonetheless determined by their causal ancestors. This is what makes the notion of "local causality" or Markov neighborhoods meaningful. This flow of determination is what supports computation, which is what supports the existence of observers. This means that no observer is ever embedded in a single configuration; only a determination of future configurations' amplitude by past configurations' amplitude, can support computation and consciousness.

Which I consider as common sense. Timelessness, also, adds up to normality; there's still a future, there's still a past, and there's still a causal relation between throwing a rock and breaking a window. None of that goes away when you take a standpoint outside time.

Comment author: mitchell_porter2 12 May 2008 08:22:02AM 1 point [-]

constant - well, then, it is shaping up as follows: We need some concept of world. We can try to be exact about it, and run into various problems, as I have suggested above. Or we can be determinedly vague about it - e.g. saying that a world is a roughly decoherent blob of amplitude - and run into other problems. And then on top of this we can't even recuperate the quantitative side of quantum mechanics.

There is a form of many-worlds that gives you the correct probabilities back. It's called consistent histories or decoherent histories. But it has two defining features. First of all, the histories in question are "coarse-grained". For example, if your basic theory was a field theory, in one of these consistent histories, you don't specify a value for every field at every space-time point, just a scattering of them. Second, each consistent history has a global probability associated with it - not a probability amplitude, just an ordinary probability. Within this framework, if you want to calculate a transition probability - the odds of B given A - first you consider only those histories in which A occurs, and then you compute Pr(B|A) by using those a priori global probabilities.

Those global probabilities don't come from nowhere. The basic mathematical entity in consistent histories is an object called the decoherence functional (which can be derived from a familiar-to-physicists postulate like an action or a Hamiltonian), which takes as its input two of these coarse-grained histories. The decoherence functional defines a consistency condition for the coarse-grained histories; a set of them is "consistent" if they are all pairwise decoherent according to the decoherence functional. You then get that a priori global probability for the individual history by using it for both inputs (in effect, calculating its self-decoherence, though I don't see what that could mean). The whole thing is reminiscent of a diagonalized density matrix, and if I understood it better I'm sure I could make more of that similarity.
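For readers who want the formalism behind this paragraph, the standard Gell-Mann and Hartle definitions (supplied here for concreteness, not quoted from the comment) are: each coarse-grained history \alpha corresponds to a class operator built from a chain of projectors, and the decoherence functional takes two histories as input:

```latex
C_\alpha = P^{(n)}_{\alpha_n}(t_n) \cdots P^{(1)}_{\alpha_1}(t_1), \qquad
D(\alpha', \alpha) = \operatorname{Tr}\!\left[ C_{\alpha'}\, \rho\, C_\alpha^{\dagger} \right]
```

A set of histories is consistent when D(\alpha', \alpha) \approx 0 for \alpha' \neq \alpha, and the global probability of an individual history is the diagonal entry p(\alpha) = D(\alpha, \alpha), which is what makes the comparison to a diagonalized density matrix apt.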

Anyway, technical details aside, the important point is that there is a form of many-worlds thinking in which we do get the Born probabilities back, by conditioning on a universal prior computed from the decoherence functional. If we try this out as a picture of reality, we now have to make sense of the probabilities associated with the histories. Two possibilities suggest themselves to me (I will neglect subjectivist interpretations of those probabilities): (a) there's only one world, and a world-probability is the primordial probability that that was to be the world which became actual; (b) all the worlds exist, in multiple copies, and the probabilities describe their relative multiplicities. They're both a little odd, but I think either is preferable to the whole "dare to be vague" line of argument.

Comment author: mitchell_porter2 12 May 2008 09:32:06AM 1 point [-]

Eliezer: would you agree with the following, as a paraphrase of the physical ontology you propose?

Quantum theory is just field theory in the infinite-dimensional space formerly known as configuration space. What we thought were "locations in space" are actually directions in configuration space. If I see a thing at a place, it actually means there's a peak in the ψ-field in a certain region of configuration space, a region which somehow corresponds to my seeing of the thing just as much as it corresponds to the thing itself being in that state. And if the peak splits into two, there are now two of me.

I think I get it finally. Not that I believe it now. But expressed that way, I can put it into communication with the other interpretations, as part of the one spectrum of theoretical possibilities. I still strongly doubt that, after you employ a Kolmogorovian razor, theories with branching worlds will be favored over theories without. And I still advance the vagueness objection; but there are extra directions in which this idea might be taken. For example, though the boundaries of a wave are vague, the existence of a peak is not. So a quest for ontologically sharp entities, as the ostensible correlates of 'world' and 'mind', could focus on topological structures in the ψ-field, like inflection points, rather than geometric ones like blobs. Indeed, the whole description in terms of a smoothly varying ψ-field might be dual to a discrete combinatorial one; there are many such correspondences in algebraic geometry.

Comment author: Silas 12 May 2008 02:44:23PM 1 point [-]

So, decoherence, which implies Many Worlds, is the superior scientific theory because it makes the same predictions with strictly fewer postulates, and academic physicists only believe otherwise because of deeply ingrained biases.

There, that didn't take 4,000 words, now, did it?

j/k,j/k, you're good, you're good ;-)

(Don't ban me)

Comment author: Dustin2 12 May 2008 06:23:37PM 2 points [-]

So, decoherence, which implies Many Worlds, is the superior scientific theory because it makes the same predictions with strictly fewer postulates

No. Decoherence as an interpretation is not a scientific theory, it is an ontology. Decoherence as an interpretation does not imply Many Worlds unless the wavefunction is considered to be metaphysically real. That ascription of reality to the wavefunction is not a scientific postulate, it is a metaphysical one. Many worlds does not predict anything -- quantum theory makes the predictions, Many Worlds is an ontology, a reification of that theory.

In any case, my last question was ignored, and I don't suspect that further questions about considering things in a less realistic light will be taken seriously because of the glib dismissal and flippant mischaracterization Eli has given the very serious objections from instrumentalists. But I'm going to throw out another paper on the relational interpretation in the hopes that someone here will take seriously the idea that all of this confusion over which interpretation is the right one comes from an unreasonable commitment to bad metaphysics.

Comment author: Eliezer_Yudkowsky 13 May 2008 01:15:36AM 1 point [-]

Dustin said: "Decoherence as an interpretation does not imply Many Worlds unless the wavefunction is considered to be metaphysically real."

Dustin's referenced paper said:

The relational approach claims that a number of confusing puzzles raised by Quantum Mechanics (QM) result from the unjustified use of the notion of objective, absolute, ‘state’ of a physical system, or from the notion of absolute, real, ‘event’.

The way out from the confusion suggested by RQM consists in acknowledging that different observers can give different accounts of the actuality of the same physical property [6]. This fact implies that the occurrence of an event is not something absolutely real or not, but it is only real in relation to a specific observer. Notice that, in this context, an observer can be any physical system.

Thus, the central idea of RQM is to apply Bohr and Heisenberg’s key intuition that “no phenomenon is a phenomenon until it is an observed phenomenon” to each observer independently. This description of physical reality, though fundamentally fragmented, is assumed in RQM to be the best possible one, i.e. to be complete

The final step in the proof is left as an exercise to the reader.

Comment author: mitchell_porter2 13 May 2008 01:28:10AM 1 point [-]

A further implication of "quantum theory as field theory of configuration space": It means that "spatial configurations" are merely coordinates, labels; and labels are merely conventions. All that really exists in this interpretation are currents in a homogeneous infinite-dimensional space. When such a current passes through a point notionally associated with the existence of a particular brain state, there's no picture of a brain attached anywhere. This means that the currents and their intrinsic relations bear all the explanatory burden formerly borne by spatial configurations in classical physics.

Dustin, what question are you talking about? Question to whom? The only comments I see from you are addressed to Caledonian, in the previous post in this series.

I am afraid that I find the relational interpretation to be gibberish. "The character of each quantum event is only relative to the system involved in the interaction." Can we apply this to Schrödinger's cat? "The cat is only dead relative to its being seen to be dead", perhaps? The cat is dead, alive, neither, or both. It is not "relative".

Comment author: whowhowho 25 January 2013 05:25:18PM 0 points [-]

No, you can't get inconsistent interpretations:

"This relativisation of actuality is viable thanks to a remarkable property of the formalism of quantum mechanics. John von Neumann was the first to notice that the formalism of the theory treats the measured system (S ) and the measuring system (O) differently, but the theory is surprisingly flexible on the choice of where to put the boundary between the two. Different choices give different accounts of the state of the world (for instance, the collapse of the wave function happens at different times); but this does not affect the predictions on the final observations. Von Neumann only described a rather special situation, but this flexibility reflects a general structural property of quantum theory, which guarantees the consistency among all the distinct "accounts of the world" of the different observing systems. The manner in which this consistency is realized, however, is subtle."--SEP

Comment author: Thanatos_Savehn 13 May 2008 07:23:39AM 1 point [-]

It's good to know that somewhere I won the World Series of Poker last year; and the idiot that went all in over my 3x raise with 7-2 off suit and sucked out to beat my AA with is poor and broke somewhere today and that's good to know too. Not that I'm bitter or anything, of course. Not in those other worlds anyway.

Comment author: Eliezer_Yudkowsky 13 May 2008 07:46:11AM 4 points [-]

Live in your own world.

Comment author: Günther_Greindl 14 May 2008 02:43:15PM 1 point [-]

Mitchell,

your concern about the vagueness of the world concept is addressed here:

Everett and Structure (David Wallace) http://arxiv.org/abs/quant-ph/0107144v2

Also, the ontology proposed here fits very nicely with the currently most promising strand of Scientific Realism (also referred to in the Wallace paper), in its ontic variant.

http://plato.stanford.edu/entries/structural-realism/

Cheers, Günther

Comment author: mitchell_porter2 17 May 2008 06:57:57AM 1 point [-]

Günther, I have previously argued that vagueness is not an option for "mind" and "world", even if it is for "baldness" or "heap of sand" or "table". The existence of some sort of a world, with you in it, and the existence of a mind aware of this, are epistemic fundamentals. Try to go vague on those and you are in effect saying there's some question as to whether anything at all exists, or that that is just a matter of definition. Your mind in your world is the medium of your awareness of everything. You are somewhat free to speculate as to the nature of mind and world, but you are not free to say that there's no fact of the matter at all.

This whole situation exists because of the particular natural-scientific models we have. But rather than treat the nonvagueness of mind and world as an extra datum to be used in theoretical construction, instead we get apologetics for the current models, explaining how we can do without exactness in this regard. It's all rationalization, if you ask me.

Comment author: Dihymo 01 June 2008 08:16:41PM 1 point [-]

"Live in your own world." Sure, except when I need the MWI Spaghetti Monster to get the opposite of my result.

Collapse/MWI are the new wave/particle duality. The metaphysical cube fell over and rotated 90 degrees. Collapse/MWI only looks different because the cube looks unchanged.

A superposition doesn't imply that the simpler component waveforms exist. It can also mean you drove the speakers to eleven, reached the limit the fabric of spacetime could handle, and are receiving distortion.

Comment author: Dave4 27 June 2008 10:18:27AM -2 points [-]

Many worlds is far from obviously true. The only logical standpoint is a single universe; there's no evidence against it or even suggesting ANYTHING else.

Bohm is probably the correct one, and has been since 1926, before even Copenhagen was made up.

If you're such an MWI believer, realize it's self refuting faith. In MWI all the atoms making up your brain would be in many universes made to believe it was right while it was wrong.

Comment author: Z._M._Davis 27 June 2008 12:23:47PM 1 point [-]

"realize it's self refuting faith. [...] all the atoms making up your brain would be [...] made to believe it was right while it was wrong."

That's not an argument against the MWI; that's an argument against physics.

Comment author: Dave4 28 June 2008 07:02:03AM -1 points [-]

Only if Many worlds is assumed true, yeah, cause then EVERY possibility is true. Like right now in this universe you read this post. In another you have intercourse with your neighbour's dog. In another your hair just fell off. EVERY physical possibility being true = not science = cop out = end of science.

Anyway, MWI is inconsistent with all forms of realism, so it's an incoherent hypothesis.

Comment author: Dave4 28 June 2008 07:04:36AM 0 points [-]

Please save your breath, don't even try to say "NONO Many worlds is the REALIST" approach to QM. That's Bohm, he came 3 years before Everett, he saved realism in QM. Actually no, de Broglie did in the early 1920's.

Read Travis Norsen's article in Foundations of Physics: "Against realism". It'll show you just HOW deluded MW proponents claim they are.

You can find it on arXiv, I think.

Comment author: Z._M._Davis 28 June 2008 07:14:34AM 0 points [-]

Note the ellipses, Dave.

Comment author: mlionson 17 February 2010 02:19:23AM -2 points [-]

There is no serious quantum physicist who would deny that it is possible to prepare a superposition of states in which a needle penetrates the skin to obtain a blood sugar measurement or does not. This situation could be created, perhaps by briefly freezing a small component of blood and skin on a live person. When this situation predictably resolves into a situation in which the measuring apparatus reads out the result of a blood sugar measurement, though the needle is seen to never penetrate the skin, where was the measurement made?

Where was the bloody needle? Where was the measuring apparatus on which the measurement was made? Where was the arm from which the blood was taken!

Those who do not understand the existence of the multiverse need to provide answers to these simple questions. If the arm is not real in a different universe in which the needle actually went in, how was blood drawn from it and a result reported?

If someone seriously doubts that this scenario can and will be created in the future, which law of physics says that we cannot create this superposition? Which law of physics do you plan to change, to prevent this result, though it has not failed any experiment?

Remarkably, even most of those who deny the existence of the multiverse do not deny that such a blood sugar result could be obtained. This means that virtually all physicists, including those who support Bohm, transactional perspective, Copenhagen, etc., agree that we will be able to obtain a blood sugar result from a needle that never penetrated the arm.

To them I ask again. Where is the arm from which the blood was drawn? Is your hypothesis really that it was drawn in the world of possibility? If so then the map that you call the world of possibility has every component of a real world, including the blood! When the map is as detailed in every respect as the territory, it is the territory. Right?

Comment author: wnoise 17 February 2010 05:48:38AM 4 points [-]

There is no serious quantum physicist who would deny that it is possible to prepare a superposition of states in which a needle penetrates the skin to obtain a blood sugar measurement or does not.

True, but many will say it is impossible for all practical purposes.

When this situation predictably resolves into a situation in which the measuring apparatus reads out the result of a blood sugar measurement, though the needle is seen to never penetrate the skin, where was the measurement made?

The situation resolves into either:

1. The measuring apparatus pierces the skin, has a bloody needle, and reports the result.

2. The measuring apparatus does not pierce the skin, does not have a bloody needle, and does not report the result.

Histories only interfere when they come to the same end result. That doesn't happen in this case.
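This interference rule can be made concrete with a toy calculation (illustrative only; the function and labels here are mine, not part of the formalism): amplitudes are summed only for histories that end in the identical final configuration, while macroscopically distinct end states contribute their probabilities independently.

```python
# Toy illustration: amplitudes for two histories add only when they
# end in the *same* final configuration.
import cmath

def prob_of_outcome(branches):
    """branches: list of (final_configuration, complex amplitude) pairs.
    Amplitudes leading to identical configurations are summed first
    (interference); distinct configurations contribute independently."""
    totals = {}
    for config, amp in branches:
        totals[config] = totals.get(config, 0) + amp
    return {config: abs(amp) ** 2 for config, amp in totals.items()}

# Same end result: the two paths interfere (here, destructively).
same_end = [("detector fires", 1 / 2), ("detector fires", -1 / 2)]
print(prob_of_outcome(same_end))  # {'detector fires': 0.0}

# Different end results (bloody needle vs. clean needle): no interference;
# each macroscopically distinct branch keeps its own probability.
distinct = [("bloody needle", 1 / cmath.sqrt(2)),
            ("clean needle", 1 / cmath.sqrt(2))]
print(prob_of_outcome(distinct))  # both outcomes come out at about 0.5
```

The point of the sketch: a bloody needle and a clean needle are different configurations, so those two histories never recombine and never interfere.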

Comment author: mlionson 17 February 2010 06:30:43AM -2 points [-]

"True, but many will say it is impossible for all practical purposes."

So the truth of the science is determined by the costs of doing the experiment? By the way, experimentalists are getting far better at creating larger and larger superpositions in making quantum computers, and quantum unitary evolution of the state vector has never been shown to be violated. There is never a time when what could have happened can not effect what does happen.

"The situation resolves into either: 1. The measuring apparatus pierces the skin, has a bloody needle, and reports the result. 2. The measuring apparatus does not pierce the skin, does not have a bloody needle, and does not report the result"

That is just not true according to known laws of physics. The blood sugar measuring apparatus can also be in a superposition of blood being analyzed and blood not being analyzed, along with the superposition of the needle. So the result can in fact be recorded and the experiment can be set up so that the skin is (almost) never penetrated.

Copenhagen people call this type of result a "counterfactual". The fact that something could have happened (the needle going in) changes what does happen (the blood sugar result is measured). Except, the whole counterfactual argument becomes nonsensical when one is talking about blood sugar recordings in needles that never penetrate the skin.

This is precisely the type of situation that David Deutsch writes about when he says the following:

"To predict that future quantum computers, made to a given specification, will work in the ways I have described, one need only solve a few uncontroversial equations. But to explain exactly how they will work, some form of multiple-universe language is unavoidable. Thus quantum computers provide irresistible evidence that the Multiverse is real. One especially convincing argument is provided by quantum algorithms ... which calculate more intermediate results in the course of a single computation than there are atoms in the visible universe. When a quantum computer delivers the output of such a computation, we shall know that those intermediate results must have been computed somewhere, because they were needed to produce the right answer. So I issue this challenge to those who still cling to a single-universe worldview: if the universe we see around us is all there is, where are quantum computations performed? I have yet to receive a plausible reply."

Blood sugar results from needles and measuring devices that were in superposition and results of calculations from qubits in superposition are precisely the outcomes we can expect in the future from utilizing the known laws of physics to our advantage.

Where are the calculations performed? Where is the bloody arm?

Those who do not accept the reality of the multiverse really do have to answer these simple questions, yet invariably they cannot.

Comment author: wnoise 17 February 2010 06:50:45AM *  3 points [-]

There is never a time when what could have happened cannot affect what does happen.

You are badly confused. When you describe things as being in superposition, then only what happened (the entire superposition) affects what does happen (in the entire superposition). If you take some sort of "coherent histories" view, then, again, all coherent histories can equally well have been said to happen.

The blood sugar measuring apparatus can also be in a superposition of blood being analyzed and blood not being analyzed, along with the superposition of the needle.

Correct.

So the result can in fact be recorded and the experiment can be set up so that the skin is (almost) never penetrated.

No. We get a superposition of the result being recorded, and the result not being recorded.

Those who do not accept the reality of the multiverse

I do accept the reality of the multiverse. But I know how to use quantum mechanics to make predictions, and I get different ones than you do.

Comment author: Mitchell_Porter 17 February 2010 06:57:10AM 5 points [-]

So the result can in fact be recorded and the experiment can be set up so that the skin is (almost) never penetrated.

No. We get a superposition of the result being recorded, and the result not being recorded.

mlionson may be talking about Elitzur-Vaidman bomb-testing:

On average, this will identify all of the dud bombs, explode two thirds of the usable bombs, and identify one third of the usable bombs without detonating them... Kwiat et al. devised a method, using a sequence of polarising devices, that efficiently increases the yield rate to a level arbitrarily close to one.
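A rough numerical sketch of the quantum-Zeno idea behind the Kwiat et al. improvement quoted above (the real optical setup uses polarizing devices and differs in detail; the function name here is mine): repeat N small polarization rotations of angle pi/(2N). A live bomb effectively measures the photon each cycle, freezing it near the original polarization with probability cos^2(pi/(2N)) per cycle, so the overall yield approaches 1 as N grows.

```python
# Hedged sketch of the quantum-Zeno mechanism: with a live bomb acting
# as a measuring device each cycle, the photon survives all N cycles
# (and the bomb is identified without detonating) with probability
# cos(pi/(2N)) ** (2N), which tends to 1 as N -> infinity.
import math

def zeno_success_probability(n_cycles):
    theta = math.pi / (2 * n_cycles)
    return math.cos(theta) ** (2 * n_cycles)

for n in (1, 10, 100, 1000):
    print(n, zeno_success_probability(n))
# n = 1 recovers the poor yield of the basic scheme;
# large n pushes the identification yield arbitrarily close to one.
```

This is why the quoted yield can be made "arbitrarily close to one": the cost is only more interrogation cycles, not any change to the physics.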

Comment author: wnoise 17 February 2010 07:09:54AM 2 points [-]

That is the most charitable interpretation. I confess that I did not at all think of that.

Of course, given no further details, and hence assuming standard measurement devices and procedures, this sort of thing really is impossible with needles and arms.

Comment author: mlionson 17 February 2010 07:30:04AM -2 points [-]

The Elitzur-Vaidman bomb testing device is an example of a similar phenomenon. What law of physics precludes the construction of a device that measures blood sugar but with the needle (virtually never) penetrating the skin?

Comment author: mlionson 17 February 2010 08:50:41AM -2 points [-]

And if no law of physics precludes something from being done, then only our lack of knowledge prevents it from being done.

So if there are no laws of physics that preclude developing bomb testing and sugar measuring devices, our arguments against this have nothing to do with the laws of physics, but instead have to do with other parameters, like lack of knowledge or cost. So if the laws of physics do not preclude things from happening, we might as well assume that they can happen, in order to learn from the physics of these possible situations.

So for the purposes of understanding what our physics says can happen, it becomes reasonable to posit that devices have been constructed that can test the activity of Elitzur-Vaidman bombs without (usual) detonation or measure blood sugars without needles (usually) penetrating the skin. It is reasonable to posit this because the known laws of physics do not forbid this.

So those who do not believe in the multiverse but still believe in their own rationality do need to answer the question, "Where is the arm from which the blood was drawn?"

Or, individuals denying the possibility of such a measuring device being constructed need to posit a new law of physics that prevents Elitzur-Vaidman bomb testing devices from being constructed and blood sugar measuring devices (that do not penetrate the skin) from being constructed.

If they posit this new law, what is it?

Comment author: Mitchell_Porter 18 February 2010 02:27:33AM 8 points [-]

In the Elitzur-Vaidman bomb test, information about whether the bomb has exploded does not feed into the experiment at any point. When you shoot photons through the interferometer, you are not directly testing whether the bomb would explode or has exploded elsewhere in the multiverse; you are testing whether the sensitive photon detector in the bomb trigger works.

As wnoise said, to directly gather information from a possible history, the history has to end in a physical configuration identical to the one it is being compared with. The two histories represent two paths through the multiverse, if you wish, with a separate flow of quantum amplitude along each path in configuration space, and then the flows combine and add when the histories recombine by converging on the same configuration.

In the case of an exploded bomb, this means that for a history in which the bomb explodes to interfere with a history in which the bomb does not explode, the bomb has to reassemble somehow! And in a way which does not leave any other physical traces of the bomb having exploded.

In the case of your automated blood glucose meter coupled to a quantum switch, for the history where the reading occurs to interfere with the history where the reading does not occur, the reading and all its physical effects must similarly be completely undone. Which is going to be a problem since the needle pricked flesh and a pain signal was probably conveyed to the subject's brain, creating a memory trace. You said something about "briefly freezing a small component of blood and skin on a live person", so maybe you appreciate this need for total reversibility.

In the case of counterfactual measurements which have actually been performed, very simple quantum systems were involved, simple enough that the reversibility, or the maintenance of quantum coherence, was in fact possible.

However, I totally grant you that the much more difficult macro-superpositions appear to be possible in principle, and that this does pose a challenge for single-world interpretations of quantum theory. They need to either have a single-world explanation for where the counterfactual information comes from, or an explanation as to why the macro-superpositions are not possible even in principle.

Such explanations do in fact exist. I'll show how it works again using the Elitzur-Vaidman bomb test.

The bomb test uses destructive interference as its test pattern. Destructive interference is seen in the dark zones in the double slit experiment. Those are the regions where (in a sum-over-histories perspective) there are two ways to get there (through one slit, through the other slit), but the amplitudes for the two ways cancel, so the net probability is zero. The E-V bomb-testing apparatus contains a beam splitter, a "beam recombiner", and two detectors. It is set up so that when the beam proceeds unimpeded through the apparatus, there is total destructive interference between the two pathways leading to one of the detectors, so the particles are only ever observed to arrive at the other detector. But if you place an object capable of interacting with the particle in one of the paths, that will modify the portion of the wavefunction traveling along that path (part of the wavefunction will be absorbed by the object), the destructive interference at the end will only be partial, and so particles will sometimes be observed to arrive at that detector.
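The amplitude bookkeeping just described can be checked with a short calculation (a sketch under the standard Mach-Zehnder convention, where a 50/50 beam splitter adds a factor of i on reflection; exact conventions vary by textbook, and the function name is mine):

```python
# Hedged numerical sketch of the Elitzur-Vaidman interferometer amplitudes.
import cmath

S = 1 / cmath.sqrt(2)  # 50/50 beam splitter amplitude

def bomb_test(bomb_is_live):
    # First beam splitter: |in> -> S|upper> + iS|lower>
    upper, lower = S, 1j * S
    p_explode = 0.0
    if bomb_is_live:
        # A live bomb absorbs the lower-path component (and explodes).
        p_explode = abs(lower) ** 2
        lower = 0
    # Second beam splitter recombines the paths:
    #   |upper> -> S|C> + iS|D>,   |lower> -> iS|C> + S|D>
    amp_C = S * upper + 1j * S * lower
    amp_D = 1j * S * upper + S * lower
    return p_explode, abs(amp_C) ** 2, abs(amp_D) ** 2

print(bomb_test(False))  # dud: ~(0, 0, 1) -- total destructive interference at C
print(bomb_test(True))   # live: ~(1/2, 1/4, 1/4) -- a click at C flags a live bomb
```

With a dud, the two paths cancel perfectly at detector C; placing a live bomb in the lower path removes part of the wavefunction, the cancellation becomes partial, and the formerly dark detector sometimes fires, exactly as the paragraph above describes.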

The many-worlds explanation is that when the object is there, it creates a new subset of worlds where the particle is absorbed en route, this disturbs the balance between worlds, and so now there are some worlds where the particle makes it to the formerly forbidden detector.

Now consider John Cramer's transactional interpretation. This interpretation is all about self-consistent standing waves connecting past and future, via a transaction, a handshake across time, between "advanced" and "retarded" electromagnetic potentials (in the case of light). It's like the Novikov self-consistency principle for wormhole histories; events arrange themselves so as to avoid paradox because logically they have to. That's how I understand Cramer's idea.

So, in the transactional framework, how do we explain the E-V bomb test? The apparatus, the experimental setup, defines the boundary conditions for the standing waves. When we have the interferometer with both pathways unimpeded (or with a "dud bomb", which means that the photon detector in its trigger isn't working, which means the photon passes right through it), the only self-consistent outcome is the one where the photon makes it to the detector experiencing constructive interference. But when there is an object in one pathway capable of absorbing a photon, we have three self-consistent outcomes: photon goes to one detector, photon goes to other detector, photon is absorbed by the object (which then explodes if it's an E-V bomb, but that outcome is not part of the transaction, it's an external causal consequence).

In general, the transactional interpretation explains counterfactual measurement or counterfactual computation through the constraint of self-consistency. The presence of causal chains moving in opposite temporal directions in a single history produces correlations and constraints which are nonlocal in space and time. By modulating the boundary conditions we are exploring logical possibilities, and that is how we probe counterfactual realities.

A completely different sort of explanation would be offered by an objective collapse theory like Penrose's. Here, the prediction simply is that such macro-superpositions do not exist. By the way, in Penrose's case, he is not just arbitrarily stipulating that macro-superpositions do not happen. He was led to this position by a quantum-gravity argument that superpositions of significantly different geometries are dynamically undefined. In general relativity, the rate of passage of time is internal to the geometry, but to evolve a superposition of geometries would require some calibration of one geometry's time against the other. Penrose argued that there was no natural way to do this and suggested that this is when wavefunction collapse occurs. I doubt that the argument holds up in string theory, but anyway, for argument's sake let's consider how a theory like this analyzes the E-V bomb-testing experiment. The critical observation is that it's only the photon detector in the bomb trigger which matters for the experiment, not the whole bomb; and even then, it's not the whole photon detector, but just that particular combination of atoms and electrons which interacts with the photon. So the superposition required for the experiment to work is not macro at all, it's micro but it's coupled to macro devices.

This is a really good case study for quantum interpretation; I had to engage in quite a bit of thought and research to analyze it even this much. But the single-world schools of thought are not bereft of explanations even here.

Comment author: ata 15 February 2011 08:43:36PM *  2 points [-]

Interesting quote from Stephen Hawking, apparently he's on board with MWI as the obvious best guess (and with Bayesian reasoning):

HAWKING: I regard [the many worlds interpretation] as self-evidently correct.

T.F.: Yet some don't find it evident to themselves.

HAWKING: Yeah, well, there are some people who spend an awful lot of time talking about the interpretation of quantum mechanics. My attitude — I would paraphrase Göring — is that when I hear of Schrödinger's cat, I reach for my gun.

T.F.: That would spoil the experiment. The cat would have been shot, all right, but not by a quantum effect.

HAWKING (laughing): Yes, it does, because I myself am a quantum effect. But, look: All that one does, really, is to calculate conditional probabilities — in other words, the probability of A happening, given B. I think that that's all the many worlds interpretation is. Some people overlay it with a lot of mysticism about the wave function splitting into different parts. But all that you're calculating is conditional probabilities.

...though I am a bit confused by how he describes it in the last line — doesn't that sound more like non-realism (or at least "shut up and calculate") than MWI?

Comment author: JGWeissman 15 February 2011 08:48:49PM 0 points [-]

Do you have a link to the source? I would be interested in seeing more context.

Comment author: ata 15 February 2011 08:51:06PM 1 point [-]

It's from here, but no further context was given, unfortunately.

Comment author: p4wnc6 19 June 2011 06:55:07AM *  0 points [-]

...though I am a bit confused by how he describes it in the last line — doesn't that sound more like non-realism (or at least "shut up and calculate") than MWI?

Isn't the point of the "best" explanation (in the Bayesian sense) that it is the one most at peace with the "shut up and calculate" mentality? My reaction, which please feel free to disregard, is that nothing could be more "real" than saying something like, "Okay, here's the theory, it's self-evident given our observations. Great. Now shut up and multiply. Onto the next question."

Comment author: Luke_A_Somers 02 May 2012 02:34:43PM 1 point [-]

It's saying that there is no mysticism inherent in MWI - you can be just as practical about it as you would otherwise.