If you believe the MWI [1] you should care about the future a lot more than the present. Imagine you're considering whether to take a break and eat some chocolate in an hour or in two. You'll get similar enjoyment out of both choices, so you might think it doesn't matter. But if every quantum event between one and two hours from now will branch the universe, and there are lots of such events, in two hours there would be hugely many more yous to experience your chocolate break than in only one hour. The MWI implies we should be willing to make substantial sacrifices in terms of current happiness for the benefit of our future selves. In other words, your preference for investing probably isn't strong enough.

In trying to apply this to altruism you do need to be careful. Some charities are more like spending, in that their benefits are mostly in the present, while others are like investing. If I donate to the Against Malaria Foundation to distribute mosquito nets, the main benefit is preventing current or near-future people from dying. There are probably some long-term effects, like a stronger economy when fewer people are sick, but they're not the goal or the main effect. On the other hand, the Future of Humanity Institute, a charity trying to prevent existential risk, is much more like an investment, in that nearly all its benefit (which is really hard to predict or quantify) goes to future people. Metacharities promoting effective altruism, like 80,000 Hours, Giving What We Can, and GiveWell, are another sort of investment-like charity, influencing people's future giving. And then there's the option of straight-up monetary investing now and donating later.

If you accept the MWI you should be evaluating your altruistic options primarily on their future effects, with more emphasis on farther-future ones.

I also posted this on my blog


[1] Which I still don't know enough about to have an opinion on the truth of.

47 comments

But "measure" is conserved. Your world now has a certain quantity of quantum measure, and that fixed amount of measure is divided up among all its descendant worlds.

(Disclaimer: My attempt to engage with the logic of MWI does not imply any endorsement of MWI ontology, or even any endorsement of the claim that there is a consistent theory here.)

Why should I value people in inverse proportion to the measure of their world?

The idea is that there is a conserved amount of reality-stuff, and each time the world branches, the branches are thinner; they individually contain less reality-stuff than previous generations. You value a branch according to the state of the world in that branch, but then you give a weight to that branch in your calculations which depends on its thickness. This is analogous with expected utility calculations in which utility and probability enter as separate factors, except that "amount of reality-stuff" replaces probability.
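This weighting scheme can be sketched in a few lines. In the toy calculation below (all numbers hypothetical), branch thickness plays exactly the role that probability plays in an ordinary expected-utility calculation, with the constraint that the descendant branches' measures sum to the parent's:

```python
# Toy sketch: valuing branches by conserved measure, in direct analogy
# with expected utility, where "thickness" replaces probability.

def branch_value(branches):
    """branches: list of (measure, utility) pairs whose measures
    sum to the parent world's measure (here normalized to 1)."""
    total_measure = sum(m for m, _ in branches)
    assert abs(total_measure - 1.0) < 1e-9, "measure must be conserved"
    return sum(m * u for m, u in branches)

# One world of measure 1 splits into a thick branch and a thin branch.
branches = [(0.9, 10.0), (0.1, -5.0)]
print(branch_value(branches))  # 8.5
```

The thin branch's bad outcome is not ignored; it is simply weighted by its smaller share of reality-stuff.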

One way this would make sense is if "amount of reality-stuff" was simply "number of copies of that world". Then you would be treating, and valuing, each world equally, but some worlds would be duplicated in excess of others, and so that type of world would count for more in your decision-making, not because you valued it more, but simply because it existed more often in the part of the multiverse downstream from the present moment.

Incidentally, this way of thinking would also apply to a Bohmian multiverse, in which there are no splitting worlds, but in which the Bohmian histories sometimes converge and sometimes diverge (that is, while always remaining separate self-contained worlds, sometimes they approach each other in configuration space, and sometimes they move apart). In this framework, the quantum probability density of a configuration would be the density with which the Bohmian histories cluster around that point in configuration space, and the wavefunction works as in the Copenhagen interpretation, it is a way for someone in an individual Bohmian world to reason under uncertainty. (Though if you chose to identify with the whole ensemble of your subjective duplicates throughout the Bohmian multiverse, you could to some extent preserve the MWI perspective whereby you are causally responsible for a whole ensemble of worlds, and not just for the one that this instance of you inhabits.)

One indication that "conservation of reality-stuff" may be the right principle, rather than "worlds increasing in number all the time", is that quantum mechanics only indicates relative frequencies of different outcomes. It can tell you that outcome A is twice as likely as outcome B, but it doesn't tell you the absolute number of A-worlds and B-worlds. If you had a world-splitting model, you would be free to invent your own extra law of nature about how the absolute number of worlds changes over time - bearing in mind that identical worlds can merge, so the number of worlds can decrease as well as increase. So long as the relative frequencies match quantum theory, it would be impossible to tell whether an equal-probability split in two directions produced 2 worlds, 2 billion worlds, or (2 times infinity) worlds; but according to your moral calculus, this unobservable absolute number of worlds would be extremely significant.
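To illustrate the unobservability point with toy numbers: rescaling the absolute number of worlds by any constant leaves every relative frequency, and hence every prediction, unchanged, even though a world-counting moral calculus would treat the two cases very differently:

```python
# Toy illustration: relative frequencies are invariant under rescaling
# the (unobservable) absolute number of worlds.

def relative_frequencies(world_counts):
    total = sum(world_counts.values())
    return {outcome: n / total for outcome, n in world_counts.items()}

# "A is twice as likely as B" could be 2 worlds vs 1, or billions of each:
small = {"A": 2, "B": 1}
big = {"A": 2_000_000_000, "B": 1_000_000_000}

print(relative_frequencies(small))  # {'A': 0.666..., 'B': 0.333...}
print(relative_frequencies(big))    # identical frequencies
```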

Returning to what people actually say about MWI, there is a tendency for MWI advocates not to talk of literally duplicated worlds, but to nonetheless treat the "quantum measure" of a world in exactly the same way that you would treat the number of duplicates of a world. If one branch has greater "measure" than another branch, then there's more of it, or it's more real, or some similar phrase will be employed. The fundamental reason for this is to be able to explain why observed probabilities are not uniform. If you break up the wavefunction into a set of basis functions (that are the "worlds") and you then treat the basis functions equally, that means that each possible world exists equally and should be of equal probability. But some things happen more often than others, to a degree that is described by the measure, so people are forced to say that some worlds count for more than others.

The reason that people don't normally justify this unequal treatment, as due to unequal numbers of duplicates being produced at branchings, is that the wavefunction contains no such phenomenon. You could postulate that the ontologically correct decomposition of the wavefunction into individual worlds always assigns equal amounts of measure to all the individual worlds, but this would involve postulating extra structure in the theory, which MWI advocates are loath to do.

Personally I find MWI advocates to be shockingly indifferent to the details of how worlds split. If the notion of world is to be taken seriously, it ought to be a mathematically exact notion. They could look for turning points (in the calculus sense) in the wavefunction, in order to identify objective boundaries and objective transitions from one world to two, e.g. when one local minimum splits into two, but once again, no-one ever follows that line of inquiry. In general, the many-worlds interpretation has a similar psychological function to the Copenhagen interpretation, namely, it's a fuzzy concept that sounds like it might make sense, so it allows users of quantum mechanics to get on with their lives and not worry about foundations. And so whole decades can pass without physicists being forced to confront the question of what the state of the unobserved electron is, or of exactly when it is that one world becomes two.

Thanks for writing this. Convinced.

Personally I find MWI advocates to be shockingly indifferent to the details of how worlds split. If the notion of world is to be taken seriously, it ought to be a mathematically exact notion.

This might be nice, but we have to deal with what's actually the case. Wave packets simply don't divide into two at one exact instant. And if "it all adds up to normality", it's not clear what use there is in introducing an arbitrary definition that allows you to say that a wave function represents one world at time t and two worlds at time t+epsilon. Whatever aspects of the wave function I care about, they only change by an order-epsilon quantity during this time interval. We could introduce a mathematical function that takes in a wave function and outputs a discrete integer we call "number of worlds," but I wouldn't care very much about the output of this function. Even if I accepted that the "number of worlds" had executed a discrete jump from one to two, the worlds wouldn't have diverged in any aspect by more than an order-epsilon difference.

Maybe we should call it the "Many-Blob Interpretation." That cries out for much less mathematical exactness.
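The continuity claim is easy to check numerically. A minimal sketch (hypothetical setup: two unit-width Gaussian amplitudes separating in one dimension) shows the overlap between the "blobs" decaying smoothly, with no separation at which one world discontinuously becomes two:

```python
import math

def gaussian_amp(x, center):
    """L2-normalized Gaussian amplitude of unit width centered at `center`."""
    return (2 * math.pi) ** -0.25 * math.exp(-((x - center) ** 2) / 4)

def overlap(separation, xs):
    """Numerical <psi_left|psi_right> for two blobs `separation` apart."""
    dx = xs[1] - xs[0]
    return dx * sum(gaussian_amp(x, -separation / 2) * gaussian_amp(x, separation / 2)
                    for x in xs)

xs = [-20 + 0.01 * i for i in range(4001)]  # integration grid on [-20, 20]
for d in [0.0, 1.0, 2.0, 4.0, 8.0]:
    print(d, round(overlap(d, xs), 4))
# The overlap decays smoothly (analytically exp(-d**2 / 8)): the "two worlds"
# emerge gradually, never at one exact instant.
```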

And so whole decades can pass without physicists being forced to confront the question of what the state of the unobserved electron is, or of exactly when it is that one world becomes two.

Both Copenhagen and MWI answer that "the state of an unobserved electron" is given by its wave function. Classical intuitions might suggest that an unobserved electron ought to have a definite, if unknown, position, but that's a failure of classical intuitions, not of Copenhagen or MWI.

we have to deal with what's actually the case

Making excuses for an incomplete theory is not my idea of how to deal with reality. You can't just assert that a theory adds up to normality, you have to show that it does. And saying that you don't care about the details has no bearing on the logical need for such details to exist in a complete theory.

Both Copenhagen and MWI answer that "the state of an unobserved electron" is given by its wave function.

In the original version of Copenhagen, the wavefunction is the state of the observer's knowledge, not the state of the electron. It's when an observable takes a definite value that you can talk about the electron having a state. "Copenhagen wavefunction realism" - the theory that the wavefunction is the physical state and that it is caused to collapse by "observation" - is a later development, possibly due to von Neumann.

Classical intuitions might suggest that an unobserved electron ought to have a definite, if unknown, position, but that's a failure of classical intuitions

The question was not, what is the electron's position; the question was, what is the electron's state. You are free to say that the electron has no position at a certain time, but if you think that it still exists, it had better have some property. And this is an issue on which original-Copenhagen was silent.

Making excuses for an incomplete theory is not my idea of how to deal with reality. You can't just assert that a theory adds up to normality, you have to show that it does.

What sorts of explanations should MWI provide in order to be complete, and in what sense are you worried that MWI does not add up to normality?

My point above was that it does add up to normality. When worlds are not splitting, we just have standard QM that all the interpretations agree upon. And when worlds are splitting, no valuation you make about a wave function actually depends on the exact moment that two worlds split. But you write:

saying that you don't care about the details has no bearing on the logical need for such details to exist in a complete theory.

I think this analogy illustrates my dissatisfaction with this objection:

Imagine for a moment a classical universe with the following interesting physics: instead of point particles, matter is composed of extended blobs. The blobs have no definite boundary; they just have exponential tails that trail off as you move away from the center of the blob. The nontrivial physics arises from the fact that the blobs can collide and merge with each other, fission into separate blobs, bounce off each other, etc., according to some underlying field equation (the blobs are bound configurations of field energy). But note that because the blobs have no definite boundaries, any merging/fissioning/scattering process proceeds over a finite length of time, instead of happening instantaneously.

Physicists in this universe develop the theory of blobs and eventually discover the underlying field equation governing all aspects of blob physics. But some insist that the interpretation of this equation is incomplete, for it doesn't give a mathematically exact answer to the question "when does a fissioning blob become two blobs?"

In my opinion, these objectors are misguided. Yes, the interpretation does not answer this. For one thing, the question has no definite answer, because blobs have no definite boundaries and fission over a finite period of time in a continuous fashion. But more importantly, it's not a question the theory should answer. Blobs are not fundamental objects: the fundamental object is the underlying field. The theory rightly speaks only of the underlying field, and does not answer the question of "when one blob becomes two" any more than it answers the question of "when one biological cell becomes two." Blobs turned out to be merely a useful organizing principle for understanding the behavior of the underlying field.

The analogy to QM is fairly exact, I think. In QM we also have found the Schrodinger equation that describes in full detail the evolution of the wave function, the underlying field. We find that blobs of probability amplitude in the wave function tend to fission into separate blobs. We call the blobs "worlds." It would be a mistake to want the theory to answer the question of "when one world becomes two" in exactly the same way it would be a mistake to expect the theory in the analogy to answer "when one blob becomes two."

"Copenhagen wavefunction realism" - the theory that the wavefunction is the physical state and that it is caused to collapse by "observation" - is a later development, possibly due to von Neumann.

OK--I accept this.

The problem is that there is a copy of you inside each "blob". So if there is no objective moment of splitting, then during the period of fission, there is no objective answer to the question, is there one copy of you in existence, or are there two copies of you in existence? That is absurd: any instance of your consciousness is "inside" exactly one copy, and there is an instance of consciousness inside each copy that exists. So saying that the number of copies is not an objective fact implies that whether or not a particular conscious being exists is not an objective fact, which in turn implies that whether or not you exist is a question without an objective answer.

So if there is no objective moment of splitting, then during the period of fission, there is no objective answer to the question, is there one copy of you in existence, or are there two copies of you in existence?

What's wrong with saying that there are (e.g.) 1½ copies of me in existence? That's an objective answer.

which implies that whether or not you exist is a question without an objective answer, which is absurd.

Suppose that somebody is hooked up to a life-support device so that they can continue to live even if the (e.g.) breath-regulating areas in their brain cease to function. Now start selectively deleting their brain cells one by one, until there are none left. At which point do they cease to exist?

As far as I know, there isn't any such a point: their consciousness will just gradually fade away, and there isn't an objective answer to when exactly that will happen.

If nothing else, you can't control the worlds, just the measure. Every possible world exists, but they all have different measure.

We don't really know what the deal with Born's rule is, but I strongly suspect that whatever reason there is for it is a good reason to value worlds of higher measure. Maybe there are more worlds with higher measure, and there are just so many worlds that it looks like one continuous wave (sort of like how a water ripple is made of discrete atoms). Maybe it's something nobody has thought of yet.

While seconding the issue of measure vs. worlds, I'll also say that people should not have a strong a priori expectation that a moral theory based on evolved and culturally transmitted heuristics will be preserved in a final ontology or be able to take exotic cosmologies into its domain.

Posts like this are yet another reason why I see the quantum sequence in general and the MWI idea in particular as a net negative to this forum. Instead of discussing MWI as a neat idea which may or may not be a useful model some day, EY forcefully rams it down the readers' throat, leaving vulnerable souls indoctrinated and confused.

Vivid though that image is (subtle trolling... go!), it's possible that merely activating the idea was all that happened in this case. The "branching tree" analogy is common in popular culture.

Many-worlds implies the future matters more

No it doesn't. Thinking 'Many Worlds' must change your preferences at all probably means you do not understand the concept.

In this particular instance the mistake is in counting worlds instead of weighing them by measure.

I have to re-read your post, but off the top of my head it's hard to imagine that using measure will dictate my preferences (though it might affect them). E.g. I find the thought of some of my dead friends having better lives somewhere else comforting, and it seems this might be translatable into a preference (though I haven't tried yet).

I don't see anything in the link that's relevant to the question at hand.

The pointer to read about measure was helpful, though.

in two hours there would be hugely many more yous to experience your chocolate break than in only one hour.

But wouldn't hugely many more have experienced the chocolate break if I did it earlier?

But if every quantum event between one and two hours from now will branch the universe, and there are lots of such events, in two hours there would be hugely many more yous to experience your chocolate break than in only one hour.

Worlds join in the MWI too. The splitting is commonly thought to be more frequent than the joining, but the joining rate depends on the initial configuration, which is unknown - so the issue is rather speculative.

Whether this "splitting" is more common or not makes no difference to my preferences.

This is a good (though not new) reductio ad absurdum of the "worlds can be counted and should obviously all be treated as having the same measure" thinking that leads to Eliezer's claims that the Born rule is mysterious, and to theories like Mangled Worlds.

I doubt that Yudkowsky and Hanson believe that "worlds can be counted and should obviously all be treated as having the same measure".

I'm not confident that my memory here is correct, but what do you think they think instead?

I can't really speak for them, but I somewhat believe that they believe what I believe, i.e., that the many-worlds interpretation of nonrelativistic QM models the world as a function from a (usually nondiscrete) configuration space to the complex numbers, and the norm-squared of that function is the measure that predicts the outcomes of experiments. So there's no well-defined way to count worlds in general, although in some circumstances it may be helpful to refer to a small region of the configuration space as a "world", and different "worlds" can have different measures.

From the Quantum Physics Sequence:

Decoherence is implicit in quantum physics, not an extra postulate on top of it, and quantum physics is continuous. Thus, "decoherence" is not an all-or-nothing phenomenon—there's no sharp cutoff point. Given two blobs, there's a quantitative amount of amplitude that can flow into identical configurations between them. This quantum interference diminishes down to an exponentially tiny infinitesimal as the two blobs separate in configuration space.

I'm still confused by the Born rule. (The worlds where the Born rule makes good predictions have a lot of L^2 measure. But they have very little L^1 measure! The lion's share of the L^1 measure is held by worlds where an L^1 version of the Born rule holds. Why does our experience accord with the Born rule?) But I have more reading to do.

I don't know much about Mangled Worlds.
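To make the norm point concrete, here is a small sketch (illustrative amplitudes only): the same pair of amplitudes yields different candidate probabilities depending on which power p you normalize by, and only p = 2 reproduces the Born rule, so world-counting by itself doesn't privilege any particular p:

```python
def pnorm_weights(amplitudes, p):
    """Candidate probabilities obtained by normalizing |amplitude|**p."""
    powers = [abs(a) ** p for a in amplitudes]
    total = sum(powers)
    return [w / total for w in powers]

amps = [0.6, 0.8]  # already L2-normalized: 0.36 + 0.64 = 1
print(pnorm_weights(amps, 1))  # ~[0.4286, 0.5714]  L1 weighting
print(pnorm_weights(amps, 2))  # ~[0.36, 0.64]      Born rule
print(pnorm_weights(amps, 3))  # ~[0.2967, 0.7033]  L3 weighting
```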

What privileges L^1 measure if not world-counting? I just looked up mangled worlds again, and Hanson explicitly uses world-counting; at least, he seems to admit that worlds don't have well-defined numbers, but he thinks this isn't a problem for reasons I don't really understand.

(The point of my aside about the Born probabilities is that neither the L^1 norm nor the L^2 norm are privileged. (At least, I don't understand why our experiments favor one of them over the other.) I could just as easily have talked about the L^3 norm.)

Looking at that Mangled Worlds page, I see that you're right — Hanson is talking about a finite number of worlds. And as far as I can tell, every world that exists is equally probable, which would correspond to an L^0 norm? I don't really understand the proposal, though.

Sorry, I guess I got confused about what an L^1 norm meant. My non-confident recollection is that Eliezer believes, not just that we have no idea how to assign measure to worlds, but that our best idea for assigning measure to worlds actively conflicts with the Born rule, except if there exists some sort of mechanism like world-mangling. His endorsement of world-mangling as a possible solution suggests to me that he agrees with its world-counting assumption.

Restructuring your choice to reflect an implication of your claim: You have a piece of chocolate you can eat now, or in one hour you can repeat this decision.

At what point does your utility function say - "Eat this chocolate!"?

You could make the same point with wine in classical physics. It tastes better if you wait longer, but how long should you wait?

You don't even need an unbounded utility function. Perhaps the value at time t is 1-2^-t.

MWI is, in my opinion, preferable to the standard interpretation because it explains more. However, if it ever becomes possible to empirically test different quantum interpretations, you should be prepared to drop it like it's hot.

I don't think MWI works that way, but even if it did, I don't think your logic necessarily follows.

Let's say that worlds duplicate every 2 seconds. Every minute that turns each single world into a billion. If I wait one minute to eat a cookie, that means that anywhere from 0 to a billion copies of 'me' get the utility of a cookie when they eat it. However, if I eat a cookie now, I'm securing the utility for every one of those billion possible Everett Branches. The utility I get now is multiplied into every future branch.

If utility were money, you'd be up against 100,000,000,000% inflation each minute. However, your interest rate is 100,000,000,000% per minute too. In the end, it just evens out, so that MWI shouldn't change your time preference either way.
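That cancellation can be checked with toy numbers (branching factor and horizon both hypothetical): under the "utility propagates into every descendant branch" accounting, the tally at a fixed horizon is the same whenever you eat, whereas counting only the worlds present at the moment of eating reproduces the post's later-is-better conclusion:

```python
BRANCHES_PER_MINUTE = 1_000_000_000  # hypothetical: a billion-fold split per minute

def worlds_at(minute):
    """Number of branches in existence at the given minute (naive counting)."""
    return BRANCHES_PER_MINUTE ** minute

def act_only(eat_minute):
    """The post's accounting: utility counts once per world at the moment of eating."""
    return worlds_at(eat_minute)

def inherited(eat_minute, horizon=3):
    """This comment's accounting: utility secured at eat_minute is inherited by
    every branch descended from it, and tallied at a fixed future horizon."""
    return worlds_at(eat_minute) * BRANCHES_PER_MINUTE ** (horizon - eat_minute)

print(act_only(1) < act_only(2))     # True: waiting looks better
print(inherited(1) == inherited(2))  # True: the timing washes out
```

The "inflation" (more branches at the horizon) and the "interest" (the act reaching more descendants the earlier it happens) are the same factor, so they cancel exactly.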

if I eat a cookie now, I'm securing the utility for every one of those billion possible Everett Branches. The utility I get now is multiplied into every future branch.

The pleasure of eating chocolate is temporary, more in the act than in the memory. If you eat it in a minute, the billion future yous get the joy of the act and then later the joy of the memory, while if you eat it now, current you gets the joy of the act and the billion future yous only get the joy of the memory.

Isn't this already a consequence of population growth*? If we believe population growth will continue, things that will last a long time and impact the world are of much higher (or lower, if they're negative!) value than things that just impact people alive today.

*Or even long-term stability of future populations. If we think society will stabilize at the same number of people as today, but will go on for millenia, things that help people over millenia will cause much more good.

Isn't this already a consequence of population growth

Population growth does shift us some towards investing over spending, but if the world were splitting on every quantum event we would need to shift much more.