Followup to: Poll: What value extra copies?
For those of you who didn't follow Eliezer's Quantum Physics Sequence, let me reiterate that there is something very messed up about the universe we live in. Specifically, the Many Worlds Interpretation (MWI) of quantum mechanics states that our entire classical world gets copied something like 10^(40±20) times per second.1 You are not a line through time, but a branching tree.
If you think carefully about Descartes' "I think therefore I am" style of skepticism, and approach your stream of sensory observations from that skeptical point of view, you should notice that being just one branch-line in a person-tree would feel exactly the same as being a unique person-line through time: looking backwards, a tree looks like a line, and your memory can only look backwards.
However, the rules of quantum mechanics mean that the integral of the modulus squared of the amplitude density, ∫|Ψ|², is conserved in the copying process. Therefore, the tree that is you has branches that get thinner (where thickness is ∫|Ψ|² over the localized density "blob" that represents that branch) as they branch off. In fact, they get thinner in such a way that if you gathered them together into a bundle, the bundle would be as thick as the trunk it came from.
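As a minimal worked example of that bundling claim (my own notation; only the Born rule and unitarity are assumed): if a parent branch of thickness w splits into sub-branches with relative amplitudes c_i, then the sub-branch thicknesses are w_i = |c_i|² w, and

```latex
% A parent branch of thickness w = \int |\Psi|^2 splits into sub-branches
% with relative amplitudes c_i, where unitarity gives \sum_i |c_i|^2 = 1.
\[
  w_i = |c_i|^2\, w
  \qquad\Longrightarrow\qquad
  \sum_i w_i \;=\; \Bigl(\sum_i |c_i|^2\Bigr)\, w \;=\; w .
\]
```

Each sub-branch is thinner than the trunk, but their thicknesses sum back to the trunk's.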
Now, since each copying event creates a slightly different classical universe, the copies in each of the sub-branches will each experience random events going differently. This means that over a timescale of decades they will be totally "different" people, with different jobs, probably different partners, and different homes, though they will (of course) have your DNA, approximately your physical appearance, and an identical history up until the time they branched off. For timescales on the order of a day, I suspect that almost all of the copies will be virtually identical to you, even down to going to bed at the same time, having exactly the same schedule that day, thinking almost all of the same thoughts, etc.
MWI mixes copies and probability
When a "random" event happens, either the event was pseudorandom (like a large digit of pi) or it was a copy event, meaning that both (or all) outcomes were realized elsewhere in the wavefunction. This means that in many situations, when you say "there is a probability p of event X happening", what this really means is "proportion p of my copy-children will experience X".
LW doesn't care about copies
In Poll: What value extra copies?, I asked what value people placed upon non-interacting extra copies of themselves, asking both about lock-step identical and statistically identical copies. The overwhelming opinion was that neither was of much value. For example, Sly comments:2
"I would place 0 value on a copy that does not interact with me. This might be odd, but a copy of me that is non-interacting is indistinguishable from a copy of someone else that is non-interacting. Why does it matter that it is a copy of me?"
How to get away with attempted murder
Suppose you throw a grenade with a quantum detonator at Sly. The detonator samples a qubit in an even superposition of states 1 and 0. On a 0 it explodes, instantly vaporizing Sly (it's a very powerful grenade). On a 1, it defuses the grenade and dispenses a $100 note. Suppose that you throw it and observe that it doesn't explode:
(A) does Sly charge you with attempted murder, or does he thank you for giving him $100 in exchange for something that had no value to him anyway?
(B) if he thanks you for the free $100, does he ask for another one of those nice free hundred-dollar-note dispensers? (This is the "quantum suicide" option.)
(C) if he says "the one you've already given me was great, but no more please", then presumably if you throw another one against his will, he will thank you for the free $100 again. And so on ad infinitum. Sly is temporally inconsistent if this option is chosen. (A toy calculation of what repeated throws do to Sly's measure is sketched below.)
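For the measure bookkeeping behind repeated throws, here is a toy calculation (my own illustrative sketch; the 50/50 split and the $100 payout are just the numbers from the thought experiment above):

```python
# Toy bookkeeping for repeated quantum-grenade throws.
# Each throw splits Sly's measure 50/50: half of it is vaporized,
# half survives and collects $100.

surviving_measure = 1.0   # fraction of Sly's original |Psi|^2 thickness
payout = 0                # dollars held by each surviving copy

for throw in range(1, 11):
    surviving_measure *= 0.5   # the "0" branches are vaporized
    payout += 100              # the "1" branches each pocket $100
    print(f"after throw {throw}: surviving measure = {surviving_measure:.4f}, "
          f"each survivor holds ${payout}")

# If the vaporized branches count for nothing, every surviving copy just looks
# richer and richer; if they count, Sly has lost almost all of his measure by
# throw 10.
```

Whether the vanished measure matters is exactly what options (A) through (C) disagree about.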
The punch line is that the physics we run on gives us a very strong reason to care about the welfare of copies of ourselves, which is (according to my survey) a counter-intuitive result.
EDIT: Quite a few people are biting the quantum suicide bullet. I think I'll have to talk about that next. Also, Wei Dai summarizes:
Another way to think about this is that many of us seem to share the following three intuitions about non-interacting extra copies, out of which we have to give up at least one to retain logical consistency:
1. We value extra copies in other quantum branches.
2. We don't value extra copies that are just spatially separated from us (and are not too far away).
3. We ought to value both kinds of copies the same way.
- Giving up 1 is the position of "quantum immortality".
- Giving up 2 seems to be Roko's position in this post.
- Giving up 3 would imply that our values are rather arbitrary: there seems to be no morally relevant difference between these two kinds of copies, so why should we value one and not the other? But according to the "complexity of value" position, perhaps this isn't really a big problem.
I might add a fourth option that many people in the comments seem to be going after: (4) We don't intrinsically value copies in other branches, we just have a subjective anticipation of becoming them.
1: The copying events are not discrete; rather, they consist of a continuous deformation of probability amplitude in state space. But the shape of that deformation looks a lot like a continuous approximation to a discrete copying event, and the classical rules of physics approximately govern the time evolution of the "copies" as if they were completely independent; this last statement is the phenomenon of decoherence. The uncertainty in the copying rate is due to my ignorance, and I would welcome a physicist correcting me.
2: There were many others who expressed roughly similar views, and I don't hold it as a "black mark" to pick the option that I am advising against; rather, I encourage people to honestly put forward their opinions in a spirit of communal learning.
I reject "3" (We ought to value both kinds of copies the same way), but don't think that it is arbitrary at all. Rather it is based off of an important aspect of our moral values called "Separability." Separability is, in my view, an extremely important moral intuition, but it is one that is not frequently discussed or thought about because we encounter situations where it applies very infrequently. Many Less Wrongers, however, have expressed the intuition of separability when stating that they don't think that non-causally connected parallel universe should affect their behavior.
Separability basically says that how connected someone is to certain events matters morally in certain ways. There is some debate as to whether this principle is a basic moral intuition or whether it can be derived from other intuitions; I am firmly in favor of the former.
That probably sounds rather abstract, so let me give a concrete example: Imagine that the government is considering taking an action that will destroy a unique ecosystem. There are millions of environmentalists who oppose this action, protest against it, and lobby to stop it. Should their preference that the ecosystem not be destroyed be taken into consideration when calculating the utility of this situation? Have they, in a sense, been harmed if the ecosystem is destroyed? I'd say yes, and I think a lot of people would agree with me.
Now imagine that in a distant galaxy there exist approximately 90 quadrillion alien brain emulations living in a Matrioshka Brain. All these aliens are fervent environmentalists and have a strong preference that no unique ecosystem ever be destroyed. Assume we will never meet these aliens. Should their preference that the ecosystem not be destroyed be taken into consideration when calculating the utility of this situation? Have they, in a sense, been harmed if the ecosystem is destroyed? I'd say no, even if Omega told me they existed.
What makes these two situations different? I would say that in the first situation the environmentalists possess strong causal connections to the ecosystem in question, while the aliens do not. For this reason the environmentalists' preferences are morally relevant, while the aliens' are not.
Separability is really essential for utilitarianism to avoid paralysis. After all, if everyone's desires count equally when evaluating the morality of situations, regardless of how connected they are to them, then there is no way of knowing whether you are doing right or not. Somewhere in the universe there is doubtless a vast number of people who would prefer you not do whatever it is you are doing.
So how does this apply to the question of creating copies in my own universe, versus desiring that a copy of me in another universe not be destroyed by a quantum grenade?
Well, on the issue of whether or not to create identical copies in my own universe: I would not spend a cent trying to do that. I believe in everything Eliezer wrote in In Praise of Boredom and place great value on having new, unique experiences. Creating lockstep copies of me would be counterproductive, to say the least.
However, at first this approach seems to run into trouble in MWI. If there are so many parallel universes, it stands to reason that I'll be duplicating an experience some other me has already had no matter what I do. Fortunately, the Principle of Separability allows me to rescue my values. Since all those other worlds lack any causal connection to me, they are not relevant in determining whether I am living up to the Value of Boredom.
This allows us to explain why I am upset when the grenade is thrown at me. The copy that was killed had no causal connection to me. Nothing I or anyone else did resulted in his creation, and I cannot really interact with him. So when I assess the badness of his death, I do not include my desire to have unique, nonduplicated experiences in my assessment. All that matters is that he was killed.
So rejecting (3) does not make our values arbitrary, not in the slightest. There is an extremely important moral principle behind doing so, a moral principle that is essential to our system of ethics. Namely, the Principle of Separability.
You say that "separability is really essential for utilitarianism to avoid paralysis" but also that it "is not frequently discussed or thought about because we encounter situations where it applies very infrequently."
I have trouble understanding how both of these can be true. If situations where it applies are very infrequent, how essential can it really be?
To avoid paralysis, utilitarians need some way of resolving intersubjective differences in utility calculation for the same shared world-state. Using "separability" to disc...