And fear not lest Existence closing your
Account, should lose, or know the type no more;
The Eternal Saki from the Bowl has pour'd
Millions of Bubbles like us, and will pour.
When You and I behind the Veil are past,
Oh, but the long long while the World shall last,
Which of our Coming and Departure heeds
As much as Ocean of a pebble-cast.
-- Omar Khayyam, Rubaiyat
A CONSEQUENTIALIST VIEW OF IDENTITY
The typical argument for cryonics says that if we can preserve brain data, one day we may be able to recreate a functioning brain and bring the dead back to life.
The typical argument against cryonics says that even if we could do that, the recreation wouldn't be "you". It would be someone who thinks and acts exactly like you.
The typical response to the typical argument against cryonics says that identity isn't in specific atoms, so it's probably in algorithms, and the recreation would have the same mental algorithms as you and so be you. The gap in consciousness of however many centuries is no more significant than the gap in consciousness between going to bed at night and waking up in the morning, or the gap between going into a coma and coming out of one.
We can call this a "consequentialist" view of identity, because it's a lot like the consequentialist views of morality. Whether a person is "me" isn't a function of how we got to that person, but only of where that person is right now: that is, how similar that person's thoughts and actions are to my own. It doesn't matter if we got to him by having me go to sleep and wake up as him, or got to him by having aliens disassemble my brain and then simulate it on a cellular automaton. If he thinks like me, he's me.
A corollary of the consequentialist view of identity says that if someone wants to create fifty perfect copies of me, all fifty will "be me" in whatever sense that means something.
GRADATIONS OF IDENTITY
An argument against cryonics I have never heard, but which must exist somewhere, says that even the best human technology is imperfect, and likely a few atoms here and there - or even a few entire neurons - will end up out of place. Therefore, the recreation will not be you, but someone very very similar to you.
And the response to this argument is "Who cares?" If by "me" you mean Yvain as of 10:20 PM 4th April 2012, then even Yvain as of 10:30 is going to have some serious differences at the atomic scale. Since I don't consider myself a different person every ten minutes, I shouldn't consider myself a different person if the resurrection-machine misplaces a few cells here or there.
But this is a slippery slope. If my recreation is exactly like me except for one neuron, is he the same person? Signs point to yes. What about five neurons? Five million? Or on a functional level, what if he blinked at exactly one point where I would not have done so? What if he prefers a different flavor of ice cream? What if he has exactly the same memories as I do, except for the outcome of one first-grade spelling bee I haven't thought about in years anyway? What if he is a Hindu fundamentalist?
If we're going to take a consequentialist view of identity, then my continued ability to identify with myself even if I naturally switch ice cream preferences suggests I should identify with a botched resurrection who also switches ice cream preferences. The only solution here that really makes sense is to view identity in shades of gray instead of black-and-white. An exact clone is more me than a clone with different ice cream preferences, who is more me than a clone who is a Hindu fundamentalist, who is more me than LeBron James is.
BIG WORLDS
There are various theories lumped together under the title "big world".
The simplest is the theory that the universe (or multiverse) is Very Very Big. Although the universe is only about 14 billion years old, which limits how far away we can see, inflation allows the universe as a whole to get around the speed-of-light restriction; it could be very large or possibly infinite. (Even the observable universe is about 93 billion light years across, since expansion carries distant regions beyond the naive light-travel-time estimate.) I don't have the numbers available, but I remember a back-of-the-envelope calculation being posted on Less Wrong once about exactly how big the universe would have to be to contain repeating patches about the size of the Earth. That is, just as the first ten digits of pi, 3141592653, must repeat somewhere else in pi's expansion if pi is "normal" - statistically patternless - as is widely conjectured, and just as I would believe this with high probability even if the expansion were not infinite but just very very long, so the arrangement of atoms that makes up Earth would recur in an infinite or very very large universe. This arrangement would obviously include you, exactly as you are now. A much larger class of Earth-sized patches would include slightly different versions of you, like the one with different ice cream preferences. This would also work, as Omar Khayyam mentioned in the quote at the top, if the universe were to last forever or a very very long time.
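A rough sketch of how such a calculation can go (this follows Max Tegmark's well-known estimate; the numbers are his, and may differ from whatever was posted on Less Wrong): a Hubble volume contains at most about $10^{118}$ protons, so the number of distinguishable ways to arrange its matter is bounded by something like

$$N_{\text{configs}} \lesssim 2^{10^{118}} \approx 10^{10^{118}},$$

and by the pigeonhole principle, a universe containing substantially more than $10^{10^{118}}$ Hubble volumes must contain two that are particle-for-particle identical. Tegmark estimates the nearest exact copy of you at no more than roughly $10^{10^{29}}$ meters away.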
The second type of "big world" is the one posited by the Many Worlds interpretation of quantum mechanics, in which each quantum event causes the Universe to split into several branches. Because quantum events can influence larger-scale events, and because each branch continues branching, some of these branches could be similar to our universe but with observable macro-scale differences. For example, there might be a branch in which you are the President of the United States, or the Pope, or died as an infant. Although this sounds like a silly popular-science version of the principle, I don't think it's unfair or incorrect.
The third type of "big world" is modal realism: the belief that all possible worlds exist, maybe in proportion to their simplicity (whatever that means). We notice the existence of our own world only for indexical reasons: that is, just as there are many countries, but when I look around me I only see my own; so there are many possibilities, but when I look around me I only see my own. If this is true, it is not only possible but certain that there is a world where I am Pope and so on.
There are other types of "big worlds" that I won't get into here, but if any type at all is correct, then there should be very many copies of me or people very much like me running around.
CRYONICS WITHOUT FREEZERS
Cryonicists say that if you freeze your brain, you may experience "waking up" a few centuries later when someone uses the brain to create a perfect copy of you.
But whether or not you freeze your brain, a Big World is creating perfect copies of you all the time. The consequentialist view of identity says that your causal connection with these copies is unnecessary for them to be you. So why should a copy of you created by a far-future cryonicist with access to your brain be better able to "resurrect" you than a copy of you that comes to exist for some other reason?
For example, suppose I choose not to sign up for cryonics, have a sudden heart attack, and die in my sleep. Somewhere in a Big World, there is someone exactly like me except that they didn't have the heart attack and they wake up healthy the next morning.
The cryonicists believe that having a healthy copy of you come into existence after you die is sufficient for you to "wake up" as that copy. So why wouldn't I "wake up" as the healthy, heart-attack-free version of me in the universe next door?
Or: suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum die to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible configurations of the grid (presumably a lot), and in one of those branches, the AI's experiment creates a perfect copy of me at the moment of my death, except healthy. If creating a perfect copy of me causes my "resurrection", then that AI has just resurrected me as surely as cryonics would have.
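For a sense of how many branches "presumably a lot" is (my own illustrative arithmetic, not from the post): a human-sized volume contains on the order of $10^{27}$ atoms, so a grid with $N \sim 10^{27}$ cells and $k$ possible contents per cell yields

$$k^{N} \ge 2^{10^{27}}$$

branches even in the minimal case $k = 2$ (atom present or absent at each cell) - vastly more branches than there are distinguishable human-sized arrangements of matter.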
The only downside I can see here is that I have less measure (meaning I exist in a lower proportion of worlds) than if I had signed up for cryonics directly. This might be a problem if I think that my existence benefits others - but I don't think I should be concerned for my own sake. Right now I don't go to bed at night weeping that my father only met my mother through a series of unlikely events and so most universes probably don't contain me; I'm not sure why I should do so after having been resurrected in the far future.
RESURRECTION AS SOMEONE ELSE
What if the speculative theories involved in Big Worlds all turn out to be false? Even then, not all hope is lost.
Above I wrote:
An exact clone is more me than a clone with different ice cream preferences, who is more me than a clone who is a Hindu fundamentalist, who is more me than LeBron James is.
I used LeBron James because from what I know about him, he's quite different from me. But what if I had used someone else? One thing I learned upon discovering Less Wrong is that I had previously underestimated just how many people out there are *really similar to me*, even down to weird interests, personality quirks, and sense of humor. So let's take the person living in 2050 who is most similar to me now. I can think of several people on this site alone who would make a pretty impressive lower bound on how similar the most similar person to me would have to be.
In what sense is this person's waking up on the morning of January 1, 2050 equivalent to my being sort-of resurrected? What if this person is more similar to Yvain(2012) than Yvain(1995) is? What if I signed up for cryonics, died tomorrow, and was resurrected in 2050 by a process about as lossy as the difference between me and this person?
SUMMARY
Personal identity remains confusing. But some of the assumptions cryonicists make are, in certain situations, sufficient to guarantee personal survival after death without cryonics.
If all copies count as you, then that includes Boltzmann brains who die in the vacuum a second after their formation and copies of you who awaken inside a personally dedicated hell. And this is supposed to provide hope?
There is clearly a sense in which you do not experience what your copies experience. The instance of you who dies in a car crash on the way to your wedding never experiences the wedding itself; that is experienced by the second instance, created from a backup a few weeks later.
Any extension of identity beyond the "current instance" level is therefore an act of imagination or chosen affiliation. Identifying with your copies and almost-copies scattered throughout the multiverse, identifying with your descendants, and identifying with all beings who ever live, all have this in common - "you", defined in the broad sense supplied by your expansive concept of identity, will experience things that "you", defined in the narrow but practical sense of your local instance, will never experience.
Since it is a contradiction to say that you will experience things that you will never experience, it is desirable to perceive very clearly that these expansive identifications are being made by one local instance that is choosing to regard a multiplicity of other distinct beings as parts of its extended self. Of course, once you perceive this distinction between local self and global self, and especially once you notice that the same local self can have arbitrarily expansive or delimited beliefs about who gets to be part of its global self... you might begin to doubt the meaningfulness of any notion of global self other than "the whole of reality", or indeed the meaningfulness of any notion of "global self" at all. Perhaps in reality you are just your local self and that's it; all other identifications are fantasy.
When that attitude is taken to its limit, it usually leads to disconnection between one moment and the next. Each moment's experience is only had by that momentary self. You could make a slogan out of it: "instances are instantaneous", meaning that if you apply this philosophy consistently, you have to deny that the "local self" extends in time.
But this part I don't believe, because I do believe that experiences are extended in time. There is such a thing as change, the flow of one moment into the next, and not just a static difference between static moments each containing its own encapsulated illusion of flow-connectedness to other moments. The reduction of time to simply another spatial coordinate, and the consequent relegation of the experience of time passing to the category of illusion, results from the cultural hypertrophy (that's the opposite of atrophy) of "logical perception" in scientific culture, at the expense of more "phenomenological" capacities, like a sensitivity to the actual form of consciousness. If people took appearances more seriously, their response to the difficulties of reconciling them with scientific theory would be to look for a better theory, not to call the appearances illusory or nonexistent. It's not at all easy even to get the ontology of appearance right, let alone to conceptually reconstruct the ontology by means of which we understand our mathematical theories of nature, so as to include the ontology implied by appearance.
In fact, an extra layer of difficulty has been added by the attempted reduction of epistemology to computation - it means that the epistemological claims of phenomenology, e.g. that we can know that time really passes or that change is real, struggle to get a hearing. Computational epistemology in its existing forms presupposes an inadequate ontology, and therefore offers a new methodological barrier to any truth from outside that ontology. One needs to remember that computation is about state transitions in state machines, and says nothing about the "intrinsic nature" of those states or how that intrinsic nature may be related to the causality of the state transitions. So any ontology featuring causal interactions between things with states contains computation, in the same way that any ontology containing multiple things contains arithmetic; but you can't bootstrap your way from computation to ontology, just as you can't bootstrap your way from arithmetic to physics.
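A toy sketch of that state-machine point (my own illustration, not anything from the comment): the same abstract computation can be realized over concrete states whose "intrinsic nature" differs completely, and nothing in the computation itself distinguishes the realizations.

```python
# One abstract state machine, two concrete "realizations". The computation
# specifies only the transition structure (which abstract state follows
# which); it is silent about what the states intrinsically are.

# Abstract machine: a two-state toggle.
TRANSITIONS = {"A": "B", "B": "A"}

def run(start, steps, encode, decode):
    """Run the abstract machine on some concrete substrate.

    encode: abstract state -> concrete state
    decode: concrete state -> abstract state
    """
    concrete = encode(start)
    for _ in range(steps):
        concrete = encode(TRANSITIONS[decode(concrete)])
    return decode(concrete)

# Realization 1: the states are integers.
as_int = {"A": 0, "B": 1}
from_int = {0: "A", 1: "B"}

# Realization 2: the states are something intrinsically quite different.
as_flavor = {"A": "chocolate", "B": "vanilla"}
from_flavor = {"chocolate": "A", "vanilla": "B"}

# Both substrates implement the very same computation:
assert run("A", 5, as_int.__getitem__, from_int.__getitem__) == \
       run("A", 5, as_flavor.__getitem__, from_flavor.__getitem__) == "B"
```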
In my polemic I have strayed far from the original topic of Yvain's post, but any discussion of the ontology of persons eventually has to tackle these "hard problems".
It doesn't seem too much more distressing to believe that there are copies of me being tortured right now than to believe that there are people in North Korea being tortured right now, or other similarly unpleasant facts everyone agrees to be true.
There's a distinction between intuitive identity - my ability to get really upset about the idea that me-ten-minutes-from-now will be tortured - and philosophical identity - an ability to worry slightly about the idea that a copy of me in another universe is getting tortured. This difference isn't just...