The only downside I can see here is that I have less measure (meaning I exist in a lower proportion of worlds) than if I had signed up for cryonics directly. This might be a problem if I think that my existence benefits others - but I don't think I should be concerned for my own sake.
Yvain is quantum suicidal? I didn't expect that! People whose preferences 'add up to normal' do (implicitly) care about measure.
Quantum suicide seems like a good idea to me if we know that the assumptions behind it (both quantum and identity-related) are true, if we're purely selfish (eg don't care about the bereaved left behind), and if we don't assume our actions are sufficiently correlated with those of others to make everyone try quantum suicide and end up all alone in our own personal Everett branch.
However, I might have the same "It's a good idea, but I am going to refuse to do this for reasons of personal sanity" reaction as I have with Pascal's Mugging.
Quantum suicide seems like a good idea to me if we know that the assumptions behind it (both quantum and identity-related) are true, if we're purely selfish (eg don't care about the bereaved left behind), and if we don't assume our actions are sufficiently correlated with those of others to make everyone try quantum suicide and end up all alone in our own personal Everett branch.
Fortunately, if you combine the second and third potential problems you end up with a solution that eliminates both of them. Then you just have the engineering problem involved in building a bigger death box.
However, I might have the same "It's a good idea, but I am going to refuse to do this for reasons of personal sanity" reaction as I have with Pascal's Mugging.
I hope so. Your position is entirely consistent - I cannot fault it on objective grounds and what you say in your post does directly imply what you confirm in your comment. That said, the preferences you declare here are vastly different to those that I consider 'normal' and so there remains the sneaking suspicion that you are wrong about what you want. That is, that you incorrectly extrapolate your volition.
On the other hand the ...
(Not sure where to put this:) Yvain's position doesn't seem sane to me, and not just for reasons of preference; attempting to commit suicide will just push most of your experienced moments backwards to regions where you've never heard of quantum suicide or where for whatever reason you thought it was a stupid idea. Anticipating ending up in a world with basically no measure just doesn't make sense: you're literally making yourself counterfactual. If you decided to carve up experience space into bigger chunks of continuity then this problem goes away, but most people agree that (as Katja put it) "anthropics makes sense with shorter people". Suicide only makes sense if you want to shift your experience backwards in time or into other branches, not in order to have extremely improbable experiences. I mean, that's why those branches are extremely improbable: there's no way you can experience them, quantum suicide or no.
Isn't dissolving the concept of personal identity relatively straightforward?
We know that we've evolved to protect ourselves and further our own interests, because organisms who didn't have that as a goal didn't fare very well. So in this case at least, personal identity is merely a desire to make sure that "this" organism survives.
Naturally, the problem is defining "this organism". One says, "this" organism is something defined by physical continuity. Another says, "this" organism is something defined by the degree of similarity to some prototype of this organism.
One says, sound is acoustic vibrations. Another says, sound is the sensation of hearing...
There's no "real" answer to the question "what is personal identity", any more than there is a "real" answer to the question "what is sound". You may pick any definition you prefer. Of course, truly dissolving "personal identity" isn't as easy as dissolving "sound", because we are essentially hard-wired to anticipate that there is such a thing as personal identity, and to have urges for protecting it. We may realize on an intellectu...
So if I want to improve the world, it makes sense for me to care about "my own" ... well-being - even though future instances of "me" are actually distinct systems ... because A) I care about the well-being of minds in general, and B) they share at least part of my goals, and are thus more likely to carry them out.
I think it's clear that there is also terminal value in caring about the well-being of "me". As with most other human psychological drives, it acts as a sloppily optimized algorithm of some instrumental value; but while its purpose could be achieved more efficiently by other means, the particular way it happens to be implemented contributes an aspect of human values that is important in itself, in a way that's unrelated to the evolutionary purpose that gave rise to the psychological drive, or to the instrumental value of its present implementation.
(Relevant posts: Evolutionary Psychology, Thou Art Godshatter, In Praise of Boredom.)
I don't personally endorse it as a terminal value, but it's everyone's own decision whether to endorse it or not.
I don't believe it is; at the very least, it's relatively easy to decide incorrectly, so the fact of having (provisionally) decided doesn't answer the question of what the correct decision is. "It's everyone's own decision" or "everyone is entitled to their own beliefs" sounds like very bad epistemology.
I cited what seems to me like a strong theoretical argument for antipredicting terminal indifference to personal well-being. Your current conclusion being contrary to what this argument endorses doesn't seem to address the argument itself.
This, along with the simulation argument, is why I'm not too emotionally stressed out with feelings of impending doom that seem to afflict some people familiar with SIAI's ideas. My subjective anticipation is mollified by the thought that I'll probably either never experience dying or wake up to find that I've been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste to work in peace. :)
Also, we can extend the argument a bit for those worried about "measure". First, an FAI might be able to recreate people from historical clues (writings, recordings, others' memories of them, etc.). But suppose that's not possible. An FAI could still create a very large number of historically plausible people, and assuming FAIs in other Everett branches do the same, the fact that I probably won't be recreated in this branch will be compensated for by the fact that I'll be recreated in other branches where I currently don't exist, thus preserving or even increasing my overall measure.
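To make the measure-preservation claim concrete, here is a toy expectation check. The numbers and the uniform-sampling model are my own illustrative assumptions, not anything from the comment: suppose there are M historically plausible people, each branch's FAI recreates K of them at random, and there are N branches.

```python
import random

# Toy check: if each of N branches recreates K people uniformly at random
# from a pool of M historically plausible people, then any particular
# person's expected number of copies across all branches is N * K / M,
# whether or not their own branch happens to pick them.
M, K, N = 10_000, 100, 5_000   # assumed round numbers for illustration
me = 0                         # index of "me" among the M plausible people

copies = sum(me in random.sample(range(M), K) for _ in range(N))
print(copies, "copies across branches; expected about", N * K / M)   # roughly 50
```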
My subjective anticipation is mollified by the thought that I’ll probably either never experience dying or wake up to find that I’ve been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste to work in peace. :)
Update: Recent events have made me think that the fraction of advanced civilizations in the multiverse that are sane may be quite low. (It looks like our civilization will probably build a superintelligence while suffering from serious epistemic pathologies, and this may be typical for civilizations throughout the multiverse.) So now I'm pretty worried about "waking up" in some kind of dystopia (powered or controlled by a superintelligence with twisted beliefs or values), either in my own future lightcone or in another universe.
Actually, I probably shouldn't have been so optimistic even before the recent events...
I agree recent events don't justify a huge update by themselves if one started with a reasonable prior. It's more that I somehow failed to consider the possibility of that scenario, the recent events made me consider it, and that's why it triggered a big update for me.
Now I’m curious. Does studying history make you update in a similar way?
History is not one of my main interests, but I would guess yes, which is why I said "Actually, I probably shouldn’t have been so optimistic even before the recent events..."
I feel that these times are not especially insane compared to the rest of history, though the scale of the problems might be bigger.
Agreed. I think I was under the impression that western civilization managed to fix a lot of the especially bad epistemic pathologies in a somewhat stable way, and was unpleasantly surprised when that turned out not to be the case.
prevent all the empty galaxies from going to waste
(Off-topic: Is this a decision theoretic thing or an epistemic thing? That is, do you really think the stars are actually out there to pluck in a substantial fraction of possible worlds, or are you just focusing on the worlds where they are there to pluck because it seems like we can't do nearly as much if the stars aren't real? Because I think I've come up with some good arguments against the latter and was planning on writing a post about it; but if you think the former is the case then I'd like to know what your arguments are, because I haven't seen any really convincing ones. (Katja suggested that the opposite hypothesis—that superintelligences have already eaten the stars and are just misleading us, or we are in a simulation where the stars aren't real—isn't a "simple" hypothesis, but I don't quite see why that would be.) What's nice about postulating that the stars are just an illusion is that it means there probably isn't actually a great filter, and we aren't left with huge anthropic confusions about why we're apparently so special.)
do you really think the stars are actually out there to pluck in a substantial fraction of possible worlds
Assuming most worlds start out lifeless like ours, they must have lots of resources for "plucking" until somebody actually plucks them... I guess I'm not sure what you're asking, or what is motivating the question. Maybe if you explain your own ideas a bit more? It sounds like you're saying that we may not want to try to pluck the stars that are apparently out there. If so, what should we be trying to do instead?
I guess I didn't clearly state the relevant hypothesis. The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the great filter paradox (Fermi's paradox) and would imply that if we build an AI then that doesn't necessarily mean it'll get to eat all the stars. If the stars are out there, we should pluck them—but are they out there? They're like a stack of twenties on the ground, and it seems plausible they've already been plucked without our knowing. Maybe my previous comment will make more sense now. I'm wondering whether your reason for focusing on eating all the galaxies is that you think the galaxies actually haven't already been eaten, or whether it's that even if it's probable that they've actually already been eaten and our images of them are an illusion, most of the utility we can get is still concentrated in worlds where the galaxies haven't already been eaten, so we should focus on those worlds. (This is sort of orthogonal to the simulation argument because it doesn't necessitate that our metaphysical ideas about how simulations work make sense; the mechanism for the illusion works by purely physical means.)
The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on.
If that's the case, then I'd like to break out by building our own superintelligence to find and exploit whatever weaknesses might exist in the SIs that are boxing us in, or failing that, negotiate with them for a share of the universe. (Presumably they want something from us, or else why are they doing this?) Does that answer your question?
BTW, I'm interested in the "good arguments" that you mentioned earlier. Can you give a preview of them here?
If anthropics makes any sense, then cryonics in a Big World is still controlling what you should mostly expect to experience. The anthropic escape valves for a version of me who experiences death and remembers not being signed up for cryonics range from Boltzmann brains to lunatics to completely random ancestor simulations, and I think I value experiencing the mainline expected cryonics outcome more highly than I do experiencing these.
And even if anthropics doesn't make sense, then as a matter of decision theory I value actually being around in a substantial fraction of the futures of the current world-state.
If all copies count as you, then that includes Boltzmann brains who die in the vacuum a second after their formation and copies of you who awaken inside a personally dedicated hell. And this is supposed to provide hope?
There is clearly a sense in which you do not experience what your copies experience. The instance of you who dies in a car crash on the way to your wedding never experiences the wedding itself; that is experienced by the second instance, created from a backup a few weeks later.
Any extension of identity beyond the "current instance" level is therefore an act of imagination or chosen affiliation. Identifying with your copies and almost-copies scattered throughout the multiverse, identifying with your descendants, and identifying with all beings who ever live, all have this in common - "you", defined in the broad sense supplied by your expansive concept of identity, will experience things that "you", defined in the narrow but practical sense of your local instance, will never experience.
Since it is a contradiction to say that you will experience things that you will never experience, it is desirable to perceive very clearly that these e...
It doesn't seem too much more distressing to believe that there are copies of me being tortured right now than to believe that there are people in North Korea being tortured right now, or other similarly unpleasant facts everyone agrees to be true.
There's a distinction between intuitive identity - my ability to get really upset about the idea that me-ten-minutes-from-now will be tortured - and philosophical identity - an ability to worry slightly about the idea that a copy of me in another universe is getting tortured. This difference isn't just instrumentally based on the fact that it's easier for me to save me-ten-minutes-from-now than me-in-another-universe; even if I were offered some opportunity to help me-in-another-universe, I would feel obligated to do so only on grounds of charity, not on grounds of selfishness. I'd ground that in some mental program that intuitively makes me care about me-ten-minutes-from-now, which is much stronger than whatever rational kinship I can muster with me-in-another-universe. This mental program seems pretty good at dealing with minor breaks in continuity like sleep or coma.
The problem is, once death comes into the picture, the menta...
I like this post! Two comments barely related to the post:
I would be interested in those calculations about how big the universe would have to be to have repeating Earths if anyone recalls where they saw them.
A meta-LessWrongian comment: A great part of the value I get out of LessWrong is that there's always someone out there writing a post or comment tying together various thoughts and musings I've had into coherent essays in a way I don't have the mental discipline to do. So this is a big thanks to all of you writing interesting stuff!
And a comment more directly related to the post:
I can only assume that some other type of intelligence would be bemused by our confusion surrounding these issues in much the same way I'm often bemused by people making hopelessly confused arguments about religion or evolution or whatever.
Other Intelligence: "Of course continuity isn't important! There's no difference between waking up after a coma, being unfrozen, or your clone in a Big World! Sheesh, you humans are messed up in the head!"
(Or maybe OI would argue that there is a difference.)
I would be interested in those calculations about how big the universe would have to be to have repeating Earths if anyone recalls where they saw them.
http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/
If the universe is spatially infinite, then, on average, we should expect that no more than 10^10^29 meters away is an exact duplicate of you. If you're looking for an exact duplicate of a Hubble volume - an object the size of our observable universe - then you should still on average only need to look 10^10^115 lightyears. (These are numbers based on a highly conservative counting of "physically possible" states, e.g. packing the whole Hubble volume with potential protons at maximum density given by the Pauli Exclusion principle, and then allowing each proton to be present or absent.)
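For anyone who wants to sanity-check those figures, here is a minimal sketch of the pigeonhole-style arithmetic behind them. The proton-slot counts are assumptions lifted from the quoted estimate (roughly 10^29 slots in a human-sized volume, roughly 10^115 in a Hubble volume), not independent derivations; the point is only that the answer is a double exponential, so factors of three and the choice of meters versus light years vanish into rounding.

```python
import math

# Each patch is modeled as N "proton slots", each occupied or empty, giving
# 2**N possible patch states.  By pigeonhole, a cube containing more than
# 2**N patches must repeat one, so the distance to a duplicate is very
# roughly (2**N)**(1/3) patch-lengths.
def log10_distance_to_duplicate(log10_slots: float) -> float:
    """log10 of the distance, in patch-lengths, to an expected duplicate patch."""
    log10_states = (10 ** log10_slots) * math.log10(2)   # log10(2**N)
    return log10_states / 3                              # cube root

print(log10_distance_to_duplicate(29))    # ~1e28  -> distance ~ 10^(10^28..29)
print(log10_distance_to_duplicate(115))   # ~1e114 -> distance ~ 10^(10^114..115)
```

Converting patch-lengths into meters or light years only adds a few dozen to log10 of the distance, which is invisible next to an exponent of order 10^114 - which is why the quoted figures can mix units so casually.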
Google search query: "duplicate earth lightyears away site:lesswrong.com". Estimated time to search and go through first page: 30 seconds.
If you don't care about measure then why try to solve friendly AI? In a VERY BIG world some AIs will turn out to be friendly.
It should be mentioned that when considering things like Cryonics in the Big World, you can't just treat all the other "you" instances as making independent decisions; they'll be thinking similarly enough to you that whatever conclusion you reach, this is what most "you" instances will end up doing (unless you randomize, and assuming 'most' even means anything).
Seriously, I'd expect people to at least mention the superrational view when dealing with clones of themselves in decide-or-die coordination games.
I have a feeling that I'm missing something and that this is going to be downvoted, but I still have to ask. In the event that a big universe exists, there are numerous people almost exactly like me going about their business. My problem is that what I would call my consciousness doesn't seem to experience their actions. This would seem to me like there is some factor in my existence that is not present in theirs. If a perfect clone was sitting next to me, I wouldn't be able to see my computer through his eyes. I would continue to see it through mine. This chain of experience is the thing I care most to preserve. I have interest in the continued existence of people like me, but for separate reasons.
I know the idea of an "inner listener" is false, but the sensation of such a thing and a continuous stream of experience do exist. I am emotionally tied to those perceptions. I don't know how enthusiastically I can look forward to the future if I won't be able to experience it any more than I can the nearest parallel universe.
This chain of experience is the thing I care most to preserve.
Okay, think of it this way.
You go to sleep tonight, your "chain of experience" is briefly broken. You wake up tomorrow morning, chain of experience is back, you're happy.
But what makes you say "chain of experience is back"? Only that a human being wakes up, notices it has the memories of being pleeppleep, and says "Hey, my chain of experience is back! Good!"
Suppose Omega killed you in your sleep, then created a perfect clone of you. The perfect clone would wake up, notice it has the memories of being pleeppleep, and say "Hey, my chain of experience is back! Good!" Then it would continue living your life.
Right now you have zero evidence that Omega hasn't actually done this to you every single night of your life. So the idea of a "chain of experience", except as another word for your memories, is pretty tenuous.
And if I told you today that Omega had really been doing this to you your whole life, then you would be really scared before going to sleep tonight, but eventually you'd have to do it. And then the next day, your clone would still be pretty scared before going to...
First, I wish people stopped using this untestable Big World nonsense in their arguments.
For things to "add up to normality", your decisions should not be affected by a particular interpretation of QM, and so you should arrive to the same ones by sticking to the orthodox interpretation (or any other). If your argument fails without invoking a version of the MWI, it is not a sound argument, period. Similarly, your belief that in a galaxy far far away you are a three-eyed Pope named LeBron should not affect your decision of whether to sign-up for cryonics here and now.
Your other point is eminently worthwhile: since the cryonic resurrection is manifestly not exact, how much deviation from the original are you prepared to allow while still considering the resurrected object to be you for practical purposes? The answer is not objective in any way. For some, losing a single memory is enough to say No; for others, retaining even a small fraction of memories is enough for a Yes. The acceptable range of emotions, volitions and physical makeup can also vary widely.
Something oddly relevant: quantum insomnia:
http://quantuminsomnia.blogspot.com/
Of course, that guy being neither me nor you, it is obvious that he is not a quantum insomniac, but there's a rather creepy thought: today may be the day that you woke up for the last time in your life, and you'll remain awake forever, just because there will always be a branch where you are awake, and the you after a night's sleep is more different from the you falling asleep than the you one second later, still awake, is. Good night. On second thought, I probably shouldn't joke like this here but fo...
How many bytes in human memory? is a very brief article providing estimates of just that. Evidence from human learning experiments suggests that, after using a very good data compression algorithm, human long term declarative memory holds only a few hundred megabytes.
How much of that information is common knowledge, such as knowledge of the English language, memories from media such as books or television, or knowledge of local buildings and streets, is unclear.
Additional information specific to an individual could be gained from email, internet posts, and...
The simplest is the theory that the universe (or multiverse) is Very Very Big.
Do you mean this in an Occamian way? I suspect not, but I think you should make it clearer.
Anyway, this is a subject I've actually thought about a lot.
A lot of people (including Derek Parfit himself) think continuity is absolutely essential for personal identity / selfhood, but I don't. I've had such terribly marked psychological changes over the years that I cannot even conceive of the answer to the question, "Am I the same person as Grognor from 2009?" being yes in...
You know, Stross tacitly considered an interesting form of resurrection in Accelerando--a hypothetical post-singularity (non-Friendly) AI computes a minimum message length version of You based off any surviving records of what you've done or said (plus the baseline prior for how humans work) and instantiates the result.
I'm having real trouble proving that's not more-or-less me, and what's more, that such a resurrection would feel any different from the inside looking back over its memories of my life.
I still have yet to see anyone adequately address how I am supposed to relate in any way to the magical copy of me a universe away.
If they have a shitty day, I feel nothing. If they have a good day, I feel nothing. If they die, big whoop. When I die, I will not be magically waking up in their body and vice versa. I will be dead.
Is the number of bits necessary to discriminate one functional human brain among all permutations of matter of the same volume greater or smaller than the number of bits necessary to discriminate a version of yourself among all permutations of functional human brains? My intuition is that once you've defined the former, there isn't much left needed, comparatively, to define the latter.
Corollary: cryonics doesn't need to preserve a lot of information, if any; you can patch it up with, among other things, info about what a generic human brain is, or better wh...
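For what it's worth, here is a toy version of that comparison with loudly made-up round numbers (the brain-volume site count and the states-per-site figure are pure assumptions; the few-hundred-megabyte figure is the one quoted a couple of comments up):

```python
import math

# Bits to pick out one particular arrangement of matter in a brain-sized
# volume (a loose stand-in for "defining a functional brain at all"),
# pretending ~1e26 atom sites with ~100 distinguishable states each.
bits_any_arrangement = 1e26 * math.log2(100)

# Bits to pick out one particular person among functional human brains,
# using the "few hundred megabytes of declarative memory" estimate.
bits_one_person = 300e6 * 8

print(bits_any_arrangement / bits_one_person)   # ~3e17: the first term dominates
```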
suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum dice to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible permutations of the grid (presumably a lot)
How is that a Friendly AI?
Once again, Nick Bostrom's Quantity of experience: brain-duplication and degrees of consciousness comes to the rescue. Cryonics greatly expands the proportion of "you-like" algorithms as opposed to "!you-like" algorithms, for the same reason that quantum russian roulette greatly shrinks that proportion.
Pfsch, silly. I couldn't wake up as someone from another part of the universe - he's already busy waking up as himself :P
Several of your examples are also equivalent to quantum suicide situations. Similar comments about measure apply, except in this case we have a process (cryonics) that actually can restore measure (therefore, we can ignore all differences from our intuitive idea of resurrection).
See Tegmark's "Multiverse Hierarchy": http://arxiv.org/pdf/0905.1283.pdf
Also, there has been good work showing that a spatial Big Universe is a realization of the quantum multiverse in 3 spatial dimensions: http://blogs.discovermagazine.com/cosmicvariance/2011/05/26/are-many-worlds-and-the-multiverse-the-same-idea/
Pretty ok article.
It has been years since I last thought about personal identity. The last time, it seemed a pretty reasonable and obvious conclusion to value the "me" stored in my body slightly less and to value humans who were similar to me a bit more.
There seemed to be little point in (ceteris paribus) me being willing to spend more to save "my" life compared to saving the life of a random average human A and not also expending at least something extra on person B who is, say, halfway in "meness" between said random ...
I'm signed up for cryonics, and I believe you're leaving out two crucial motivations for cryonics. These are not particularly smart reasons. You don't logic your way into them. But they are emotionally strong reasons, and emotions are the primary motivators of humans.
The first is that it reduces my fear of death significantly. Perhaps unreasonably so? But the fear of death had been a problem for me for a long time, and now not as much.
The second is that I want OTHERS to be frozen for MY sake. I want to see my parents and brothers and friends again in this ...
As I've said elsewhere, I mostly think that any notion of preserving personal identity (from moment to moment, let alone after my heart stops beating) depends on a willingness to acknowledge some threshold of similarity and/or continuity such that I'm willing to consider anything above that threshold to share that identity.
Defining that threshold such that entities in other universes share my identity means there's lots of me out there, sure. And defining that threshold such that entities elsewhere on Earth right now share my identity means there's lots of me right here.
It's not clear to me why any of that matters.
This is an interesting argument that I have been giving a lot of thought to lately. Are all randomly occurring me-like entities out there in the big universe just as much me as the ones I anticipate in the future?
Well, there's one difference: causality. I'm not so sure a me with no causal relationship to my current self is something I can justifiably consider a future me. When I step in a teleporter that vaporizes my molecules and reconstructs me, I still have a very well-defined causal relationship with that future self -- despite the unorthodox path by wh...
Whenever I think about anthropics I'm always worried that I'm only experiencing thinking about anthropics because I'm thinking about anthropics, as if anthropics itself is the domain of some unfathomably powerful god who can change relative measures at whim and who thinks it's really fun to fuck with philosophers.
There may be another alternative to cryonics that doesn't require a Big World - indirect mind uploading (scroll down to "Using ‘indirect mind uploading’ to avoid staying dead"). The idea is that if you record every second of your life (e.g. on video), a future AI might be able to use this information to converge on a specific brain configuration that is close to your original brain. Since only certain brains would say, write, do or think (although we currently can't record thoughts) the things you recorded yourself saying, writing and doing, depen...
Right now I don't go to bed at night weeping that my father only met my mother through a series of unlikely events and so most universes probably don't contain me; I'm not sure why I should do so after having been resurrected in the far future.
Because you understand that you can't change it:
For nothing is more certain, than that despair has almost the same effect upon us with enjoyment, and that we are no sooner acquainted with the impossibility of satisfying any desire, than the desire itself vanishes.
I don't think this claim is true in all cases, ...
Parallel copies of me are not me. Dying in X% of Everett branches has X% of the disutility of dying.
Gradual change feels OK. Less gradual change feels less OK (unless I perceive it as an improvement). Going to sleep at night and knowing that in the morning I will feel a bit differently already makes me nervous. But it's preferable to dying. (But if I could avoid it without bad consequences, I would.) Small changes are good, because more of the future selves will be more similar to my current self.
How exactly does one measure the similarity or the change? Some ...
Well, those copies are very, very far away, and thus are more different from you than something nearby; it feels like you are using intuitions meant for things nearby, whose relative position is only a minuscule fraction of their total information content, on things whose information content is entirely their position information. In principle we can recreate this conversation by iterating over every value of every letter on a very powerful computer, but unless there's a process for selecting this conversation out of the sea of nonsense, that won't constitute a backup.
Yes, to the degree that you accept the existence of a Big World, together with the usual assumptions about personal identity, you should expect never to die.
Even if there is no Big World, however, no one will ever experience dying anyway. Your total lifespan will be limited, but you will never notice it come to an end. So you might as well think of that limited span as a projection of an infinite lifespan onto an open finite interval. So again, one way or another you should expect never to die.
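To make the "projection" picture concrete, here is a toy sketch; the specific map and the lifespan figure are arbitrary choices of mine, not anything implied by the comment. The idea is just a strictly increasing map that squeezes an unbounded subjective-time axis into an open finite interval [0, T), so that every finite subjective moment lands before T and the endpoint itself is never reached.

```python
import math

T = 80.0   # assumed total lifespan in years, purely for illustration

def clock_time(subjective_time: float) -> float:
    """Map unbounded subjective time s >= 0 into the open interval [0, T).

    Strictly increasing; approaches T but never reaches it."""
    return T * (1.0 - math.exp(-subjective_time))

for s in (0.0, 1.0, 10.0, 30.0):
    print(s, clock_time(s))   # 0.0, ~50.6, ~79.996, ~79.99999999999
```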
I'm curious how we distinguish copies of ourselves and near-copies of ourselves from ourselves. I mean, this intuition runs strongly through personal identity discussions: in this post you are identifying possible candidates for "clone of me", not "me". If it was the latter, we'd just go looking for ourselves in the Big World, which is a much simpler problem: "I'm me, and there's 863 of me here and there, some brighter than others." No concerns for them needing to be closer to some canonical source.
But we keep dragging in this...
Why do you care if you continue to exist?
Also, if you don't care about your measure, then why is the Big Universe even necessary? You already know you have a measure of at least how long you've been alive. You can't cease to have ever existed.
This is a restatement of quantum immortality, right?
But this is a slippery slope. If my recreation is exactly like me except for one neuron, is he the same person? Signs point to yes. What about five neurons? Five million? Or on a functional level, what if he blinked at exactly one point where I would not have done so? What if he prefers a different flavor of ice cream? What if he has exactly the same memories as I do, except for the outcome of one first-grade spelling bee I haven't thought about in years anyway? What if he is a Hindu fundamentalist?
These questions apply equally to the person who wakes u...
Since I inherently desire to struggle for life regardless of whether or not my efforts will have any effect, this argument does not alter my motivations or the decisions I will make. I'm perfectly fine with struggling for a lost cause if the process of struggling is either valuable or inevitable.
In a Big World, the process of struggling is all that we have, and success doesn't matter so much.
-- Omar Khayyam, Rubaiyat
A CONSEQUENTIALIST VIEW OF IDENTITY
The typical argument for cryonics says that if we can preserve brain data, one day we may be able to recreate a functioning brain and bring the dead back to life.
The typical argument against cryonics says that even if we could do that, the recreation wouldn't be "you". It would be someone who thinks and acts exactly like you.
The typical response to the typical argument against cryonics says that identity isn't in specific atoms, so it's probably in algorithms, and the recreation would have the same mental algorithms as you and so be you. The gap in consciousness of however many centuries is no more significant than the gap in consciousness between going to bed at night and waking up in the morning, or the gap between going into a coma and coming out of one.
We can call this a "consequentialist" view of identity, because it's a lot like the consequentialist views of morality. Whether a person is "me" isn't a function of how we got to that person, but only of where that person is right now: that is, how similar that person's thoughts and actions are to my own. It doesn't matter if we got to him by having me go to sleep and wake up as him, or got to him by having aliens disassemble my brain and then simulate it on a cellular automaton. If he thinks like me, he's me.
A corollary of the consequentialist view of identity says that if someone wants to create fifty perfect copies of me, all fifty will "be me" in whatever sense that means something.
GRADATIONS OF IDENTITY
An argument against cryonics I have never heard, but which must exist somewhere, says that even the best human technology is imperfect, and likely a few atoms here and there - or even a few entire neurons - will end up out of place. Therefore, the recreation will not be you, but someone very very similar to you.
And the response to this argument is "Who cares?" If by "me" you mean Yvain as of 10:20 PM 4th April 2012, then even Yvain as of 10:30 is going to have some serious differences at the atomic scale. Since I don't consider myself a different person every ten minutes, I shouldn't consider myself a different person if the resurrection-machine misplaces a few cells here or there.
But this is a slippery slope. If my recreation is exactly like me except for one neuron, is he the same person? Signs point to yes. What about five neurons? Five million? Or on a functional level, what if he blinked at exactly one point where I would not have done so? What if he prefers a different flavor of ice cream? What if he has exactly the same memories as I do, except for the outcome of one first-grade spelling bee I haven't thought about in years anyway? What if he is a Hindu fundamentalist?
If we're going to take a consequentialist view of identity, then my continued ability to identify with myself even if I naturally switch ice cream preferences suggests I should identify with a botched resurrection who also switches ice cream preferences. The only solution here that really makes sense is to view identity in shades of gray instead of black-and-white. An exact clone is more me than a clone with different ice cream preferences, who is more me than a clone who is a Hindu fundamentalist, who is more me than LeBron James is.
BIG WORLDS
There are various theories lumped together under the title "big world".
The simplest is the theory that the universe (or multiverse) is Very Very Big. Although the universe is only around 14 billion years old, which would naively make the visible universe only a few tens of billions of light years across, inflation allows the entire universe to get around the speed of light restriction; it could be very large or possibly infinite. I don't have the numbers available, but I remember a back of the envelope calculation being posted on Less Wrong once about exactly how big the universe would have to be to contain repeating patches about the size of the Earth. That is, just as the first ten digits of pi, 3141592653, must repeat somewhere else in pi because pi is infinite and patternless, and just as I would believe this with high probability even if pi were not infinite but just very very large, so the arrangement of atoms that makes up Earth would recur in an infinite or very very large universe. This arrangement would obviously include you, exactly as you are now. A much larger class of Earth-sized patches would include slightly different versions of you, like the one with different ice cream preferences. This would also work, as Omar Khayyam mentioned in the quote at the top, if the universe were to last forever or a very very long time.
The second type of "big world" is the one posited by the Many Worlds theory of quantum mechanics, in which each quantum event causes the Universe to split into several branches. Because quantum events determine larger-level events, and because each branch continues branching, some these branches could be similar to our universe but with observable macro-scale differences. For example, there might be a branch in which you are the President of the United States, or the Pope, or died as an infant. Although this sounds like a silly popular science version of the principle, I don't think it's unfair or incorrect.
The third type of "big world" is modal realism: the belief that all possible worlds exist, maybe in proportion to their simplicity (whatever that means). We notice the existence of our own world only for indexical reasons: that is, just as there are many countries, but when I look around me I only see my own; so there are many possibilities, but when I look around me I only see my own. If this is true, it is not only possible but certain that there is a world where I am Pope and so on.
There are other types of "big worlds" that I won't get into here, but if any type at all is correct, then there should be very many copies of me or people very much like me running around.
CRYONICS WITHOUT FREEZERS
Cryonicists say that if you freeze your brain, you may experience "waking up" a few centuries later when someone uses the brain to create a perfect copy of you.
But whether or not you freeze your brain, a Big World is creating perfect copies of you all the time. The consequentialist view of identity says that your causal connection with these copies is unnecessary for them to be you. So why should a copy of you created by a far-future cryonicist with access to your brain be better able to "resurrect" you than a copy of you that comes to exist for some other reason?
For example, suppose I choose not to sign up for cryonics, have a sudden heart attack, and die in my sleep. Somewhere in a Big World, there is someone exactly like me except that they didn't have the heart attack and they wake up healthy the next morning.
The cryonicists believe that having a healthy copy of you come into existence after you die is sufficient for you to "wake up" as that copy. So why wouldn't I "wake up" as the healthy, heart-attack-free version of me in the universe next door?
Or: suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum dice to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible permutations of the grid (presumably a lot) and in one of those branches, the AI's experiment creates a perfect copy of me at the moment of my death, except healthy. If creating a perfect copy of me causes my "resurrection", then that AI has just resurrected me as surely as cryonics would have.
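To put a rough number on "presumably a lot" (with assumed round figures of my own rather than anything from the post): a human body contains on the order of 10^27 to 10^28 atoms, so even a modest menu of options per grid cell gives a branch count whose exponent is itself astronomically large.

```python
import math

grid_cells = 7e27        # assumed: very roughly the number of atoms in a human body
options_per_cell = 100   # assumed: menu of elements/vacancy per "pixel"

# The number of permutations is options_per_cell ** grid_cells; work in log10.
log10_branches = grid_cells * math.log10(options_per_cell)
print(f"about 10^({log10_branches:.1e}) branches")   # about 10^(1.4e28)
```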
The only downside I can see here is that I have less measure (meaning I exist in a lower proportion of worlds) than if I had signed up for cryonics directly. This might be a problem if I think that my existence benefits others - but I don't think I should be concerned for my own sake. Right now I don't go to bed at night weeping that my father only met my mother through a series of unlikely events and so most universes probably don't contain me; I'm not sure why I should do so after having been resurrected in the far future.
RESURRECTION AS SOMEONE ELSE
What if the speculative theories involved in Big Worlds all turn out to be false? All hope is still not lost.
Above I wrote:
An exact clone is more me than a clone with different ice cream preferences, who is more me than a clone who is a Hindu fundamentalist, who is more me than LeBron James is.
I used LeBron James because from what I know about him, he's quite different from me. But what if I had used someone else? One thing I learned upon discovering Less Wrong is that I had previously underestimated just how many people out there are *really similar to me*, even down to weird interests, personality quirks, and sense of humor. So let's take the person living in 2050 who is most similar to me now. I can think of several people on this site alone who would make a pretty impressive lower bound on how similar the most similar person to me would have to be.
In what way is this person waking up on the morning of January 1 2050 equivalent to me being sort of resurrected? What if this person is more similar to Yvain(2012) than Yvain(1995) is? What if I signed up for cryonics, died tomorrow, and was resurrected in 2050 by a process about as lossy as the difference between me and this person?
SUMMARY
Personal identity remains confusing. But some of the assumptions cryonicists make are, in certain situations, sufficient to guarantee personal survival after death without cryonics.