Abnormal Cryonics
Written with much help from others, in response to various themes here and throughout Less Wrong; but a casual mention here1 inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)
It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position2 to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)
Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):
- Structural uncertainty and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.3 This does not make cryonics a bad idea — it may be the correct decision under uncertainty — but it should lessen anyone's confidence that the balance of reasons ultimately weighs overwhelmingly in favor of cryonics.
- If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying: either everyone (including cryonauts) dies anyway when an unFriendly artificial intelligence goes FOOM, or a Friendly artificial intelligence is created and death is solved (or reflectively embraced as good, or some other unexpected outcome). This is more salient when considering the likelihood of large advances in biomedical and life extension technologies in the near future.
- A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI4 than by spending that money on pursuing a small chance of eternal life. Cryonics working is pretty dependent on e.g. an unFriendly artificial intelligence not going FOOM, or molecular nanotechnology not killing everyone. Many people may believe that a slightly higher chance of a positive singularity is more important than a significantly higher chance of personal immortality. Likewise, having their friends and family not be killed by an existential disaster such as rogue MNT, bioweaponry, et cetera, could very well be more important to them than a chance at eternal life. Acknowledging these varied preferences, and varied beliefs about whether cryonics can be funded out of luxury spending alone, leads to equally varied subjectively rational courses of action for a person to take.
- Some people may have loose boundaries around what they consider personal identity, or expect personal identity to be less important in the future. Such a person might not place very high value on ensuring that they, in a strong sense, exist in the far future, if they expect that people sufficiently like them to satisfy their relevant values will exist in any case. (Kaj Sotala reports being indifferent to cryonics due to personal identity considerations.) Furthermore, there exist people who have preferences against (or no preferences either for or against) living extremely far into the future for reasons other than considerations about personal identity. Such cases are rare, but I suspect less rare among the Less Wrong population than most, and their existence should be recognized. (Maybe people who think they don't care are usually wrong, and, if so, irrational in some deeper sense, but not in the sense of simple epistemic or instrumental-given-fixed-values rationality that discussions of cryonics usually center on.)
- That said, the reverse is true: not getting signed up for cryonics is also not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty.
Calling non-cryonauts irrational is not productive nor conducive to fostering a good epistemic atmosphere:
- Whether it's correct or not, it seems unreasonable to claim that the decision to forgo cryonics in favor of donating (a greater expected amount) to organizations like those mentioned above represents as obvious an error as, for instance, religion. The possibility of a third option here shouldn't be ignored.
- People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious (as opposed to belief in anthropogenic global warming, where a sheer bandwagon effect is enough of a memetic pull). Being forced on the defensive makes one less likely to recognize their own irrationalities, if irrationalities they are. (See also: A Suite of Pragmatic Considerations in Favor of Niceness)
- As mentioned in bullet four above, some people really wouldn't care if they died, even if it turned out MWI, spatially infinite universes, et cetera were wrong hypotheses and that they only had this one shot at existence. It's not helping things to call them irrational when they may already have low self-esteem and problems with being accepted among those who have very different values pertaining to the importance of continued subjective experience. Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone.
Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.
One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and to admit that the others may have a non-insane reason for their disagreement.
1 I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.
2 To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? or, Do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, the other is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.
3 By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.
4 Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.
Comments (365)
As yet another media reference: I just rewatched the Star Trek TNG episode 'The Neutral Zone', which deals with the recovery of three frozen humans from our time. It was really surprising to me how much disregard for human life is shown in this episode: "Why did you recover them? They were already dead." "Oh bugger, now that you revived/healed them we have to treat them as humans." Also surprising is how much insensitivity in dealing with them is shown. When you awaken someone from an earlier time, you might send the aliens and the robots out of the room.
I am kind of disturbed by the idea of cryonics. Wouldn't it be theoretically possible to prove it doesn't work, assuming that it really doesn't? If the connections between neurons are lost in the process, then you have died.
Why?
If it cannot work, then we would expect to find evidence that it cannot work, yes. But it sounds like you're starting from a specific conclusion and working backwards. Why do you want to "prove [it doesn't] work"?
Alcor's FAQ has some information on the evidence indicating that cryonics preserves the relevant information. That depends on the preservation process starting quickly enough, though.
Because if it doesn't, it's a waste of time.
Is it so irrational to not fear death?
Surely you aren't implying that a desire to prolong one's lifespan can only be motivated by fear.
Re: "Is it so irrational to not fear death?"
Fear of death should be "manageable":
http://en.wikipedia.org/wiki/Terror_management_theory#Criticism
No, that could be perfectly rational, but many who claim not to fear death tend to look before crossing the road, take medicine when sick and so on.
It is rational for a being-who-has-no-preference-for-survival, but it's not obvious that any human however unusual or deformed can actually have this sort of preference.
Lots of people demonstrate a revealed preference for non-survival by committing suicide and a variety of other self-destructive acts; others willingly choose non-survival as the means towards an altruistic (or some other sort of) goal. Or do you mean that it is not obvious that humans could lack the preference for survival even under the most favorable state of affairs?
Revealed preference as opposed to actual preference, what they would prefer if they were much smarter, knew much more, had unlimited time to think about it. We typically don't know our actual preference, and don't act on it.
If the actual preference is neither acted upon, nor believed in, how is it a preference?
It is something you won't regret giving as a goal to an obsessive world-rewriting robot that takes whatever you say its goals are very seriously and very literally, without any way for you to make corrections later. Most revealed preferences you will regret, exactly for the reasons they differ from the actual preferences: on reflection, you'll find that you'd rather go with something different.
See also this thread.
That definition may be problematic in respect to life-and-death decisions such as cryonics: Once I am dead, I am not around to regret any decision. So any choice that leads to my death could not be considered bad.
For instance, I will never regret not having signed up for cryonics. I may however regret doing it if I get awakened in the future and my quality of life is too low. On the other hand, I am thinking about it out of sheer curiosity for the future. Thus, signing up would simply help me increase my current utility by giving me hope of more future utility. I just noticed that this makes the decision accessible to your definition of preference again, by posing the question to myself: "If I signed up for cryonics today, would I regret the [cost of the] decision tomorrow?"
This, however, is not the usual meaning of the term "preference." In the standard usage, this word refers to one's favored option in a given set of available alternatives, not to the hypothetical most favorable physically possible state of the world (which, as you correctly note, is unlikely to be readily imaginable). If you insist on using the term with this meaning, fair enough; it's just that your claims sound confusing when you don't include an explanation about your non-standard usage.
That said, one problem I see with your concept of preference is that, presumably, the actions of the "obsessive world-rewriting robot" are supposed to modify the world around you to make it consistent with your preferences, not to modify your mind to make your preferences consistent with the world. However, it is not at all clear to me whether a meaningful boundary between these two sorts of actions can be drawn.
Preference in this sense is a rigid designator, defined over the world but not determined by anything in the world, so modifying my mind couldn't make my preference consistent with the world; a robot implementing my preference would have to understand this.
As with most (all?) questions of whether an emotion is rational, it depends on what you value and what situation you're facing. If you can save a hundred lives by risking yours, and there's no less risky way nor (hypothetically) any way for you to save more people by other means while continuing to live, and you want to save lives, and if fear of death would stop you from going through with it, then it's irrational to fear death in that case. But in general, when you're not in a situation like that, you should feel as strongly as necessary whatever emotion best motivates you to keep living and avoid things that would stop you from living (assuming you like living). Whether that's fear of death or love of life or whatever else, feel it.
If you're talking about "fear of death" as in constant paranoia over things that might kill you, then that's probably irrational for most people's purposes. Or if you're not too attached to being alive, then it's not too irrational to not fear death, though that's an unfortunate state of affairs. But for most people, generally speaking, I don't see anything irrational about normal levels of fear of death.
(Keeping in mind the distinction between believing that you are not too attached to being alive and actually not having a strong preference for being alive, and the possibility of the belief being incorrect.)
Yes, it seems to be irrational, even if you talk about fear in particular and not preferring-to-avoid in general. (See also: Emotion, Reversal test.)
Since I can see literally nothing to fear in death - in nonexistence itself - I don't really understand why cryonics is seen by so many here as such an essentially "rational" choice. Isn't a calm acceptance of death's inevitability preferable to grasping at a probably empty hope of renewed life simply to mollify one's instinct for survival? I live and value my life, but since post-death I won't be around to feel one way or another about it, I really don't see why I should not seek to accept death rather than counter it. In its promise of "eternal" life, cryonics has the whiff of religion to me.
It's certainly best to accept that death is inevitable if you know for a fact that death is inevitable. Which emotion should accompany that acceptance (calm, depression, etc.) depends on particular facts about death - and perhaps some subjective evaluation.
However, the premise seems very much open to question. Death is not "inevitable", it strikes me as something very much evitable, that is, which "can be avoided". People used to die when their teeth went bad: dental care has provided ways to avoid that kind of death. People used to die when they suffered an infarction, the consequences of which were by and large unavoidable. Defibrillators are a way to avoid that. And so on.
Historically, every person who ever lived has died before reaching two hundred years of age; but that provides no rational grounds for assuming a zero probability that a person can enjoy a lifespan vastly exceeding that number.
Is it "inevitable" that my life shall be confined to a historical lifespan? Not (by definition) if there is a way to avoid it. Is there a way to avoid it? Given certain reasonable assumptions as to what consciousness and personal identity consist of, there could well be. I am not primarily the cells in my body, I am still me if these cells die and get replaced by functional equivalents. I suspect that I am not even primarily my brain, i.e. that I would still be me if the abstract computation that my brain implements were reproduced on some other substrate.
This insight - "I am a substrate independent computation" - builds on relatively recent scientific discoveries, so it's not surprising it is at odds with historical culture. But it certainly seems to undermine the old saw "death comes to all".
Is it rational to feel hopeful once one has assigned substantial probability to this insight being correct? Yes.
The corollary of this insight is that death, by which I mean information-theoretic death (which historically has always followed physical death), holds no particular horrors. It is nothing more and nothing less than the termination of the abstract computation I identify with "being me". I am much more afraid of pain than I am of death, and I view my own death now with something approaching equanimity.
So it seems to me that you're setting up a false opposition here. One can live in calm acceptance of what death entails yet fervently (and rationally) hope for much longer and better life.
Good arguments, and I largely agree. However, postponable does not equal evitable. At some point any clear-minded self (regardless of the substratum) is probably going to have to accept that it is either going to end or be transformed to the point where the definition of the word "self" gets pretty moot. I guess my point remains that post-death nonexistence contains absolutely zero horrors in any case. In a weirdly aesthetic sense, the only possible perfect state is non-existence. To paraphrase Sophocles, perhaps the best thing is never to have been born at all. Now given a healthy love of life and a bit of optimism it feels best to soldier on, but to hope really to defeat death is a delusional escape from the mature acceptance of death. None of those people who now survive their bad teeth or infarctions have had their lives "saved" (an idiotic metaphor) - merely prolonged. Now if that's what you want, fine - but it strikes me as irrational as a way to deal with death itself.
Let's rephrase this with the troublesome terms unpacked as per the points you "largely agree" with: "to hope for a life measured in millennia is a delusional escape from the mature acceptance of a hundred-year lifespan".
In a nutshell: no! Hoping to see a hundred was not, in retrospect, a delusional escape from the mature acceptance of dying at forty-something, which was the lot of prehistoric humans. We don't know yet what changes in technology are going to make for the next "normal" lifespan, but we know more about it than our ancestors did.
I can believe that it strikes you as weird, and I understand why it could be so. A claim that some argument is irrational is a stronger and less subjective claim. You need to substantiate it.
Your newly introduced arguments are: a) if you don't die you will be transformed beyond any current sense of identity, and b) "the only possible perfect state is non-existence". The latter I won't even claim to understand - given that you choose to continue this discussion rather than go jump off a tall building I can only assume your life isn't a quest for a "perfect state" in that sense.
As to the former, I don't really believe it. I'm reasonably certain I could live for millennia and still choose, for reasons that belong only to me, to hold on to some memories from (say) the year 2000 or so. Those memories are mine, no one else on this planet has them, and I have no reason to suppose that someone else would choose to falsely believe the memories are theirs.
I view identity as being, to a rough approximation, memories and plans. Someone who has (some of) my memories and shares (some of) my current plans, including plans for a long and fun-filled life, is someone I'd identify as "me" in a straightforward sense, roughly the same sense that I expect I'll be the same person in a year's time, or the same sense that makes it reasonable for me to consider plans for my retirement.
I'm not so sure: if it's possible to choose to keep specific memories, then it seems unlikely to be impossible to record and replay memories from one person to another. It might be a challenge to do so from one organic brain to another, but it seems unlikely to be problematic between uploads of different people, unless you get Robin Hanson's uneditable spaghetti-code uploads.
There still might be some difference in experiencing the memory because different people would notice different things in it.
Perhaps "replay" memories has the wrong connotations - the image it evokes for me is that of a partly transparent overlay over my own memories, like a movie overlaid on top of another. That is too exact.
What I mean by keeping such memories is more like being able, if people ask me to tell them stories about what it was like back in 2010, to answer somewhat the same as I would now - updating to conform to the times and the audience.
This is an active process, not a passive one. Next year I'll say things like "last year when we were discussing memory on LW". In ten years I might say "back in 2010 there was this site called LessWrong, and I remember arguing this and that way about memory, but of course I've learned a few things since so I'd now say this other". In a thousand years perhaps I'd say "back in those times our conversations took place in plain text over Web browsers, and as we only approximately understood the mind, I had these strange ideas about 'memory' - to use a then-current word".
Keeping a memory is a lot like passing on a story you like. It changes in the retelling, though it remains recognizable.
Perhaps my discomfort with all this is in cryonics' seeming affinity with the sort of fear-mongering about death that's been the bread and butter of religion for millennia. It just takes it as a fundamental law of the universe that life is better than non-life - not just in practice, not just in terms of our very real, human, animal desire to survive (which I share) - but in some sort of essential, objective, rational, blindingly obvious way. A way that smacks of dogma to my ears.
If you really want to live for millennia, go ahead. Who knows I might decide to join you. But in practice I think cryonics for many people is more a matter of escaping death, of putting our terrified, self-centered, hubristic fear of mortality at the disposal of another dubious enterprise.
As for my own view of "identity": I see it as a kind of metapattern, a largely fictional story we tell ourselves about the patterns of our experience as actors, minds and bodies. I can't quite bring myself to take it so seriously that I'm willing to invest in all kinds of extraordinary measures aimed at its survival. If I found myself desperately wanting to live for millennia, I'd probably just think "for chrissakes get over yourself".
Please, please, please don't let the distaste of a certain epistemic disposition interfere with a decision that has a very clear potential for vast sums of positive or negative utility. Argument should screen off that kind of perceived signaling. Maybe it's true that there is a legion of evil Randian cryonauts that only care about sucking every last bit out of their mortal lives because the Christian background they've almost but not quite forgotten raised them with an almost pitiable but mostly contemptible fear of death. Folks like you are much more enlightened and have read up on your Hofstadter and Buddhism and Epicureanism; you're offended that these death-fearing creatures that are so like you didn't put in the extra effort to go farther along the path of becoming wiser. But that shouldn't matter: if you kinda sorta like living (even if death would be okay too), and you can see how cryonics isn't magical and that it has at least a small chance of letting you live for a long time (long enough to decide if you want to keep living, at least!), then you don't have to refrain from duly considering those facts out of a desire to signal distaste for the seemingly bad epistemic or moral status of those who are also interested in cryonics and the way their preachings sound like the dogma of a forgotten faith. Not when your life probabilistically hangs in the balance.
(By the way, I'm not a cryonaut and don't intend to become one; I think there are strong arguments against cryonics, but I think the ones you've given are not good.)
Apply this argument to drug addiction: "I value not being an addict, but since post-addiction I will want to continue experiencing drugs, and I-who-doesn't-want-to-be-an-addict won't be around, I really don't see why I should stay away from becoming an addict". See the problem? Your preferences are about the whole world, with all of its past, present and future, including the time when you are dead. These preferences determine your current decisions; the preferences of future-you or of someone else are not what makes you make decisions at present.
I suppose I'd see your point if I believed that drug addiction was inevitable and knew that everyone in the history of everything had eventually become a drug addict. In short, I'm not sure the analogy is valid. Death is a special case, especially since "the time when you are dead" is from one's point of view not a "time" at all. It's something of an oxymoron. After death there IS no time - past present or future.
Whether something is inevitable is not an argument about its moral value. Have you read the reversal test reference?
Please believe in physics.
1) Who said anything about morality? I'm asking for a defence of the essential rationality of cryonics. 2) Please read the whole paragraph and try to understand the subjective point of view - or lack thereof post-death. (Which strikes me as the essential point of reference when talking about fear of death.)
See What Do We Mean By "Rationality"?. When you ask about a decision, its rationality is defined by how well it allows you to achieve your goals, and "moral value" refers to the way your goals evaluate specific options, with the options of higher "moral value" being the same as options preferred according to your goals.
Consider the subjective point of view of yourself-now, on the situation of yourself dying, or someone else dying for that matter, not the point of view of yourself-in-the-future or subjective point of view of someone-else. It's you-now that needs to make the decision, and rationality of whose decisions we discuss.
Clearly, I'm going to need to level up about this. I really would like to understand it in a satisfactory way, not just play a rhetorical game. That said, the phrase "the situation of yourself dying" strikes me as an emotional ploy. The relevant (non)"situation" is complete subjective and objective non-existence, post death. The difficulty and pain etc. of "dying" are not at issue here. I will read your suggestions and see if I can reconcile all this. Thanks.
This wasn't my intention. You can substitute that phrase with, say, "Consider the subjective point of view of yourself-now, on the situation of yourself being dead for a long time, or someone else being dead for a long time for that matter." The salient part was supposed to be the point of view, not what you look at from it.
I don't understand the big deal with this. Is it just selfishness? You don't care how good the world will be, unless you're there to enjoy it?
This post, like many others around this theme, revolves around the rationality of cryonics from the subjective standpoint of a potential cryopatient, and it seems to assume a certain set of circumstances for that patient: relatively young, healthy, functional in society.
I've been wondering for a while about the rationality of cryonics from a societal standpoint, as applied to potential cryopatients in significantly different circumstances; two categories specifically stand out, death row inmates and terminal patients.
This article cites the cost of a death row inmate (over serving a life sentence) at $90K. This is a case where we already allow that society may drastically curtail an individual's right to control their own destiny. It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.
As for terminal patients, this article says:
These costs are comparable to those charged for cryopreservation. It seems to me that it would be rational (as a cost-reduction measure) to offer patients diagnosed with a likely terminal illness the voluntary option of being cryopreserved. At worst, if cryonics doesn't work, this amounts to an "assisted suicide", something that many progressive groups are already lobbying for.
Hm, I don't think that works -- the extra cost is from the stronger degree of evidence and exhaustive appeals process required before the inmate is killed, right? If you want to suspend the inmate before those appeals then you've curtailed their right to put together a strong defence against being killed, and if you want to suspend the inmate after those appeals then you haven't actually saved any of that money.
.. or did I miss something?
Some of it is from more expensive incarceration, but you're right. This has one detailed breakdown:
However, we're assuming that with cryonics as an option the entire process would stay the same. That needn't be the case.
Also, depending upon advances in psychology, there could be the opportunity for real rehabilitation in the future. A remorseful criminal afraid they cannot change may prefer cryopreservation.
There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there. That being said, it's rather clear from prior discussion that most people in this forum believe that it will work. I find it slightly absurd, to be honest. You can talk a lot about uncertainties and supporting evidence and burden of proof and so on, but the simple fact remains the same. There is no proof cryonics will work, either right now, 20, or 50 years in the future. I hate to sound so cynical, I don't mean to rain on anyone's parade, but I'm just stating the facts.
Bear in mind they don't just have to prove it will work. They also need to show you can be uploaded, reverse-aged, or whatever else comes next. (Now awaiting hordes of flabbergasted replies and accusations.)
This is a very bad argument. First, all claims are probabilistic, so it isn't even clear what you mean by proof. Second, by the exact same logic one could say that we shouldn't try anything involving technology that doesn't exist yet, because we don't know whether it will actually work. So the argument has to fail.
That's a widely acknowledged fact. And, if you make that your actual reason for rejecting cryonics, there are some implications that follow from that: for instance, that we should be investing massively more in research aiming to provide proof than we currently are.
The arguments we tend to hear are more along the lines of "it's not proven, it's an expensive eccentricity, it's morally wrong, and besides even if it were proved to work I don't believe I'd wake up as me so I wouldn't want it".
You're entitled to arguments, but not that particular proof.
I have no idea whether it will work, but right now, the only alternative is death. I actually think it's unlikely that people preserved now will ever be revived, more for social and economic reasons than technical ones.
How much do you believe it would cost?
Inasmuch as I'm for cryopreservation (though I'm having some trouble finding a way to do it in Norway -- well, I'll figure something out), I've also decided to be the kind of person who would, if still alive once reviving them becomes technically possible, pay for reviving as many as I can afford.
I tend to assume that other cryopreservationists think the same way. This means the chance of being revived, assuming nobody else wants to pay for it (including a possible FAI), scales with the proportion of cryopreservationists who are still alive, divided by the cost of reviving someone expressed as a fraction of their average income at the time.
Thus, I wonder - how costly will it be?
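The back-of-envelope heuristic above can be sketched in a few lines. All numbers here are invented for illustration, and the function name is mine, not anything from the thread:

```python
# A sketch of the back-of-envelope estimate above. All numbers are
# invented for illustration; nothing here is real data.
def revival_odds(frac_alive, revival_cost, avg_income):
    """Rough chance of revival if each surviving cryopreservationist
    funds revivals out of income: proportional to the fraction still
    alive, divided by the cost of one revival expressed as a multiple
    of average income at the time."""
    return min(1.0, frac_alive * (avg_income / revival_cost))

# E.g. 20% of signers still alive, and a revival costing 5x a year's income:
print(revival_odds(0.20, 250_000, 50_000))  # roughly 0.04
```

Of course, the interesting unknowns are exactly the inputs: the surviving fraction and, above all, the cost of a revival.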
Once the infrastructure and technology for revival is established, it won't be very costly. The economic problem is getting that infrastructure and technology established in the first place.
I would guess you're far more altruistic than most people. Really, as many as you can afford?
It's not altruism, it's selfishness.
I'm precommitting myself to reviving others, if I have the opportunity; on the assumption that others do the same, this means the marginal benefit to me from signing up for cryopreservation goes up.
And, admittedly, I expect to have a considerable amount of disposable income. "As many as I can afford" means "While maintaining a reasonable standard of living", but "reasonable" is relative; by deliberately not increasing it too much from what I'm used to as a student, I can get more slack without really losing utilons.
It helps that my hobbies are, by and large, very cheap. Hiking and such. ;)
Anyone else here more interested in cloning than cryonics?
Seems 100x more feasible.
Re: "Anyone else here more interested in cloning than cryonics?"
Sure. Sexual reproduction is good too.
Interested in what way? Do you see it as a plausible substitute good from the perspective of your values?
Yes. If cloning were an option today, and I were forced to choose cloning vs. cryonics, I would choose the former.
What benefit do you see in having a clone of you?
I think by raising my own clone, I could produce a "more perfect" version of myself. He would have the same values, but an improved skill set and better life experiences.
Do you have any convincing reasons to believe that? How do you account for environmental differences?
You know what, I am quite content with a 50% faithful clone of myself. It is even possible that there is some useful stuff in that other 50%. </cheap shot>
What exactly would "choosing cloning" consist of?
More feasible, yes, but not nearly as interesting a technology. What will cloning do? If we clone to make new organs, then it is a helpful medical technique, one among many. If we are talking about reproductive cloning, then that individual has no closer identity to me than an identical twin (indeed a bit less, since the clone won't share the same environment growing up). The other major advantage of cloning is that we could potentially use it to deliberately clone copies of smart people. But that's a pretty minor use, and fraught with its own ethical problems. And it would still take a long time to be useful. Let's say we get practical cloning tomorrow. Even if some smart person agreed to be cloned, we'd still need to wait around 12 years at the very minimum before the clone could be that useful.
Cryonics is a much larger game changer than cloning.
Interesting post, but perhaps too much is being compressed into a single expression.
The niceness and weirdness factors of thinking about cryonics do not actually affect the correctness of cryonics itself. The correctness factor depends only on one's values and the weight of probability.
Not thinking one's own values through sufficiently to make an accurate evaluation is both irrational and a common failure mode. Miscalculating the probabilities is also a mistake, though perhaps more a mathematical error than a rationality error.
When these are the reasons for rejecting cryonics, then that rejection is obviously incorrect.
That said, you are quite correct to point out that differing values are not automatically a rationality failure, and it is definitely good to consider the image problem associated with the niceness issues.
Perhaps the niceness and weirdness ought not to be jumbled together with the correctness evaluation.
On niceness, good point. On weirdness, I'm not sure what you mean; if you mean "weird stuff and ontological confusion", that is uncertainty about one's values and truths.
This comment is a more fleshed-out response to VladimirM’s comment.
Whether cryonics is the right choice depends on your values. There are suggestions that people who don't think they value revival in the distant future are misled about their real values. I think it might be the complete opposite: advocacy of cryonics may completely miss what it is that people value about their lives.
The reason for this mistake could be that cryonics is such a new idea that we are culturally a step or two behind in identifying what it is that we value about existence. So people think about cryonics a while and just conclude they don’t want to do it. (For example, the stories herein.) Why? We call this a ‘weirdness’ or ‘creep’ factor, but we haven’t identified the reason.
When someone values their life, what is it that they value? When we worry about dying, we worry about a variety of obligations unmet (values not optimized), and people we love abandoned. It seems to me that people are attached to a network of interactions (and value-responsibilities) in the immediate present. There is also an element of wanting more experience and more pleasure, and this may be what cryonics advocates are over-emphasizing. But after some reflection, how do you think most people would answer this question: when it comes to experiencing 5 minutes of pleasure, does it matter if it is you or someone else if neither of you remember it?
A lot of the desperation we feel when faced with death is probably a sense of responsibility for our immediate values. We are a bundle of volition that is directed towards shaping an immediate network of experience. I don't really care about anything 200 years from now, and enjoy the lack of responsibility I feel for the concerns I would have if I were revived then. As soon as I was revived, however, I know I would become a bundle of volition directed towards shaping that immediate network of experience.
Considering what we do value about life -- immediate connections, attachments, and interactions -- it makes much more sense to invest in figuring out technology to increase lifespan and prevent accidental death. Once the technology of cryonics is established, I think that there could be a healthy market for people undergoing cryonics in groups. (Not just signing up in groups, but choosing to be vitrified simultaneously in order to preserve a network of special importance to them.)
The 'we' population I was referring to was deliberately vague. I don't know how many people have values as described, or what fraction of people who have thought about cryonics and don't choose cryonics this would account for. My main point, all along, is that whether cryonics is the "correct" choice depends on your values.
Anti-cryonics "values" can sometimes be easily criticized as rationalizations or baseless religious objections. ('Death is natural', for example.) However, this doesn't mean that a person couldn't have true anti-cryonics values (even very similar-sounding ones).
Value-wise, I don't even know whether cryonics is the correct choice for much more than half or much less than half of all persons, but given all the variation in people, I'm pretty sure it's going to be the right choice for at least a handful and the wrong choice for at least a handful.
Roko:
This is another issue where, in my view, pro-cryonics people often make unwarranted assumptions. They imagine a future with a level of technology sufficient to revive frozen people, and assume that this will probably mean a great increase in per-capita wealth and comfort, like today's developed world compared to primitive societies, only even more splendid. Yet I see no grounds at all for such a conclusion.
What I find much more plausible are the Malthusian scenarios of the sort predicted by Robin Hanson. If technology becomes advanced enough to revive frozen brains in some way, it probably means that it will be also advanced enough to create and copy artificial intelligent minds and dexterous robots for a very cheap price. [Edit to avoid misunderstanding: the remainder of the comment is inspired by Hanson's vision, but based on my speculation, not a reflection of his views.]
This seems to imply a Malthusian world where selling labor commands only the most meager subsistence necessary to keep the cheapest artificial mind running, and biological humans are out-competed out of existence altogether. I'm not at all sure I'd like to wake up in such a world, even if rich -- and I also see some highly questionable assumptions in the plans of people who expect that they can simply leave a posthumous investment, let the interest accumulate while they're frozen, and be revived rich. Even if your investments remain safe and grow at an immense rate, which is itself questionable, the price of lifestyle that would be considered tolerable by today's human standards may well grow even more rapidly as the Malthusian scenario unfolds.
That said, I am nowhere near certain that a bad future awaits us, nor that the above-mentioned Malthusian scenario is inevitable. However, it does seem to me the most plausible course of affairs given a cheap technology for making and copying minds, and it seems reasonable to expect that such technology would follow from more or less the same breakthroughs that would be necessary to revive people from cryonics.
That's a risk for regular death, too, albeit a very unlikely one. This possibility seems like Pascal's wager with a minus sign.
That is true -- my comment was worded badly and open to misreading on this point. What I meant is that I agree with Hanson that ems likely imply a Malthusian scenario, but I'm skeptical of the feasibility of the investment strategy, unless it involves ditching the biological body altogether and identifying yourself with a future em, in which case you (or "you"?) might feasibly end up as a wealthy em. (From Hanson's writing I've seen, it isn't clear to me if he automatically assumes the latter, or if he actually believes that biological survival might be an option for prudent investors.)
The reason is that in a Malthusian world of cheap AIs, it seems to me that the prices of resources necessary to keep biological humans alive would far outrun any returns on investments, no matter how extraordinary they might be. Moreover, I'm also skeptical if humans could realistically expect their property rights to be respected in a Malthusian world populated by countless numbers of far more intelligent entities.
Roko:
This is a fallacious step. The fact that risk-free return on investment over a certain period is X% above inflation does not mean that you can pick any arbitrary thing and expect that if you can afford a quantity Y of it today, you'll be able to afford (1+X/100)Y of it after that period. It merely means that if you're wealthy enough today to afford a particular well-defined basket of goods -- whose contents are selected by convention as a necessary part of defining inflation, and may correspond to your personal needs and wants completely, partly, or not at all -- then investing your present wealth will get you the power to purchase a similar basket (1+X/100) times larger after that period. [*] When it comes to any particular good, the ratio can be in any direction -- even assuming a perfect laissez-faire market, let alone all sorts of market-distorting things that may happen.
Therefore, if you have peculiar needs and wants that don't correspond very well to the standard basket used to define the price index, then the inflation and growth numbers calculated using this basket are meaningless for all your practical purposes. Trouble is, in an economy populated primarily by ems, biological humans will be such outliers. It's enough that one factor critical for human survival gets bid up exorbitantly and it's adios amigos. I can easily think of more than one candidate.
From the perspective of an em barely scraping a virtual or robotic existence, a surviving human wealthy enough to keep their biological body alive would seem as if, from our perspective, a whole rich continent's worth of land, capital, and resources was owned by a being whose mind is so limited and slow that it takes a year to do one second's worth of human thinking, while we toil 24/7, barely able to make ends meet. I don't know with how much confidence we should expect that property rights would be stable in such a situation.
[*] - To be precise, the contents of the basket will also change during that period if it's of any significant length. This however gets us into the nebulous realm of Fisher's chain indexes and similar numerological tricks on which the dubious edifice of macroeconomic statistics rests to a large degree.
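The relative-price point above can be made concrete with a toy calculation. Every figure here is invented purely for illustration; the claim is only about the structure of the argument, not about actual rates:

```python
# Toy illustration of the relative-price point above: a positive "real"
# return measured against a standard basket can coexist with declining
# purchasing power in one specific good. All figures are invented.
years = 100
real_return = 0.03       # annual growth of wealth relative to the price index
biology_premium = 0.05   # extra annual price growth of keeping a human alive

wealth = (1 + real_return) ** years                  # in basket units
life_support_price = (1 + biology_premium) ** years  # in the same units

# Richer in basket terms, yet poorer in terms of the one good that matters:
print(wealth)                       # roughly 19x richer vs. the basket
print(wealth / life_support_price)  # well below 1
```

The investor beats inflation by a factor of about nineteen and still cannot afford what they could at the start, because the index never promised anything about that particular good.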
I don't think it's a matter of whether you value your life but why. We don't value life unconditionally (say, just a metabolism, or just having consciousness -- both would be considered useless).
I wouldn't expect anyone to choose to die, no, but I would predict some people would be depressed if everyone they cared about died and would not be too concerned about whether they lived or not. [I'll add that the truth of this depends upon personality and generational age.]
Regarding the medieval peasant, I would expect her to accept the offer but I don't think she would be irrational for refusing. In fact, if she refused, I would just decide she was a very incurious person and she couldn't think of anything special to bring to the future (like her religion or a type of music she felt passionate about.) But I don't think lacking curiosity or any goals for the far impersonal future is having low self-esteem. [Later, I'm adding that if she decided not to take the offer, I would fear she was doing so due to a transient lack of goals. I would rather she had made her decision when all was well.]
(If it was free, I definitely would take the offer and feel like I had a great bargain. I wonder if I can estimate how much I would pay for a cryopreservation that was certain to work? I think $10 to $50 thousand, in the case of no one I knew coming with me, but it's difficult to estimate.)
I told Kenneth Storey, who studies various animals that can be frozen and thawed, about a new $60M government initiative (mentioned in Wired) to find ways of storing cells that don't destroy their RNA. He mentioned that he's now studying the Gray Mouse Lemur, which can go into a low-metabolism state at room temperature.
If the goal is to keep you alive for about 10 years while someone develops a cure for what you have, then this room-temperature low-metabolism hibernation may be easier than cryonics.
(Natural cryonics, BTW, is very different from liquid-nitrogen cryonics. There are animals that can be frozen and thawed; but most die if frozen to below -4C. IMHO natural cryonics will be much easier than liquid-nitrogen cryonics.)
You're trying to get to the truth of a different matter. You need to go one level meta. This post is arguing that either position is plausible. There's no need to refine the probabilities beyond saying something like "The expected reward/cost ratio of signing up for cryonics is somewhere between .1 and 10, including opportunity costs."
This is a valid point, but it is slightly OT to discuss a precise probability for cryonics. I think one reason people might not be trying to reach a consensus about the actual probability of success is that it may simply require so much background knowledge that one would need to be an expert to evaluate the subject reasonably. (Incidentally, I'm not aware of any sequence discussing what the proper thing to do is when one has to depend heavily on experts. We need more discussion of that.) The fact that there are genuine subject-matter experts like de Magalhaes who have thought about this issue a lot and concluded that it is extremely unlikely, while others who have thought about it consider it likely, makes it very hard to estimate. (Consider, for example, if someone asks me whether string theory is correct. The most I'm going to be able to do is shrug my shoulders. And I'm a mathematician. Some issues are just much too complicated for non-experts to work out a reliable likelihood estimate based on their own data.)
It might however be useful to start a subthread discussing pro and anti arguments. To keep the question narrow, I suggest that we simply focus on the technical feasibility question, not on the probability that a society would decide to revive people.
I'll start by listing a few:
For:
1) Non-brain animal organs have been successfully vitrified and revived. See e.g. here
2) Humans have been revived from low-oxygen, very cold circumstances with no apparent loss of memory. This has been duplicated in dogs and other small mammals in controlled conditions for upwards of two hours. (However, the temperatures involved are still above freezing.)
Against:
1) Vitrification denatures and damages proteins. This may permanently damage neurons in a way that makes their information content not recoverable. If glial cells have a non-trivial role in thought then this issue becomes even more severe. There's a fair bit of circumstantial evidence for glial cells having some role in cognition, including the fact that they often behave abnormally in severe mental illness. See for example this paper discussing glial cells and schizophrenia. We also know that in some limited circumstances glial cells can release neurotransmitters.
2) Even today's vitrification procedures do not necessarily penetrate every brain cell, so there may be severe ice-crystal formation in a lot of neurons.
3) Acoustic fracturing is still a major issue. Since acoustic fracturing occurs even when one is preserving just the head, there's likely severe macroscopic brain damage occurring. This also likely causes permanent, non-recoverable damage to memory and other basic functions. Moreover, acoustic fracturing is only the fracturing from cooling that is so bad that we can hear it; there's likely a lot of much smaller fracturing going on. (No one seems to have put a sensitive microphone right near a body or a neuro during cooling. The results could be disconcerting.)
Question for the advocates of cryonics: I have heard talk in the news and various places that organ donor organizations are considering giving priority to people who have signed up to donate their organs. That is to say, if you sign up to be an organ donor, you are more likely to receive a donated organ from someone else should you need one. There is some logic to that in the absence of a market in organs: free riders have their priority reduced.
I have no idea if such an idea is politically feasible (and, let me be clear, I don't advocate it); however, were it to become law in your country, would that tilt the cost-benefit analysis away from cryonics sufficiently that you would cancel your contract? (There is a new cost imposed by cryonics: namely, that the procedure prevents you from being an organ donor, and consequently reduces your chance of a life-saving organ transplant.)
In most cases, signing up for cryonics and signing up as an organ donor are not mutually exclusive. The manner of death most suited to organ donation (rapid brain death with (parts of) the body still in good condition, generally caused by head trauma) is not well suited to cryonic preservation. You'd probably need a directive in case the two do conflict, but such a conflict is unlikely.
Alternatively, neuropreservation can, at least in theory, occur alongside organ donation.
The 15 year gain may be enough to get you over the tipping point where medicine can cure all your ails, which is to say, 15 years might buy you 1000 years.
I think you are being pretty optimistic if you think the probability of success of cryonics is 10%. Obviously, no one has any data to go on here, so we can only guess. However, there are a lot of strikes against cryonics, especially if only your head gets frozen. In the future, will they be able to recreate a whole body from a head alone? Will your cryonics company still be in business? If it goes out of business, does your frozen head have any rights? If technology is developed that could restore you, will it be used? Will the government allow it to be used? Will you be one of the first guinea pigs to be tested, and one of the inevitable failures? Will anyone want an old fuddy-duddy from the far past to come back to life? In the interim, has there been an accident, war, or malicious action by eco-terrorists that unfroze your head? And so forth.
It seems to me that preserving actual life as long as possible is the best bet.
Thanks for this post. I tend to lurk, and I had some similar questions about the LW enthusiasm for cryo.
Here's something that puzzles me. Many people here, it seems to me, have the following preference order:
pay for my cryo > donation: x-risk reduction (through SIAI, FHI, or SENS) > paying for cryo for others
Of course, for the utilitarians among us, the question arises: why pay for my cryo over risk reduction? (If you just care about others way less than you care about yourself, fine.) Some answer by arguing that paying for your own cryo maximizes x-risk reduction better than the other alternatives because of its indirect effects. This reeks of wishful thinking and doesn't fit well with the preference order above. There are plenty of LWers, I assume, who haven't signed up for cryo, but would if someone else would pay the life insurance policy. If you really think that paying for your own cryo maximizes x-risk reduction, shouldn't you also think that getting others signed up for cryo does as well? (There are some differences, sure. Maybe the indirect effects aren't as substantial if others don't pay their own way in full. But I doubt this justifies the preference.) If so, it would seem that rather than funding x-risk reduction through donating to these organizations, you should fund the cryo preservation of LWers and other willing people.
So which is it utilitarians: you shouldn't pay for your own cryo or you should be working on paying for the cryo of others as well?
If you think paying for cryo is better, want to pay for mine first?
I care more about myself than about others. This is what would be expected from evolution and - frankly - I see no need to alter it. Well, I wouldn't.
I suspect that many people who claim they don't are mistaken, as the above preference ordering seems to illustrate. Maximize utility, yes; but utility is a subjective function, as my utility function makes explicit reference to myself.
I'm not sure if this is the right place to ask this, or even whether it is possible to procure the relevant data, but who is the highest-status person who has opted for cryonics? The wealthiest, or the most famous?
Having high status persons adopt cryonics can be a huge boost to the cause, right?
Uhhh... no. People developed the urban legend about Walt Disney for a reason. It's easy to take rich, creative, ingenious, successful people and portray them as eccentric, isolated, and out of touch.
Think about the dissonance between "How crazy those Scientologists are" and "How successful those celebrities are." We don't want to create a similar dissonance with cryonics.
It depends on the celebrity. Michael Jackson, not so helpful. But Oprah would be.
It certainly boosts publicity, but most of the people I know of who have signed up for cryonics are either various sorts of transhumanists or celebrities. The celebrities generally seem to do it for publicity or as a status symbol. From the reactions I've gotten telling people about cryonics, I feel it has been mostly a negative social impact. I say this not because people I meet are creeped out by cryonics, but because they specifically mention various celebrities. I think if more scientists or doctors (basically, experts) opted for cryonics it might add credibility. I can only assume that lack of customers for companies like Alcor decreases the chance of surviving cryonics.
I don't like long-term cryonics, for the following reasons:
1) If an unmodified Violet were revived, she would not be happy in the far future.
2) If a sufficiently modified Violet were revived, she would not be me.
3) I don't place a large value on there being a "Violet" in the far future.
4) There is a risk that my values and the values of whoever wakes Violet up would be incompatible, and avoiding possible "fixing" of my brain is a very high priority.
5) Thus I don't want to be revived by the far future, and death without cryonics seems a safe way to ensure that.
What makes you sure of this?
Here's another possible objection to cryonics:
If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.
"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:
Suppose the Singularity develops from an AI that was initially based on a human upload. When it becomes clear that there is a real possibility of uploading and gaining immortality in some sense, many people will compete for upload slots. The winners will likely be the rich and powerful. Billionaires tend not to be known for their public-spirited natures - in general, they lobby to reorder society for their benefit and to the detriment of the rest of us. So, the core of the AI is likely to be someone ruthless and maybe even frankly sociopathic.
Imagine being revived into a world controlled by a massively overclocked Dick Cheney or Vladimir Putin or Marquis De Sade. You might well envy the dead.
Unless you are certain that no Singularity will occur before cryonics patients can be revived, or that Friendly AI will be developed and enforced before the Singularity, cryonics might be a ticket to Hell.
What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?
"Where in this code do I need to put this "-ve" sign again?"
The two are approximately equal in difficulty, assuming equivalent flexibility in how "Evil" or "Friendly" it would have to be to qualify for the definition.
An unFriendly AI doesn't necessarily care about human values - but I can't see why, if it was based on human neural architecture, it might not exhibit good old-fashioned human values like empathy - or sadism.
I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI.
Why do you think that an evil AI would be harder to achieve than a Friendly one?
Agreed, AI based on a human upload gives no guarantee about its values... actually right now I have no idea about how Friendliness of such AI could be ensured.
Maybe not harder, but less probable - 'paperclipping' seems to be a more likely failure of friendliness than AI wanting to torture humans forever.
I have to admit I haven't thought much about this, though.
Paperclipping is a relatively simple failure. The difference between paperclipping and evil is mainly just that - a matter of complexity. Evil is complex, turning the universe into tuna is decidedly not.
On the scale of friendliness, I ironically see an "evil" failure (meaning, among other things, that we're still in some sense around to notice it being evil) becoming more likely as friendliness increases. As we try to implement our own values, failures become more complex, and less likely to be total - thus letting us stick around to see them.
I'm surprised that you didn't bring up what I find to be a fairly obvious problem with cryonics: what if nobody feels like thawing you out? Of course, not having followed this dialogue, I'm probably missing some equally obvious counter to this argument.
If I were defending cryonics, I would say that a small chance of immortality beats sure death hands-down.
It sounds like Pascal's Wager (small chance at success, potentially infinite payoff), but it doesn't fail for the same reasons Pascal's Wager does (Pascal's gambit for one religion would work just as well for any other one.) - discussed here a while back.
I have been heavily leaning towards the anti-cryonics stance at least for myself with the current state of information and technology. My reasons are mostly the following.
I can see it being very plausible that somewhere along the line I would be subject to immense suffering, such that death would have been a far better option, but that I would be either unable to take my own life due to physical constraints or would lack the courage to do so (it takes quite some courage and persistent suffering to be driven to suicide, IMO). I see this as analogous to a case where I am very near death and am faced with the following two options. (a) Have my life support system turned off and die peacefully.
(b) Keep the life support system going, but subsequently give up all autonomy over my life and body and place it entirely in the hands of others who are likely not even my immediate kin. I could be made to put up with immense suffering, either due to technical glitches, which are very likely since this is a very nascent area, or due to willful malevolence. In this case I would very likely choose (a).
Note that in addition to prolonged suffering where I am effectively incapable of pulling the plug on myself, there is also the chance that I would be an oddity as far as future generations are concerned. Perhaps I would be made a circus or museum exhibit to entertain that generation. Our race is highly speciesist and I would not trust the future generations with their bionic implants and so on to even necessarily consider me to be of the same species and offer me the same rights and moral consideration.
Last but not the least is a point I made as a comment in response to Robin Hanson's post. Robin Hanson expressed a preference for a world filled with more people with scarce per-capita resources compared to a world with fewer people with significantly better living conditions. His point was that this gives many people the opportunity to "be born" who would not have come into existence. And that this was for some reason a good thing.
I couldn't care less if I weren't born. As the saying goes, I have been dead/not existed for billions of years and haven't suffered the slightest inconvenience. I see cryonics and a successful recovery as no different from dying and being re-born. Thus I assign virtually zero positives to being re-born, while I assign huge negatives to the two risks above. This is probably related to the sense of identity mentioned in this post.
We are evolutionarily driven to dislike dying and to try to postpone it for as long as possible. However, I don't think we are particularly hardwired to prefer this form of weird cryonic rebirth over never waking up at all. Given that our general preference not to die has nothing fundamental about it, but is rather a case of us following our evolutionary leanings, what makes it so obvious that cryonic rebirth is a good thing? Some form of longevity research which extends our lives to, say, 200 years, without going the cryonic route with all the above risks (especially for the first few generations of cryonic guinea pigs), seems much harder to argue against.
Hi, I'm pretty new here too. I hope I'm not repeating an old argument, but suspect I am; feel free to answer with a pointer instead of a direct rebuttal.
I'm surprised that no-one's mentioned the cost of cryonics in relation to the reduction in net human suffering that could come from spending the money on poverty relief instead. For (say) USD $50k, I could save around 100 lives ($500/life is a current rough estimate at lifesaving aid for people in extreme poverty), or could dramatically increase the quality of life of 1000 people (for example, cataract operations to restore sight to a blind person are around $50).
How can we say it's moral to value such a long shot at elongating my own life as being worth more than 100-1000 lives of other humans who happened to do worse in the birth wealth lottery than I did?
One can expect to live a life at least 100-1000 times longer than those other poor people, or one with at least 100-1000 times as much positive utility; see also the points in the other comments.
Although this argument is a decent one for some people, it's much more often the product of motivated cognition than carefully looking at the issues, so I did not include it in the post.
Thanks for the reply.
When you say "can expect to", what do you mean? Do you mean "it is extremely likely that"? That's the problem. If it were a sure deal, it would be logical to spend the money on it; but in fact it's extremely uncertain, whereas the $50 being asked for by a group like Aravind Eye Hospital to directly fund a cataract operation relieves significant suffering with a probability of (close to) 1.
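The probability-weighted comparison being argued here can be made explicit. Everything in this sketch (the success probability, the "life multiplier", the costs) is an invented placeholder for the sake of argument, not a figure anyone in the thread endorses:

```python
# Illustrative expected-value comparison of cryonics vs. direct aid.
# All numbers are assumptions for the sake of argument.

def expected_lives_equivalent(cost_usd, p_success, life_multiplier):
    """Expected 'life-equivalents' gained per dollar, where life_multiplier
    says how many ordinary lifespans a successful outcome is worth."""
    return p_success * life_multiplier / cost_usd

# Cryonics: $50k, a hypothetical 1% chance of revival, revival worth 100 lifetimes.
cryonics = expected_lives_equivalent(cost_usd=50_000, p_success=1e-2,
                                     life_multiplier=100)

# Direct aid: $500 saves one life with near-certainty.
aid = expected_lives_equivalent(cost_usd=500, p_success=1.0,
                                life_multiplier=1)

print(cryonics)  # 2e-05 life-equivalents per dollar
print(aid)       # 0.002 life-equivalents per dollar
```

Under these made-up numbers direct aid wins by a factor of 100, but the answer flips if the revival probability or the "life multiplier" is set a couple of orders of magnitude higher, which is exactly the disagreement in this subthread.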
This is also an argument against going to movies, buying coffee, owning a car, or having a child. In fact, this is an argument against doing anything beyond living at the absolute minimum threshold of life, while donating the rest of your income to charity.
How can you say it's moral to value your own comfort as being worth more than 100-1000 other humans? They just did worse at the birth lottery, right?
It's not really an argument against those other things, although I do indeed try to avoid some luxuries, or to match the amount I spend on them with a donation to an effective aid organization.
What I think you've missed is that many of the items you mention are essential for me to continue having and being motivated in a job that pays me well -- well enough to make donations to aid organizations that accomplish far more than I could if I just took a plane to a place of extreme poverty and attempted to help using my own skills directly.
If there's a better way to help alleviate poverty than donating a percentage of my developed-world salary to effective charities every year, I haven't found it yet.
Ah, I see. So when you spend money on yourself, it's just to motivate yourself for more charitable labor. But when those weird cryonauts spend money on themselves, they're being selfish!
How wonderful to be you.
No, I'm arguing that it would be selfish for me to spend money on myself, if that money was on cryonics, where selfishness is defined as (a) spending an amount of money that could relieve a great amount of suffering, (b) on something that doesn't relate to retaining my ability to get a paycheck.
One weakness in this argument is that there could be a person who is so fearful of death that they can't live effectively without the comfort that signing up for cryonics gives them. In that circumstance, I couldn't use this criticism.
Cryonics is comparable to CPR or other emergency medical care, in that it gives you extra life after you might otherwise die. Of course it's selfish, in the sense that you're taking care of yourself first, to spend money on your medical care, but cryonics does relate to your ability to get a paycheck (after your revival).
To be consistent, are you reducing your medical expenses in other ways?
.. at a probability of (for the sake of argument) one in a million.
Do I participate in other examples of medical care that might save my life with probability one in a million (even if they don't cost any money)? No, not that I can think of.
Did you ever get any vaccination shots? Some of these are for diseases that have become quite rare.
That's true. I didn't spend my own money on them (I grew up in the UK), and they didn't cost very much in comparison, but I agree that it's a good example of a medical long shot.
Yep, the cost and especially the administrative hassles are, in comparison to the probability considerations, closer to the true reason I (for instance) am not signed up yet, in spite of seeing it as my best shot of insuring long life.
To be fair, vaccination is also a long shot in terms of frequency, but it is proven to work with close to certainty on any given patient. Cryonics is a long shot intrinsically.
But it might not be, if more were invested in researching it; and more might be invested if cryonics were already used on a precautionary basis in situations where it would also save money (e.g. death row inmates and terminal patients) and risk nothing of significance (since no outcome better than death can be expected).
In that sense it seems obviously rational to advocate cryonics as a method of assisted suicide, and only the "weirdness factor", religious-moralistic hangups and legislative inertia can explain the reluctance to adopt it more broadly.
like this: I value my subjective experience more than even hundreds of thousands of other similar-but-not-me subjective experiences.
additionally, your argument applies to generic goods you choose over saving people, not just cryonics.
Well, sure, but I asked how it could be moral, not how you can evade the question by deciding that you don't have any responsibilities to anyone.
what are morals? I have preferences. sometimes they coincide with other people's preferences and sometimes they conflict. when they conflict in socially unacceptable ways I seek ways to hide or downplay them.
I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.
First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)
Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, in which you'll otherwise probably be dead and gone, some entity will exist with which it is rational to identify to the point where you consider it, for the purposes of your present decisions, to be the same as your "normal" self that you expect to be alive tomorrow.
This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much. What I find to be the logical conclusion of these arguments is that the notion of personal identity is fundamentally a mere subjective feeling, where no objective or rational procedure can be used to determine the right answer. Therefore, if we accept these arguments, there is no reason at all to berate as irrational people who don't feel any identification with these entities that cryonics would (hopefully) make it possible to summon into existence in the future.
In particular, I personally can't bring myself to feel any identification whatsoever with some computer program that runs a simulation of my brain, no matter how accurate, and no matter how closely isomorphic its data structures might be to the state of my brain at any point in time. And believe me, I have studied all the arguments for the contrary position I could find here and elsewhere very carefully, and giving my utmost to eliminate any prejudice. (I am more ambivalent about my hypothetical thawed and nanotechnologically revived corpse.) Therefore, in at least some cases, I'm sure that people reject cryonics not because they're too biased to assess the arguments in favor of it, but because they honestly feel no identification with the future entities that it aims to produce -- and I don't see how this different subjective preference can be considered "irrational" in any way.
That said, I am fully aware that these and other anti-cryonics arguments are often used as mere rationalizations for people's strong instinctive reactions triggered by the weirdness/yuckiness heuristics. Still, they seem valid to me.
Well, they say that cryonics works whether you believe in it or not. Why not give it a try?
Well said.
I think this is true. Cryonics being the "correct choice" doesn't just depend on correct calculations and estimates (probability of a singularity, probability of revival, etc) and a high enough sanity waterline (not dismissing opportunities out of hand because they seem strange). Whether cryonics is the correct choice also depends upon your preferences. This fact seems to be largely missing from the discussion about cryonics. Perhaps because advocates can't imagine people not valuing life extension in this way.
I wouldn't pay 5 cents for a duplicate of me to exist. (Not for the sole sake of her existence, that is. If this duplicate could interact with me, or interact with my family immediately after my death, that would be a different story as I could delegate personal responsibilities to her.)
Roko:
It would probably depend on the exact nature of the evidence that would support this discovery. I allow for the possibility that some sorts of hypothetical experiences and insights that would have the result of convincing me that we live in a simulation would also have the effect of dramatically changing my intuitions about the question of personal identity. However, mere thought-experiment considerations of those I can imagine presently fail to produce any such change.
I also allow for the possibility that this is due to the limitations of my imagination and reasoning, perhaps caused by unidentified biases, and that actual exposure to some hypothetical (and presently counterfactual) evidence that I've already thought about could perhaps have a different effect on me than I presently expect it would.
For full disclosure, I should add that I see some deeper problems with the simulation argument that I don't think are addressed in a satisfactory manner in the treatments of the subject I've seen so far, but that's a whole different can of worms.
That would fall under the "evidence that I've already thought about" mentioned above. My intuitions would undoubtedly be shaken and moved, perhaps in directions that I presently can't even imagine. However, ultimately, I think I would be led to conclude that the whole concept of "oneself" is fundamentally incoherent, and that the inclination to hold any future entity or entities in special regard as "one's future self" is just a subjective whim. (See also my replies to kodos96 in this thread.)
Would it change your mind if that computer program [claimed to] strongly identify with you?
I'm not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?
Well right, obviously a program consisting of `printf("I am Vladmir_M")` wouldn't qualify... but a program which convincingly claimed to be you, i.e. had access to all your memories, intellect, inner thoughts etc., and claimed to be the same person as you.
No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it's me.
I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the "fading qualia" argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don't see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.
If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like "yourself", how would you react? Or do you not think that that could ever happen? (that would be a perfectly valid answer, I'm just curious what you think, since I've never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility)
For the robotic "me" -- though not for anyone else -- this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we're pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self.
Therefore, my answer would be that I don't know how exactly the subjective intuitions and convictions of the robotic "me" would develop from this point on. It may well be that he would end up feeling strongly as the true continuation of my person and rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable easiness of making other copies). However, I don't think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted false memories identical to my present ones.
Of course, I am aware that a similar argument can be applied to the "normal me" who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except that it's a matter of strong subjective preferences.
Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time?
Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.
While I understand why someone would see the upload as possibly not themselves (and I have strong sympathy with that position), I do find it genuinely puzzling that someone wouldn't identify their revived body as themselves. While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don't buy into that argument. If they don't, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit during a heart transplant, or someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.
Your point about weirdness signaling is a good one, and I'd expand on it slightly: For much of society, even thinking about weird things at a minimal level is a severe weirdness signal. So for many people, the possible utility of any random weird idea is likely to be so low that even putting in effort to think about it will almost certainly outweigh any benefit. And when one considers how many weird ideas are out there, the chance that any given one of them will turn out to be useful is very low. To use just a few examples, just how many religions are there? How many conspiracy theories? How many miracle cures? Indeed, the vast majority of these, almost all LW readers will never investigate for essentially this sort of utility heuristic.
JoshuaZ:
The problem here is one of continuum. We can easily imagine a continuum of procedures where on one end we have relatively small ones that intuitively appear to preserve the subject's identity (like sleep or anesthesia), and on the other end more radical ones that intuitively appear to end up destroying the original and creating a different person. By Buridan's principle, this situation implies that for anyone whose intuitions give different answers for the procedures at the opposite ends of the continuum, at least some procedures that lie in between will result in confused and indecisive intuitions. For me, cryonic revival seems to be such a point.
In any case, I honestly don't see any way to establish, as a matter of more than just subjective opinion, at which exact point in that continuum personal identity is no longer preserved.
This seems similar to something that I'll arbitrarily decide to call the 'argument from arbitrariness': every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I'd be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.
I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah
In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can't. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.
Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.
I don't mean to insult you (I'm trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You're admitting to false beliefs that you hold "because you evolved that way" rather than using reason to reconcile two intuitions that you "sort of follow" but which contradict each other.
Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can't be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one's hands but your own.
Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted "I think therefore I am" as one basis for a stable starting point.
But starting from that idea and a handful of others like "trust of our own memories as a sound basis for induction" we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time - one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our "initial epistemic conditions". The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing.
Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) will be very similar to your brain today in most but not all respects. To the degree that you make good use of your time now, your brain then is actually likely to implement someone more like your ideal self than even you yourself are right now... unless you have no actualized desire for self improvement. The only deep change between now and then is that you will have momentarily lost "continuity of awareness" in the middle because your brain will go into a repair and update mode that's not capable of sensing your environment or continuing to compute "continuity of awareness".
If your formal theory of reality started with Hume and broke down before reaching these conclusions then you are, from the perspective of pragmatic philosophy, still learning to crawl. This is basically the same thing as babies learning about object permanence except in a more abstract context.
Barring legitimate pragmatic issues like discount rates, your future self should be more important to you than your present self, unless you're mostly focused on your "contextual value" (the quality of your relationships and interactions with the broader world) and feel that your contextual value is high now and inevitably declining (or perhaps will be necessarily harmed by making plans for cryonics).
The real thing to which you should be paying attention (other than to make sure they don't stop working) is not the mechanisms by which mental content is stored, modified, and transmitted into the future. The thing you should be paying attention to is the quality of that content and how it functionally relates to the rest of the physical universe.
For the record, I don't have a cryonics policy either, but I regard this as a matter of a failure to conscientiously apply myself to executing on an issue that is obviously important. Once I realized the flaw in my character that lead to this state of affairs I began working to fix it, which is something that, for me, is still a work in progress.
Part of my work is analyzing the issue enough to have a strongly defensible, coherent, and pragmatic argument for cryonics, which I'll consider to have been fully resolved either (1) once I have an argument for not signing up that would be good enough for a person able to reason in a relatively universal manner, or (2) once I have a solid argument the other way which has led me and everyone I care about, including my family and close friends, to take the necessary steps and sign ourselves up.
When I set up a "drake equation for cryonics" and filled in the probabilities under optimistic (inside view) calculations I determined the value to be trillions of dollars. Under pessimistic assumptions (roughly, the outside view) I found that the expected value was epsilon and realized that my model was flawed because it didn't even have terms for negative value outcomes like "loss of value in 'some other context' because of cryonics/simulationist interactions".
So, pretty much, I regard the value of information here as being enormously large, and once I refine my models some more I expect to have a good idea as to what I really should do as a selfish matter of securing adequate health care for me and my family and friends. Then I will do it.
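The "drake equation for cryonics" described above can be sketched in a few lines: chain the success probabilities, multiply by the payoff, subtract the cost. Every number below is a placeholder I have invented, not a figure from the comment; the point is only how wildly the answer swings between inside-view and outside-view inputs:

```python
# Minimal "Drake equation for cryonics" sketch. All inputs are made up.

def cryonics_ev(probs, payoff_usd, cost_usd):
    """Expected value = (product of independent success probabilities) * payoff - cost."""
    p = 1.0
    for x in probs:  # e.g. P(good preservation), P(org survives), P(revival works)
        p *= x
    return p * payoff_usd - cost_usd

# Inside view: high odds at each step, astronomical payoff -> trillions of dollars.
inside_view = cryonics_ev([0.9, 0.8, 0.7], payoff_usd=1e13, cost_usd=50_000)

# Outside view: low odds, payoff of roughly one extra ordinary lifetime ->
# gross expected gain of about $50 ("epsilon"), swamped by the cost.
outside_view = cryonics_ev([0.1, 0.05, 0.001], payoff_usd=1e7, cost_usd=50_000)
```

Note that, as the comment observes, even this model is incomplete: it has no terms for negative-value outcomes, which would require subtracting further probability-weighted costs.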
I'm in the signing process right now, and I wanted to comment on the "work in progress" aspect of your statement. People think that signing up for cryonics is hard. That it takes work. I thought this myself up until a few weeks ago. This is stunningly NOT true.
The entire process is amazingly simple. You contact CI (or your preserver of choice) via their email address and express interest. They ask you for a few bits of info (name, address) and send you everything you need already printed and filled out. All you have to do is sign your name a few times and send it back. The process of getting life insurance was harder (and getting life insurance is trivially easy).
So yeah, the term "working on it" is not correctly applicable to this situation. Someone who's never climbed a flight of stairs may work out for months in preparation, but they really don't need to, and afterwards might be somewhat annoyed that no one who'd climbed stairs before had bothered to tell them so.
Literally the only hard part is the psychological effort of doing something considered so weird. The hardest part for me (and what had stopped me for two+ years previously) was telling my insurance agent when she asked "What's CI?" that it's a place that'll freeze me when I die. I failed to take into account that we have an incredibly tolerant society. People interact - on a daily basis - with other humans who believe in gods and energy crystals and alien visits and secret-muslim presidents without batting an eye. This was no different. It was like the first time you leap from the high diving board and don't die, and realize that you never would have.
The hard part (and why this is also a work in progress) involve secondary optimizations, the right amount of effort to put into them, and understanding whether these issues generalize to other parts of my life.
SilasBartas identified some of the practical financial details involved in setting up whole life versus term plus savings versus some other option. This is even more complex for me because I don't currently have health insurance and ideally would like to have a personal physician, health insurance, and retirement savings plan that are consistent with whatever cryonics situation I set up.
Secondarily, there are similarly complex social issues that come up because I'm married, love my family, am able to have philosophical conversations with them, and don't want to "succeed" at cryonics but then wake up for 1000 years of guilt that I didn't help my family "win" too. If they don't also win, when I could have helped them, then what kind of a daughter or sister would I be?
Finally, I've worked on a personal version of a "drake equation for cryonics" and it honestly wasn't a slam dunk economic decision when I took a pessimistic outside view of my model. So it would seem that more analysis here would be prudent, which would logically require some time to perform. If I had something solid I imagine that would help convince my family - given that they are generally rational in their own personal ways :-)
Finally, as a meta issue, there are issues around cognitive inertia in both the financial and the social arenas, so whatever decisions I make now may "stick" for the next forty years. Against this I weigh the issue of "the best being the enemy of the good", because (in point of fact) I'm not safe in any way at all right now... which is an obvious negative. In what places should I be willing to tolerate erroneous thinking and sloppy execution that fails to obtain the maximum lifetime benefit, and to what degree should I carry that "sloppiness calibration" over to the rest of my life?
So, yeah, it's a work in progress.
I'm pretty much not afraid of the social issues that you brought up. If people who disagree with me about the state of the world want to judge me, that's their problem up until they start trying to sanction me or spread malicious gossip that blocks other avenues of self improvement or success. The judgment of strangers who I'll never see again is mostly a practical issue and not that relevant compared to relationships that really matter, like those with my husband, nuclear family, friends, personal physician, and so on.
Back in 1999 I examined these issues. In 2004 I got to the point of having all the paperwork to sign and turn in with Alcor and Insurance, with all costs pre-specified. In each case I backed off because I calculated the costs and looked at my income and looked at the things I'd need to cut out of my life (and none of it was coffee from starbucks or philanthropy or other fluffy BS like that - it was more like the simple quality of my food and whether I'd be able to afford one bedroom vs half a bedroom) and they honestly didn't seem to be worth it. As I've gotten older and richer and more influential (and partly due to influence from this community) I've decided I should review the decision again.
The hard part for me is dotting the i's and crossing the t's (and trying to figure out where it's safe to skip some of these steps) while seeking to minimize future regrets and maximize positive outcomes.
You can't hold yourself responsible for their decisions. That way lies madness, or tyranny. If you respect them as free agents then you can't view yourself as the primary source for their actions.
It might be rational to do so under extreme enough circumstances. For example, if a loved one had to take pills every day to stay alive and had a tendency to accidentally forget them (or to believe new-agers who told them that the pills were just a Big Pharma conspiracy), it would be neither madness nor tyranny to do nearly anything to prevent that from happening.
The question is: to what degree is failing to sign up for cryonics like suicide by negligence?
Disagree. What's this trivially easy part? You can't buy it like you can buy mutual fund shares, where you just go online, transfer the money, and have at it. They make it so you have to talk to an actual human insurance agent, just to get quotes. (I understand you'll have to get a medical exam, but still...)
Of course, in fairness, I'm trying to combine it with "infinite banking" by getting a whole life policy, which has tax advantages. (I would think whole life would make more sense than term anyway, since you don't want to limit the policy to a specific term, risking that you'll die afterward and not be able to afford the preservation, when the take-off hasn't happened.)
Nope. Whole life is a colossal waste of money. If you buy term and invest the difference in the premiums (what you would be paying the insurance company if you bought whole life) you'll end up way ahead.
It occurs to me: are there legal issues with people contesting wills? I think that a life insurance policy with the cryonics provider listed as the beneficiary would be more difficult to fight.
Yes, I'm intimately familiar with the argument. And while I'm not committed to whole life, this particular point is extremely unpersuasive to me.
For one thing, the extra cost for whole life is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost.
That money goes into an account which you can withdraw from, or borrow from on much more favorable terms than any commercial loan. It also earns dividends and guaranteed interest tax-free.
If you "buy term and invest the difference", you either have to pay significant taxes on any gains (or even, in some cases, the principal) or lock the money up until you're ~60. The optimistic "long term" returns of the stock market have been shown to be a bit too optimistic, and given the volatility, you are being undercompensated. (Mutual whole life plans typically earned over 6% in '08, when stocks tanked.) You are also unlikely to earn the 12%/year they always pitch for mutual funds -- and especially not after taxes.
Furthermore, if the tax advantages of IRAs are reneged on (which, given developed countries' fiscal situations, is looking more likely every day), they'll most likely be hit before life insurance policies.
So yes, I'm aware of the argument, but there's a lot about the calculation that people miss.
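The trade-off being argued here can be sketched with toy numbers. Everything below is an illustrative assumption, not a quote from any insurer: the premiums, the rates of return, and the tax drag are all made up to show the shape of the calculation.

```python
# Illustrative comparison of "whole life" vs "buy term and invest the
# difference" over 30 years. All figures are made-up assumptions.
YEARS = 30

whole_life_premium = 2000      # assumed annual whole-life premium
term_premium = 400             # assumed annual term premium
difference = whole_life_premium - term_premium

whole_life_rate = 0.05         # assumed tax-free dividend/interest rate
market_rate = 0.08             # assumed pre-tax market return
tax_on_gains = 0.20            # assumed effective tax drag on gains
market_rate_after_tax = market_rate * (1 - tax_on_gains)

cash_value = 0.0               # accumulates inside the whole-life policy
side_fund = 0.0                # the invested premium difference

for _ in range(YEARS):
    cash_value = (cash_value + difference) * (1 + whole_life_rate)
    side_fund = (side_fund + difference) * (1 + market_rate_after_tax)

print(f"whole-life cash value after {YEARS}y: ${cash_value:,.0f}")
print(f"term + invest side fund after {YEARS}y: ${side_fund:,.0f}")
```

Under these particular assumptions "buy term and invest the difference" comes out ahead; tilting the return or tax assumptions the other way reverses the result, which is exactly the crux of this disagreement.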
I'm not finding this. Can you refer me to your trivially easy agency?
I used State Farm, because I've had car insurance with them since I could drive, and renters/owner's insurance since I moved out on my own. I had discounts both for multi-line and loyalty.
Yes, there is some interaction with a person involved. And you have to sit through some amount of sales-pitching. But ultimately it boils down to answering a few questions (2-3 minutes), signing a few papers (1-2 minutes), sitting through some process & pitching (30-40 minutes), and then having someone come to your house a few days later to take some blood and measurements (10-15 minutes). Everything else was done via mail/email/fax.
Heck, my agent had to do much more work than I did; before this she didn't know that you can designate someone other than yourself as the owner of the policy, which required some training.
I tried a State Farm guy, and he was nice enough, but he wanted a saliva sample (not blood) and could not tell me what it was for. He gave me an explicitly partial list but couldn't complete it for me. That was spooky. I don't want to do that.
Hi Jennifer. Perhaps I seem irrational because you haven't understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.
No doubt I explained myself poorly on the subject of the continuity of the self. I won't dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.
I don't understand why psychological continuity isn't enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don't see why that matters.)
Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don't care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time.
We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are.
Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.
Richard, you seem to have come to a quite logical conclusion about the difference between intrinsic values and instrumental values and what happens when an attempt is made to give a justification for intrinsic values at the level of values.
If a proposed intrinsic value is questioned and justified with another value statement, then the supposed "intrinsic value" is revealed to have really been instrumental. Alternatively, if no value is offered then the discussion will have necessarily moved out of the value domain into questions about the psychology or neurons or souls or evolutionary mechanisms or some other messy issue of "simple" fact. And you are quite right that these facts (by definition as "non value statements") will not be motivating.
We fundamentally like vanilla (if we do) "because we like vanilla" as a brute fact. De gustibus non est disputandum. Yay for the philosophy of values :-P
On the other hand... basically all humans, as a matter of fact, do share many preferences, not just for obvious things like foods that are sweet or salty or savory but also for really complicated high level things, like the respect of those with whom we regularly spend time, the ability to contribute to things larger than ourselves, listening to beautiful music, and enjoyment of situations that create "flow" where moderately challenging tasks with instantaneous feedback can be worked on without distraction, and so on.
As a matter of simple observation, you must have noticed that there exist some things which it gives you pleasure to experience. To say that "I don't care what I will experience tomorrow" can be interpreted as a prediction that "Tomorrow, despite being conscious, I will not experience anything which affects my emotions, preferences, feelings, or inclinations in either positive or negative directions". This statement is either bluntly false (my favored hypothesis), or else you are experiencing a shocking level of anhedonia for which you should seek professional help if you want to live very much longer (which of course you might not if you're really experiencing anhedonia), or else you are a non human intelligence and I have to start from scratch trying to figure you out.
Taking it as granted that you and I can both safely predict that you will continue to enjoy life tomorrow... then an inductive proof can be developed that "unless something important changes from one day to the next" you will continue to have a stake in the day after that, and the day after that, and so on. When people normally discuss cryonics and long term values it is the "something important changing" issue that they bring up.
For example, many people think that they only care about their children... until they start seeing their grandchildren as real human beings whose happiness they have a stake in, and in whose lives they might be productively involved.
Other people can't (yet) imagine not falling prey to senescence, and legitimately think that death might be preferable to a life filled with pain which imposes costs (and no real benefits) on their loved ones who would care for them. In this case the critical insight is that not just death but also physical decline can be thought of as a potentially treatable condition and so we can stipulate not just vastly extended life but vastly extended youth.
But you are not making any of these points so that they can even be objected to by myself or others... You're deploying the kind of arguments I would expect from an undergrad philosophy major engaged in motivated cognition because you have not yet "learned how to lose an argument gracefully and become smarter by doing so".
And it is for this reason that I stand by the conclusion that in some cases beliefs about cryonics say very much about the level of pragmatic philosophical sophistication (or "rationality") that a person has cultivated up to the point when they stake out one of the more "normal" anti-cryonics positions. In your case, you are failing in a way I find particularly tragic, because normal people raise much better objections than you are raising - issues that really address the meat of the matter. You, on the other hand, are raising little more than philosophical confusion in defense of your position :-(
Again, I intend these statements only in the hope that they help you and/or audiences who may be silently identifying with your position. Most people make bad arguments sometimes and that doesn't make them bad people - in fact, it helps them get stronger and learn more. You are a good and valuable person even if you have made comments here that reveal less depth of thinking than might be hypothetically possible.
That you are persisting in your position is a good sign, because you're clearly already pretty deep into the cultivation of rationality (your arguments clearly borrow a lot from previous study) to the point that you may harm yourself if you don't push through to the point where your rationality starts paying dividends. Continued discussion is good practice for this.
On the other hand, I have limited time and limited resources and I can't afford to spend any more on this line of conversation. I wish you good luck on your journey, perhaps one day in the very far future we will meet again for conversation, and memory of this interaction will provide a bit of amusement at how hopelessly naive we both were in our misspent "childhood" :-)
Why is psychological continuity important? (I can see that it's very important for an identity to have psychological continuity, but I don't see the intrinsic value of an identity existing if it is promised to have psychological continuity.)
In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don't sense any responsibility to care about a future self that needn't exist. On the contrary, if this person has no effect on anything that matters to me, I'd rather be free of being responsible for this future self.
In the case of cryonics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don't have any goals that project further than the lifespan of my children. This might be somewhat unique, and it is the result of recent changes in philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone.
Less unique, I think, though perhaps not generally realized, is the fact that I don't feel any special attachment to my memories, thoughts, viewpoints and values. What if a person woke up to discover that the last days were a dream and they actually had a different identity? I think they wouldn't be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.
I've just read this article by Ben Best (President of CI): http://www.benbest.com/philo/doubles.html
He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn't seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.
This only makes sense given large fixed costs of cryonics (but you can just not make it publicly known that you've signed up for a policy, and the hassle of setting one up is small compared to other health and fitness activities) and extreme (dubious) confidence in quick technological advance, given that we're talking about insurance policies.
Note that I did not make any arguments against the technological feasibility of cryonics, because they all suck. Likewise, and I'm going to be blunt here, all arguments against the feasibility of a singularity that I've seen also suck. Taking into account structural uncertainty around nebulous concepts like identity, subjective experience, measure, et cetera, does not lead to any different predictions around whether or not a singularity will occur (but it probably does have strong implications on what type of singularity will occur!). I mean, yes, I'm probably in a Fun Theory universe and the world is full of decision theoretic zombies, but this doesn't change whether or not an AGI in such a universe looking at its source code can go FOOM.
Will, the singularity argument above relies on not just the likely long-term feasibility of a singularity, but the near-certainty of one VERY soon, so soon that fixed costs like the inconvenience of spending a few hours signing up for cryonics defeat the insurance value. Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity.
With reasonable fixed costs, that means something like assigning 95%+ probability to a singularity in less than five years. Unless one has incredible private info (e.g. working on a secret government project with a functional human-level AI) that would require an insane prior.
My understanding was in policies like Roko was describing you're not paying year by year, you're paying for a lifetime thing where in the early years you're mostly paying for the rate not to go up in later years. Is this inaccurate? If it's year by year, $1/day seems expensive on a per life basis given that the population-wide rate of death is something like 1 in 1000 for young people, probably much less for LWers and much less still if you only count the ones leaving preservable brains.
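The per-life arithmetic in the comment above can be made explicit. The death rate is the commenter's own rough figure; the premium and the preservable-brain fraction are illustrative assumptions.

```python
# Back-of-the-envelope check of the ~$1/day figure against a young
# person's annual death risk. Numbers are illustrative assumptions.
annual_premium = 365.0     # ~$1/day, as discussed above
p_death = 1 / 1000         # assumed annual death rate for a young person
p_preservable = 0.5        # assumed fraction of deaths leaving a preservable brain

# Probability in a given year that the policy pays out in a way
# that actually matters for cryonics:
p_useful_payout = p_death * p_preservable

# Implied cost per preservable death covered in a single year:
cost_per_covered_case = annual_premium / p_useful_payout
print(f"p(useful payout)/yr: {p_useful_payout:.4%}")
print(f"implied cost per covered case: ${cost_per_covered_case:,.0f}")
```

On these assumptions the year-by-year framing implies roughly $730,000 per covered case, which is why the commenter calls $1/day expensive for young people if the premium really is pay-as-you-go rather than front-loaded.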
I never argued that this objection alone is enough to tip the scales in favor of not signing up. It is mostly this argument combined with the idea that loss of measure on the order of 5-50% really isn't all that important when you're talking about multiverse-affecting technologies; no, really, I'm not sure 5% of my measure is worth having to give up half a Hershey's bar every day, when we're talking crazy post-singularity decision theoretic scenarios from one of Escher's worst nightmares. This is even more salient if those Hershey bars (or airport parking tickets or shoes or whatever) end up helping me increase the chance of getting access to infinite computational power.
Wut. Is this a quantum immortality thing?
No, unfortunately, much more complicated and much more fuzzy. Unfortunately it's a Pascalian thing. Basically, if post-singularity (or pre-singularity if I got insanely lucky for some reason - in which case this point becomes a lot more feasible) I get access to infinite computing power, it doesn't matter how much of my measure gets through, because I'll be able to take over any 'branches' I could have been able to reach with my measure otherwise. This relies on some horribly twisted ideas in cosmology / game theory / decision theory that will, once again, not fit in the margin. Outside view, there's over a 99% chance these ideas are totally wrong, or 'not even wrong'.
How serious 0-10, and what's a decision theoretic zombie?
A being that has so little decision theoretic measure across the multiverse as to be nearly nonexistent due to a proportionally infinitesimal amount of observer-moment-like-things. However, the being may have very high information theoretic measure to compensate. (I currently have an idea that Steve thinks is incorrect arguing for information theoretic measure to correlate roughly to the reciprocal of decision theoretic measure, which itself is very well-correlated with Eliezer's idea of optimization power. This is all probably stupid and wrong but it's interesting to play with the implications (like literally intelligent rocks, me [Will] being ontologically fundamental, et cetera).)
I'm going to say I'm an 8 out of 10 serious that things will turn out to really probably not add up to 'normality', whatever your average rationalist thinks 'normality' is. Some of the implications of decision theory really are legitimately weird.
What do you mean by decision theoretic and information theoretic measure? You don't come across as ontologically fundamental IRL.
Hm, I was hoping to magically get at the same concepts you had cached but it seems like I failed. (Agent) computations that have lower Kolmogorov complexity have greater information theoretic measure in my twisted model of multiverse existence. Decision theoretic measure is something like the significantness you told me to talk to Steve Rayhawk about: the idea that one shouldn't care about events one has no control over, combined with the (my own?) idea that having oneself cared about by a lot of agent-computations and thus made more salient to more decisions is another completely viable way of increasing one's measure. Throw in a judicious mix of anthropic reasoning, optimization power, ontology of agency, infinite computing power in finite time, 'probability as preference', and a bunch of other mumbo jumbo, and you start getting some interesting ideas in decision theory. Is this not enough to hint at the conceptspace I'm trying to convey?
"You don't come across as ontologically fundamental IRL." Ha, I was kind of trolling there, but something along the lines of 'I find myself as me because I am part of the computation that has the greatest proportional measure across the multiverse'. It's one of many possible explanations I toy with as to why I exist. Decision theory really does give one the tools to blow one's philosophical foot off. I don't take any of my ideas too seriously, but collectively, I feel like they're representative of a confusion that not only I have.
If you were really the only non-zombie in a Fun Theory universe then you would be the AGI going FOOM. What could be funner than that?
Yeah, that seems like a necessary plot point, but I think it'd be more fun to have a challenge first. I feel like the main character(s) should experience the human condition or whatever before they get a taste of true power, or else they'd be corrupted. First they gotta find something to protect. A classic story of humble beginnings.
Agreed. Funnest scenario is experiencing the human condition, then being the first upload to go FOOM. The psychological mind games of a transcending human. Understanding fully the triviality of human emotions that once defined you, while at the same moment modifying your own soul in an attempt to grasp onto your lingering sanity, knowing full well that the fate of the universe and billions of lives rests on the balance. Sounds like a hell of a rollercoaster.
Not necessarily. Someone may for example put a very high confidence in an upcoming technological singularity but put a very low confidence on some other technologies. To use one obvious example, it is easy to see how someone would estimate the chance of a singularity in the near future to be much higher than the chance that we will have room temperature superconductors. And you could easily assign a high confidence to one estimate for one technology and not a high confidence in your estimate for another. (Thus for example, a solid state physicist might be much more confident in their estimate for the superconductors). I'm not sure what estimates one would use to reach this class of conclusion with cryonics and the singularity, but at first glance this is a consistent approach.
Right, but if it fits minimal logical consistency it means that there's some thinking that needs to go on. And having slept on this I can now give other plausible scenarios for someone to have this sort of position. If, for example, someone puts a high probability on a coming singularity, but a low probability that effective nanotech will ever be good enough to restore brain function. For example, if you believe that the vitrification procedure damages neurons in a fashion that is likely to permanently erase memory, then this sort of attitude would make sense.
Reason #7 not to sign up: There is a significant chance that you will suffer information-theoretic death before your brain can be subjected to the preservation process. Your brain could be destroyed by whatever it is that causes you to die (such as a head injury or massive stroke) or you could succumb to age-related dementia before the rest of your body stops functioning.
In regards to dementia, it isn't at all clear that it will necessarily lead to information-theoretic death. We don't have a good enough understanding of dementia to know whether the information is genuinely lost or just difficult to recover. The fact that people with many forms of dementia have more or less lucid periods, times when they can remember who people are and other times when they cannot, is tentative evidence that the information is recoverable.
Also, this isn't that strong an argument. It isn't going to alter whether or not it makes sense to sign up by more than an order of magnitude at the very most (relying on the chance of violent death and the chance that one will have dementia late in life).
Reason #6 not to sign up: Cryonics is not compatible with organ donation. If you get frozen, you can't be an organ donor.
Alternatively, that's a good reason not to sign up for organ donation. Organ donation won't increase my well-being or happiness any, while cryonics might.
In addition, there's the problem that being an organ donor creates perverse incentives for your death.
You get no happiness knowing there is a decent chance your death could save the lives of others?
Would you turn down a donated organ if you needed one?
It's a nice thought, I guess, but I'd rather not die in the first place. And any happiness I might get from that is balanced out by the risks of organ donation: cryonic preservation becomes slightly less likely, and my death becomes slightly more likely (perverse incentives). If people benefit from my death, they have less of an incentive to make sure I don't die.
No. But I'd vote to make post-death organ donation illegal, and I'd encourage people not to donate their organs after they die. (I don't see a problem with donating a kidney while you're still alive.)
Well I understand that you will be so much more happy if you avoid death for the foreseeable future that cryonics outweighs organ donation. I'm just saying that the happiness from organ donation can't be zero.
The incentives seem to me so tiny as to be a laughable concern. I presume you're talking about doctors not treating you as effectively because they want your organs? Do you have this argument further developed elsewhere? It seems to me a doctor's aversion to letting someone die, fear of malpractice lawsuits and ethics boards are more than sufficient to counter whatever benefit they would get from your organs (which would be what precisely?). Like I would be more worried about the doctors not liking me or thinking I was weird because I wanted to be frozen and not working as hard to save me because of that. (ETA: If you're right there should be studies saying as much.)
It seems to me legislation to punish defectors in this cooperative action problem would make sense. Organ donors should go to the top of the implant lists if they don't already. Am I right that appealing to your sense of justice regarding your defection would be a waste of time?
If your arguments are right I can see how it would be a bad individual choice to be a organ donor (at least if you were signed up for cryonics). But those arguments don't at all entail that banning post-death organ donation would be the best public policy, especially since very few people will sign up for cryonics in the near future. Do you think that the perverse incentives lead to more deaths than the organs save?
And from a public interest perspective an organ donor is more valuable than a frozen head. It might be in the public interest to have some representatives from our generation in the future, but there is a huge economic cost to losing 20 years of work from an experienced and trained employee-- a cost which is mitigated little by the economic value of a revived cryonics patient who would likely have no marketable skills for his time period. So the social benefit of people signing up for cryonics diminishes rapidly.
There was a short discussion previously about how cryonics is most useful in cases of degenerative diseases, whereas organ donation is most successful in cases of quick deaths such as due to car accidents; which is to say that cryonics and organ donation are not necessarily mutually exclusive preparations because they may emerge from mutually exclusive deaths.
Though maybe not, which is why I had asked about organ donation in the first place.
Is that true in general, or only for organizations that insist on full-body cryo?
AFAICT (from reading a few cryonics websites), it seems to be true in general, but the circumstances under which your brain can be successfully cryopreserved tend to be ones that make you not suitable for being an organ donor anyway.
Could you elaborate on that? Is cryonic suspension inherently incompatible with organ donation, even when you are going with the neuro option or does the incompatibility stem from current obscurity of cryonics? I imagine that organ harvesting could be combined with early stages of cryonic suspension if the latter was more widely practiced.
The cause of death of people suitable to be organ donors is usually head trauma.
Reason #5 to not sign up: Because life sucks.
Huh, I think I may have messed up, because (whether I should admit it or not is unclear to me) I was thinking of you specifically when I wrote the second half of reason 4. Did I not adequately describe your position there?
You came pretty close.
Thus triggering the common irrational inference, "If something is attacked with many spurious arguments, especially by religious people, it is probably true."
(It is probably more subtle than this - When you make argument A against X, people listen just until they think they've matched your argument to some other argument B they've heard against X. The more often they've heard B, the faster they are to infer A = B.)
Um, isn't the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or at least, that none has been discovered yet)?
I do agree with the second part of your post about argument matching, though. The problem becomes even more serious when it is often not an argument against X from someone who takes the position, but a strawman argument they have been taught by others for the specific purposes of matching up more sophisticated arguments to.
Yes. This is discussed well in the comments on What Evidence Filtered Evidence?.
No, because that assumes that the desire to argue about a proposition is the same among rational and insane people. The situation I observe is just the opposite: There are a large number of propositions and topics that most people are agnostic about or aren't even interested in, but that religious people spend tremendous effort arguing for (circumcision, defense of Israel) or against (evolution, life extension, abortion, condoms, cryonics, artificial intelligence).
This isn't confined to religion; it's a general principle that when some group of people has an extreme viewpoint, they will A) attract lots of people with poor reasoning skills, B) take positions on otherwise non-controversial issues based on incorrect beliefs, and C) spend lots of time arguing against things that nobody else spends time arguing against, using arguments based on the very flaws in their beliefs that make them outliers to begin with.
Therefore, there is a large class of controversial issues on which one side has been argued almost exclusively by people whose reasoning is especially corrupt on that particular issue.
I don't think many religious people spend "tremendous effort" arguing against life extension, cryonics or artificial intelligence. For the vast majority of the population, whether religious or not, these issues simply aren't prominent enough to think about. To be sure, when religious individuals do think about these, they more often than not seem to come down on the against side (look, for example, at computer scientist David Gelernter arguing against the possibility of AI). And that may be explainable by general tendencies in religion (especially the level at which religion promotes cached thoughts about the soul and the value of death).
But even that is only true to a limited extent. For example, consider the case of life extension: if we look at Judaism, some Orthodox ethicists have taken very positive views about life extension. Indeed, my impression is that the Orthodox are more likely to favor life extension than non-Orthodox Jews. My tentative hypothesis for this is that Orthodox Judaism places a very high value on human life and downplays the afterlife, at least compared to Christianity and Islam. (Some specific strains of Orthodoxy, such as some chassidic sects, do emphasize the afterlife a bit more.) However, Conservative and Reform Judaism have been more directly influenced by Christian values and have therefore picked up a stronger connection to Christian values and cached thoughts about death.
I don't think, however, that this issue can be exclusively explained by Christianity, since I've encountered Muslims, neopagans, Buddhists and Hindus who have similar attitudes. (The neopagans all grew up in Christian cultures, so one could say that they were being influenced by that, but that doesn't hold much weight given how much neopaganism seems to be a reaction against Christianity.)
All I mean to say is this: Suppose you say, "100 people have made arguments against proposition X, and all of them were bad arguments; therefore the probability of finding a good argument against X is some (monotonic increasing) function of 1/100."
If X is a proposition that is particularly important to people in cult C because they believe something very strange related to X, and 90 of those 100 arguments were made by people in cult C, then you should believe that the probability of finding a good argument against X is a function of something between 1/10 and 1/100.
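The discounting step described here can be spelled out with the counts used above. Treating all correlated arguments from one motivated group as roughly a single independent source is itself a simplifying assumption, which is why the comment gives a range rather than a point estimate.

```python
# Sketch of the adjustment described above: if most of the bad arguments
# against X come from one motivated group, the "100 bad arguments" should
# be discounted toward the number of independent sources.
total_bad_arguments = 100
from_motivated_group = 90

# Naive update treats every argument as an independent draw:
naive_effective_n = total_bad_arguments

# Correlated arguments from one group count as (roughly) one source,
# so the effective count lies somewhere between 10 and 100:
independent_n = total_bad_arguments - from_motivated_group + 1

print(f"naive effective sample size: {naive_effective_n}")
print(f"discounted effective sample size: {independent_n}")
```

So the probability of a good counterargument existing should be updated as if you had seen on the order of 10 independent failed attempts, not 100.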
This problem is endemic in the affirmative atheism community. It's a sort of Imaginary Positions error.