Getting back down to earth: there has been renewed interest in medical circles in the potential of induced hibernation for short-term suspended animation. The nice, trustworthy doctors in lab coats, the ones who get interviewed on TV, are all reassuringly behind this, so it will be smoothly brought into the mainstream, and Joe the Plumber can't wait to get "frozed-up" at the hospital so he can tell all his buddies about it.
Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as "hibernation for a long time."
Hibernation will likely become a common "last resort" for many critical cases (instead of letting them die, you freeze 'em until you've gone over their chart one more time, talked to some colleagues, called around to see if anyone has a spare kidney, or at least slept on it). When your loved one is in the fridge and you're being told that there's nothing left to do, that they'll have to be thawed out and allowed to die, your next question is going to be: "Can we leave them in the fridge a bit longer?"
Hibernation will sell people on the idea that fridges sa...
Maybe it's a point against investing directly in cryonics as it exists today, and in favor of working more through the indirect approach that is most likely to lead to good cryonics sooner. I'm much, much more interested in being preserved before I'm brain-dead.
I'm looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There's the genetic approach, and the hydrogen sulfide approach.
...by the way, the comment threads on the TED website could use a few more rationalists... lots of smart people there thinking with the wrong body parts.
An interesting comparison I mentioned previously: the cost to Alcor of preserving one human (full-body) is $150,000. The recent full annual budget of SIAI is on the order of (edit:) $500,000.
Hi all,
I was completely wrong in my budget estimate, and I apologize. I wasn't including the Summit, and I was just estimating the cost from my understanding of salaries plus miscellaneous expenses. I should have checked Guidestar. My view of the budget also seems to have been slightly skewed because I frequently check the SIAI Paypal account, which many people use to donate, but I never see the incoming checks, which are rarer but sometimes make up a large portion of total donations. My underestimate of money coming in contributed to my underestimate of money going out.
Again, I'm sorry, I was not lying, just a little confused and a few years out of date on my estimate. I will search over my blog to modify any incorrect numbers I can find.
I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.
First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)
Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, i...
I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah
In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can't. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.
Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.
Hi Jennifer. Perhaps I seem irrational because you haven't understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.
No doubt I explained myself poorly on the subject of the continuity of the self. I won't dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.
I think cryonics is used as a rationality test because most people reason about it from within the mental category "weird far-future stuff". The arguments in the post seem like appropriate justifications for choices within that category. The rationality test is whether you can compensate for your anti-weirdness bias and realize that cryonics is actually a more logical fit for the mental category "health care".
This post, like many others around this theme, revolves around the rationality of cryonics from the subjective standpoint of a potential cryopatient, and it seems to assume a certain set of circumstances for that patient: relatively young, healthy, functional in society.
I've been wondering for a while about the rationality of cryonics from a societal standpoint, as applied to potential cryopatients in significantly different circumstances; two categories specifically stand out, death row inmates and terminal patients.
This article puts the extra cost of a death row inmate (over serving a life sentence) at $90,000. This is a case where we already allow that society may drastically curtail an individual's right to control their own destiny. It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.
As for terminal patients, this article says:
...Aggressive treatments attempting to prolong life in terminally ill people typically continue far too long. Reflecting this overaggressive end-of-life treatment, the Health Care Finance Administration reported that abou
It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.
Hm, I don't think that works -- the extra cost is from the stronger degree of evidence and exhaustive appeals process required before the inmate is killed, right? If you want to suspend the inmate before those appeals then you've curtailed their right to put together a strong defence against being killed, and if you want to suspend the inmate after those appeals then you haven't actually saved any of that money.
.. or did I miss something?
This comment is a more fleshed-out response to VladimirM’s comment.
This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much.
Whether cryonics is the right choice depends on your values. There are suggestions that people who don't think they value revival in the distant future are misled about their real values. I think it might be the complete opposite: cryonics advocacy may completely miss what it is that people value about their lives.
The reason for this mistake could be that cryonics is such a new idea that we are culturally a step or two behind in identifying what it is that we value about existence. So people think about cryonics a while and just conclude they don’t want to do it. (For example, the stories herein.) Why? We call this a ‘weirdness’ or ‘creep’ factor, but we haven’t identified the reason.
When someone values their life, what is it that they value? When we worry about dying, we wor...
Reason #6 not to sign up: Cryonics is not compatible with organ donation. If you get frozen, you can't be an organ donor.
The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong.
Thus triggering the common irrational inference, "If something is attacked with many spurious arguments, especially by religious people, it is probably true."
(It is probably more subtle than this - When you make argument A against X, people listen just until they think they've matched your argument to some other argument B they've heard against X. The more often they've heard B, the faster they are to infer A = B.)
Um, isn't the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or at least, has currently been discovered?)
I do agree with the second part of your post about argument matching, though. The problem becomes even more serious when what gets matched is often not an argument against X from someone who actually holds that position, but a strawman argument they have been taught by others for the specific purpose of matching more sophisticated arguments to it.
I told Kenneth Storey, who studies various animals that can be frozen and thawed, about a new $60M government initiative (mentioned in Wired) to find ways of storing cells that don't destroy their RNA. He mentioned that he's now studying the Gray Mouse Lemur, which can go into a low-metabolism state at room temperature.
If the goal is to keep you alive for about 10 years while someone develops a cure for what you have, then this room-temperature low-metabolism hibernation may be easier than cryonics.
(Natural cryonics, BTW, is very different from liquid-nit...
I object to many of your points, though I express slight agreement with your main thesis (that cryonics is not rational all of the time).
"Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are."
This argument basically reduces to, once you remove t...
I don't know if this is a self-defense mechanism or actually related to the motives of those promoting cryonics in this group, but I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement. If the intent is to remind me that things I do may later turn out to be not just wrong, but extremely wrong, it works pretty well.
It's a good topic to explore agreement theory, as different declared-intended-rationalists have different conclusions, and can talk somewhat dispassionately about such disagreement...
I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement.
I hadn't thought of this, but if so, it's dangerous rhetoric and just begging to be misunderstood.
On a side note, speaking of "abnormal" and cryonics, apparently Britney Spears wants to sign up with Alcor: http://www.thaindian.com/newsportal/entertainment/britney-spears-wants-to-be-frozen-after-death_100369339.html
I think this can be filed under "any publicity is good publicity".
It's not obvious that this would be good: it could very well make existential risks research appear less credible to the relevant people (current or future scientists).
I'm not sure if this is the right place to ask this, or even if it is possible to procure the relevant data, but who is the highest-status person who has opted for cryonics? The wealthiest, or the most famous?
Having high status persons adopt cryonics can be a huge boost to the cause, right?
Probably my biggest concern with cryonics is that if I was to die at my age (25), it would probably be in a way where I would be highly unlikely to be preserved before a large amount of decay had already occurred. If there was a law in this country (Australia) mandating immediate cryopreservation of the head for those contracted, I'd be much more interested.
Agreed. On the other hand, in order to get laws into effect it may be necessary to first have sufficient numbers of people signed up for cryonics. In that sense, signing up for cryonics might not only save your life, it might spur changes that will allow others to be preserved better (faster), potentially saving more lives.
I get the feeling that this discussion [on various threads] is fast becoming motivated cognition aiming to reach a conclusion that will reduce social tension between people who want to sign up for cryo and people who don't. I.e. "Surely there's some contrived way we can leverage our uncertainties so that you can not sign up and still be defensibly rational, and sign up and be defensibly rational".
E.g., no interest in reaching agreement on cryo success probabilities, when this seems like an absolutely crucial consideration. Is this indicative of people who genuinely want to get to the truth of the matter?
EDIT: Nick Tarleton makes a good point in reply to this comment, which I have moved to be footnote 2 in the text.
As yet another media reference: I just rewatched the Star Trek TNG episode 'The Neutral Zone', which deals with the recovery of three frozen humans from our time. It was really surprising to me how much disregard for human life is shown in this episode: "Why did you recover them? They were already dead." "Oh bugger, now that you've revived/healed them, we have to treat them as humans." Also surprising is how much insensitivity is shown in dealing with them. When you wake someone from an earlier time, you might send the aliens and the robots out of the room.
Question for the advocates of cryonics: I have heard talk in the news and various places that organ donor organizations are talking about giving priority to people who have signed up to donate their organs. That is to say, if you sign up to be an organ donor, you are more likely to receive a donated organ from someone else should you need one. There is some logic in that in the absence of a market in organs; free riders have their priority reduced.
I have no idea if such an idea is politically feasible (and, let me be clear, I don't advocate it), however, w...
Thanks for this post. I tend to lurk, and I had some similar questions about the LW enthusiasm for cryo.
Here's something that puzzles me. Many people here, it seems to me, have the following preference order:
pay for my cryo > donation: x-risk reduction (through SIAI, FHI, or SENS) > paying for cryo for others
Of course, for the utilitarians among us, the question arises: why pay for my cryo over risk reduction? (If you just care about others way less than you care about yourself, fine.) Some answer by arguing that paying for your own cryo maximize...
If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying
This only makes sense given large fixed costs of cryonics (but you can just not make it publicly known that you've signed up for a policy, and the hassle of setting one up is small compared to other health and fitness activities) and extreme (dubious) confidence in quick technological advance, given that we're talking about insurance policies.
Not signing up for cryonics is a rationality error on my part. What stops me is an irrational impulse I can't defeat: I seem to subconsciously value "being normal" more than winning in this particular game. It is similar to byrnema's situation with religion a while ago. That said, I don't think any of the enumerated arguments against cryonics actually work. All such posts feel like they're writing the bottom line in advance.
Quite embarrassingly, my immediate reaction was 'What? Trying to be normal? That doesn't make sense. Europeans can't be normal anyway.' I am entirely unsure as to what cognitive process managed to create that gem of an observation.
I feel that Americans are more "professional": they can perform a more complete context-switch into the job they have to do and the rules they have to follow. In contrast, a Russian at work is usually the same slacker self as the Russian at home, or sometimes the same unbalanced work-obsessed self.
I'm new here, but I think I've been lurking since the start of the (latest, anyway) cryonics debate.
I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice -- it was more people claiming that believing that cryonics is obviously the incorrect choice is irrational. And even that is perhaps too strong a claim -- I think the debate was more centred on the probability of cyronics working, rather than the utility of it.
"If you don't sign up your kids for cryonics then you are a lousy parent." - E.Y.
Surely you aren't implying that a desire to prolong one's lifespan can only be motivated by fear.
I don't like long-term cryonics, for the following reasons: 1) If an unmodified Violet were revived, she would not be happy in the far future. 2) If a sufficiently modified Violet were revived, she would not be me. 3) I don't place a large value on there being a "Violet" in the far future. 4) There is a risk that my values and the values of whoever wakes Violet up would be incompatible, and avoiding the possible "fixing" of my brain is a very high priority. 5) Thus I don't want to be revived by the far future, and death without cryonics seems a safe way to ensure that.
Just noting that buried in the comments Will has stated that he thinks the probability that cryo will actually save your life is one in a million -- 10^-6 -- (with some confusion surrounding the technicalities of how to actually assign that and deal with structural uncertainty).
I think that we need to iron out a consensus probability before this discussion continues.
Edit: especially since if this probability is correct, then the post no longer makes sense...
This post seems to focus too much on Singularity related issues as alternative arguments. Thus, one might think that if one assigns the Singularity a low probability one should definitely take cryonics. I'm going to therefore suggest a few arguments against cryonics that may be relevant:
First, there are other serious existential threats to humans. Many don't even arise from our technology. Large asteroids would be an obvious example. Gamma ray bursts and nearby stars going supernova are other risks. (Betelgeuse is a likely candidate for a nearby supernova...
It would be interesting to see a more thorough analysis of whether the "rational" objections to cryo actually work.
For example, the idea that money is better spent donated to some x-risk org than to your own preservation deserves closer scrutiny. Consider that cryo is cheap ($1 a day) for the young, and that getting cryo to go mainstream would be a strong win as far as existential risk reduction is concerned (because then the public at large would have a reason to care about the future) and as far as rationality is concerned.
Here's another possible objection to cryonics:
If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.
"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:
Suppose the Singularity d...
Good post. People focus only on the monetary cost of cryonics, but my impression is there are also substantial costs from hassle and perceived weirdness.
One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.
Harder or not, which is actually right? This is not about signaling one's ability to do the harder thing.
The reasons you listed are not ones moving most people to not sign up for cryonics. Most people, as you mention at the beginning, simply don't take the possibility seriously enough to even consider it in detail.
I think cryonics is a great idea and should be part of health care. However, $50,000 is a lot of money to me, and I'm reluctant to spend money on life insurance, which, except in the case of cryonics, is almost always a bad bet.
I would like my brain to be vitrified if I am dead, but I would prefer not to pay $50,000 for cryonics in the universes where I live forever, die to existential catastrophe, or where cryonics just doesn't work.
What if I specify in my (currently non-existent) cryonics optimized living will that up to $100,000 from my estate is to be used to pay for cryonics? It's not nearly as secure as a real cryonics contract, but it has the benefit of not costing $50,000.
I'm surprised that you didn't bring up what I find to be a fairly obvious problem with cryonics: what if nobody feels like thawing you out? Of course, not having followed this dialogue, I'm probably missing some equally obvious counter to this argument.
Hi, I'm pretty new here too. I hope I'm not repeating an old argument, but suspect I am; feel free to answer with a pointer instead of a direct rebuttal.
I'm surprised that no-one's mentioned the cost of cryonics in relation to the reduction in net human suffering that could come from spending the money on poverty relief instead. For (say) USD $50k, I could save around 100 lives ($500/life is a current rough estimate at lifesaving aid for people in extreme poverty), or could dramatically increase the quality of life of 1000 people (for example, cataract o...
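The comparison above can be made concrete with a quick back-of-the-envelope calculation (both figures are the commenter's rough estimates, not authoritative data):

```python
# Rough cost-effectiveness arithmetic from the comment above.
# Both figures are the commenter's rough estimates, not authoritative data.
cryonics_cost = 50_000   # USD, quoted cost of a cryonics arrangement
cost_per_life = 500      # USD, rough estimate of lifesaving aid per person

lives_saved = cryonics_cost // cost_per_life
print(lives_saved)  # prints 100
```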
Another argument against cryonics is just that it's relatively unlikely to work (= lead to your happy revival) since it requires several things to go right. Robin's net present value calculation of the expected benefits of cryonic preservation isn't all that different from the cost of cryonics. With slightly different estimates for some of the numbers, it would be easy to end up with an expected benefit that's less than the cost.
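A toy version of the expected-value comparison described here might look like the sketch below. All numbers are illustrative assumptions, not Robin's actual figures; the point is only that modest changes in the inputs flip the sign of the result.

```python
# Toy expected-value comparison for cryonics. All inputs are illustrative
# assumptions; small changes in them can flip the sign of the result.
cost = 50_000.0                    # assumed up-front cost, USD
p_success = 0.05                   # assumed probability of happy revival
benefit_if_success = 2_000_000.0   # assumed dollar value placed on revival

expected_benefit = p_success * benefit_if_success   # 100,000
net_value = expected_benefit - cost                 # 50,000: looks worthwhile

# With a slightly lower probability estimate, the sign flips:
net_pessimistic = 0.02 * benefit_if_success - cost  # -10,000: not worthwhile
print(net_value, net_pessimistic)
```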
et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.
This argument from confusion doesn't shift the decision either way, so it could as well be an argument for signing up as against signing up; similarly for immediate suicide, or against that. On net, this argument doesn't move anything, because there is no default to fall back on once you get more confused.
I am kind of disturbed by the idea of cryonics. Wouldn't it be theoretically possible to prove it doesn't work, assuming that it really doesn't? If the connections between neurons are lost in the process, then you have died.
Interesting post, but perhaps too much is being compressed into a single expression.
The niceness and weirdness factors of thinking about cryonics do not actually affect the correctness of cryonics itself. The correctness factor depends only on one's values and the weight of probability.
Not thinking one's own values through sufficiently enough to make an accurate evaluation is both irrational and a common failure mode. Miscalculating the probabilities is also a mistake, though perhaps more a mathematical error than a rationality error.
When these are the r...
I have been leaning heavily towards the anti-cryonics stance, at least for myself, given the current state of information and technology. My reasons are mostly the following.
I can see it being very plausible that somewhere along the line I would be subject to immense suffering, compared to which death would have been a far better option, but that I would be either unable to take my own life due to physical constraints or would lack the courage to do so (it takes quite some courage and persistent suffering to be driven to suicide, IMO). I see this as analogous...
Reason #7 not to sign up: There is a significant chance that you will suffer information-theoretic death before your brain can be subjected to the preservation process. Your brain could be destroyed by whatever it is that causes you to die (such as a head injury or massive stroke) or you could succumb to age-related dementia before the rest of your body stops functioning.
I don't understand the big deal with this. Is it just selfishness? You don't care how good the world will be, unless you're there to enjoy it?
There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there. That being said, it's rather clear from prior discussion that most people in this forum believe that it will work. I find it slightly absurd, to be honest. You can talk a lot about uncertainties and supporting evidence and burden of proof and so on, but the simple fact remains the same. There is no proof cryonics will work, either right now, 20, or 50 years in the future. I hate to sound so cynical...
Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here, here, and throughout Less Wrong; but a casual mention here1 inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)
It seems to have become a trend on Less Wrong for people to include belief in the rationality of signing up for cryonics as an obviously correct position2 to take, much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful for people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)
Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):
Calling non-cryonauts irrational is not productive nor conducive to fostering a good epistemic atmosphere:
Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.
One easily falls to the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.
1 I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.
2 To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? or, Do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, another is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.
3 By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.
4 Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.