
Written with much help from Nick Tarleton and Kaj Sotala, in response to various themes here, here, and throughout Less Wrong; but a casual mention here [1] inspired me to finally write this post. (Note: The first, second, and third footnotes of this post are abnormally important.)

It seems to have become a trend on Less Wrong for people to treat belief in the rationality of signing up for cryonics as an obviously correct position [2], much the same as thinking the theories of continental drift or anthropogenic global warming are almost certainly correct. I find this mildly disturbing on two counts. First, it really isn't all that obvious that signing up for cryonics is the best use of one's time and money. And second, regardless of whether cryonics turns out to have been the best choice all along, ostracizing those who do not find signing up for cryonics obvious is not at all helpful to people struggling to become more rational. Below I try to provide some decent arguments against signing up for cryonics — not with the aim of showing that signing up for cryonics is wrong, but simply to show that it is not obviously correct, and why it shouldn't be treated as such. (Please note that I am not arguing against the feasibility of cryopreservation!)

Signing up for cryonics is not obviously correct, and especially cannot obviously be expected to have been correct upon due reflection (even if it was the best decision given the uncertainty at the time):

  • Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well enough understood for anyone to make confident claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are. [3] This does not make cryonics a bad idea — it may be the correct decision under uncertainty — but it should lessen anyone's confidence that the balance of reasons ultimately weighs overwhelmingly in favor of cryonics.
  • If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying: either everyone (including cryonauts) dies anyway when an unFriendly artificial intelligence goes FOOM, or a Friendly artificial intelligence is created and death is solved (or reflectively embraced as good, or some other unexpected outcome). This is more salient when considering the likelihood of large advances in biomedical and life extension technologies in the near future.
  • A person might find that more good is done by donating money to organizations like SENS, FHI, or SIAI [4] than by spending that money on pursuing a small chance of eternal life. Cryonics working is pretty dependent on e.g. an unFriendly artificial intelligence not going FOOM, or molecular nanotechnology not killing everyone. Many people may believe that a slightly higher chance of a positive singularity is more important than a significantly higher chance of personal immortality (a toy illustration of this tradeoff appears in the sketch after this list). Likewise, having their friends and family not be killed by an existential disaster such as rogue MNT, bioweaponry, et cetera, could very well be more important to them than a chance at eternal life. Acknowledging these varied preferences, and varied beliefs about whether one can fund cryonics out of luxury spending alone, leads to equally varied subjectively rational courses of action for a person to take.
  • Some people may have loose boundaries around what they consider personal identity, or expect personal identity to be less important in the future. Such a person might not place very high value on ensuring that they, in a strong sense, exist in the far future, if they expect that people sufficiently like them to satisfy their relevant values will exist in any case. (Kaj Sotala reports being indifferent to cryonics due to personal identity considerations here.) Furthermore, there exist people who have preferences against (or no preferences either for or against) living extremely far into the future for reasons other than considerations about personal identity. Such cases are rare, but I suspect less rare among the Less Wrong population than among most, and their existence should be recognized. (Maybe people who think they don't care are usually wrong, and, if so, irrational in an important sense, but not in the sense of simple epistemic or instrumental-given-fixed-values rationality that discussions of cryonics usually center on.)
  • That said, the reverse is true: not getting signed up for cryonics is also not obviously correct. The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong. Strong arguments are being ignored on both sides. The common enemy is certainty.
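To make the shape of the tradeoff in the third bullet concrete, here is a minimal toy expected-value sketch. Every number in it is an invented placeholder rather than an estimate anyone in this post has endorsed; the point is only that when the value attached to an existential-risk outcome is large enough, a tiny shift in its probability can swamp a much larger shift in the probability of personal survival.

```python
# Toy expected-value comparison: donating to existential-risk reduction vs. buying
# cryonics for yourself. All numbers are illustrative placeholders, not estimates
# endorsed by the post or by SENS/FHI/SIAI.

value_of_personal_survival = 1.0        # value you place on your own revival (arbitrary units)
value_of_good_long_term_outcome = 1e4   # value you place on a positive singularity / averted disaster

p_cryonics_works_for_you = 0.05         # chance signing up actually leads to your revival
delta_p_good_outcome = 2e-5             # tiny shift in P(good outcome) bought with the same money

ev_cryonics = p_cryonics_works_for_you * value_of_personal_survival     # 0.05
ev_donation = delta_p_good_outcome * value_of_good_long_term_outcome    # 0.20

print(f"EV of signing up: {ev_cryonics:.2f}, EV of donating instead: {ev_donation:.2f}")
# With these placeholders the donation wins, but equally defensible-looking inputs
# flip the conclusion -- which is the bullet's point: the answer turns on values
# and probability estimates, not on an obvious error by either side.
```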

Calling non-cryonauts irrational is neither productive nor conducive to fostering a good epistemic atmosphere:

  • Whether it's correct or not, it seems unreasonable to claim that the decision to forgo cryonics in favor of donating (a greater expected amount) to FHI, SIAI [4], SENS, etc. represents as obvious an error as, for instance, religion. The possibility of a third option here shouldn't be ignored.
  • People will not take a fringe subject more seriously simply because you call them irrational for not seeing it as obvious (as opposed to belief in anthropogenic global warming, where a sheer bandwagon effect is enough of a memetic pull). Being forced onto the defensive makes one less likely to accept, and therefore overcome, one's own irrationalities, if irrationalities they are. (See also: A Suite of Pragmatic Considerations in Favor of Niceness)
  • As mentioned in bullet four above, some people really wouldn't care if they died, even if it turned out that MWI, spatially infinite universes, et cetera were wrong hypotheses and they only had this one shot at existence. It doesn't help to call them irrational when they may already have low self-esteem and problems with being accepted among those who have very different values pertaining to the importance of continued subjective experience. Likewise, calling people irrational for having kids when they could not afford cryonics for them is extremely unlikely to do any good for anyone.

Debate over cryonics is only one of many opportunities for politics-like thinking to taint the epistemic waters of a rationalist community; it is a topic where it is easy to say 'we are right and you are wrong' where 'we' and 'you' are much too poorly defined to be used without disclaimers. If 'you' really means 'you people who don't understand reductionist thinking', or 'you people who haven't considered the impact of existential risk', then it is important to say so. If such an epistemic norm is not established I fear that the quality of discourse at Less Wrong will suffer for the lack of it.

One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.

 

[1] I don't disagree with Roko's real point, that the prevailing attitude towards cryonics is decisive evidence that people are crazy and the world is mad. Given uncertainty about whether one's real values would endorse signing up for cryonics, it's not plausible that the staggering potential benefit would fail to recommend extremely careful reasoning about the subject, and investment of plenty of resources if such reasoning didn't come up with a confident no. Even if the decision not to sign up for cryonics were obviously correct upon even a moderate level of reflection, it would still constitute a serious failure of instrumental rationality to make that decision non-reflectively and independently of its correctness, as almost everyone does. I think that usually when someone brings up the obvious correctness of cryonics, they mostly just mean to make this observation, which is no less sound even if cryonics isn't obviously correct.

[2] To those who would immediately respond that signing up for cryonics is obviously correct, either for you or for people generally, it seems you could mean two very different things: Do you believe that signing up for cryonics is the best course of action given your level of uncertainty? or, Do you believe that signing up for cryonics can obviously be expected to have been correct upon due reflection? (That is, would you expect a logically omniscient agent to sign up for cryonics in roughly your situation given your utility function?) One is a statement about your decision algorithm, the other is a statement about your meta-level uncertainty. I am primarily (though not entirely) arguing against the epistemic correctness of making a strong statement such as the latter.
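A toy example may make the footnote's distinction sharper. The sketch below, with invented numbers, shows how signing up can be the best action given one's current credences (the first sense) while still failing to be what an idealized, reflectively coherent agent, with the structural uncertainties of footnote 3 resolved, would choose (the second sense).

```python
# Illustrative only: the two senses of "correct" distinguished in footnote 2.
# All numbers are invented for the example.

cost = 1.0               # cost of signing up, in arbitrary utility units
benefit = 100.0          # value of revival under your utility function
credence_it_works = 0.1  # your current all-things-considered probability

# Sense 1: best action given your present uncertainty (your decision algorithm).
ev_now = credence_it_works * benefit - cost          # = 9.0 > 0, so sign up

# Sense 2: what an idealized, reflectively coherent agent would do. Suppose that
# resolving the structural/ontological uncertainties would drive the probability
# to (almost) zero:
p_after_reflection = 0.001
ev_after_reflection = p_after_reflection * benefit - cost   # = -0.9 < 0, so don't

print(ev_now, ev_after_reflection)
# The same choice can be Sense-1 correct and Sense-2 incorrect (or vice versa),
# which is why the post argues mainly against confident claims of the second kind.
```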

[3] By raising this point as an objection to strong certainty in cryonics specifically, I am essentially bludgeoning a fly with a sledgehammer. With much generalization and effort this post could also have been written as 'Abnormal Everything'. Structural uncertainty is a potent force and the various effects it has on whether or not 'it all adds up to normality' would not fit in the margin of this post. However, Nick Tarleton and I have expressed interest in writing a pseudo-sequence on the subject. We're just not sure about how to format it, and it might or might not come to fruition. If so, this would be the first post in the 'sequence'.

[4] Disclaimer and alert to potential bias: I'm an intern (not any sort of Fellow) at the Singularity Institute for (or 'against' or 'ambivalent about' if that is what, upon due reflection, is seen as the best stance) Artificial Intelligence.

Comments (420)

Getting back down to earth, there has been renewed interest in medical circles in the potential of induced hibernation, for short-term suspended animation. The nice trustworthy doctors in lab coats, the ones who get interviews on TV, are all reassuringly behind this, so this will be smoothly brought into the mainstream, and Joe the Plumber can't wait to get "frozed-up" at the hospital so he can tell all his buddies about it.

Once induced hibernation becomes mainstream, cryonics can simply (and misleadingly, but successfully) be explained as "hibernation for a long time."

Hibernation will likely become a commonly used "last resort" for many, many critical cases (instead of letting them die, you freeze 'em until you've gone over their chart another time, talked to some colleagues, called around to see if anyone has an extra kidney, or at least just slept on it). When your loved one is in the fridge, and you're being told "there's nothing left to do, we're going to have to thaw them and watch them die," your next question is going to be "Can we leave them in the fridge a bit longer?"

Hibernation will sell people on the idea that fridges sa... (read more)

7cousin_it14y
I don't think you stumbled on any good point against cryonics, but the scenario you described sounds very reassuring. Do you have any links on current hibernation research?

Maybe it's a point against investing directly in cryonics as it exists today, and in favor of working more through the indirect approach that is most likely to lead to good cryonics sooner. I'm much, much more interested in being preserved before I'm brain-dead.

I'm looking for specifics on human hibernation. Lots of sci-fi out there, but more and more hard science as well, especially in recent years. There's the genetic approach, and the hydrogen sulfide approach.

March 2010: Mark Roth at TED

...by the way, the comment threads on the TED website could use a few more rationalists... Lots of smart people there thinking with the wrong body parts.

May 2009: NIH awards a $2,227,500 grant

2006: Doctors chill, operate on, and revive a pig

3magfrump14y
Voted up for extensive linkage

An interesting comparison I mentioned previously: the cost to Alcor of preserving one human (full-body) is $150,000. The recent full annual budget of SIAI is on the order of (edit:) $500,000.

9alyssavance14y
Cryonics Institute is a factor of 5 cheaper than that, the SIAI budget is larger than that, and SIAI cannot be funded through life insurance while cryonics can. And most people who read this aren't actually substantial SIAI donors.
5Rain14y
You can't assign a life insurance policy to a non-profit organization? Is the long-term viability of low-cost cryonics a known quantity? Is it noticeably similar to the viability of high-cost cryonics? Did Michael Anissimov, Media Director for SIAI, when citing specific financial data available on Guidestar, lie about SIAI's budget in the linked blog post? Do people who aren't donors not want to know potential cost ratios regarding the arguments specifically made by the top level post?
3alyssavance14y
"You can't assign a life insurance policy to a non-profit organization?" You can, but it probably won't pay out until relatively far into the future, and because of SIAI's high discount rate, money in the far future isn't worth much. "Is the long-term viability of low-cost cryonics a known quantity? Is it noticeably similar to the viability of high-cost cryonics?" Yes. The Cryonics Institute has been in operation since 1976 (35 years) and is very financially stable. "Did Michael Anissimov, Media Director for SIAI, when citing specific financial data available on Guidestar, lie about SIAI's budget in the linked blog post?" Probably not, he just wasn't being precise. SIAI's financial data for 2008 is available here (guidestar.org) for anyone who doesn't believe me.
8Rain14y
Please provide evidence for this claim. I've heard contradictory statements to the effect that even $150,000 likely isn't enough for long term viability. I'm curious how the statement, "our annual budget is in the $200,000/year range", may be considered "imprecise" rather than outright false when compared with data from the source he cited. SIAI Total Expenses (IRS form 990, line 17):
* 2006: $395,567
* 2007: $306,499
* 2008: $614,822
6CarlShulman14y
I sent Anissimov an email asking him to clarify. He may have been netting out Summit expenses (matching cost of venue, speaker arrangements, etc against tickets to net things out). Also note that 2008 was followed by a turnover of all the SIAI staff except Eliezer Yudkowsky, and Michael Vassar then cut costs.

Hi all,

I was completely wrong on my budget estimate, and I apologize. I wasn't including the Summit, and I was just estimating the cost from my understanding of salaries + misc. expenses. I should have checked Guidestar. My view of the budget also seems to have been slightly skewed because I frequently check the SIAI Paypal account, which many people use to donate, but I never see the incoming checks, which are rarer but sometimes make up a large portion of total donations. My underestimate of money coming in contributed to my underestimate of money going out.

Again, I'm sorry, I was not lying, just a little confused and a few years out of date on my estimate. I will search over my blog to modify any incorrect numbers I can find.

1Rain14y
Thank you for the correction.
0[anonymous]14y
You could fund SIAI through life insurance if you list them as a beneficiary just as you would with cryonics.
7Robin14y
That's a very good point. It seems there is some dispute about the numbers, but the general point is that it would be a lot cheaper to fund SIAI, which may save the world, than to cryogenically freeze even a small fraction of the world's population. The point about life insurance is moot. Life insurance companies make a profit, so having SIAI as your beneficiary upon death wouldn't even make that much sense. If you just give whatever you'd be paying in life insurance premiums directly to SIAI, you're probably doing much more overall good than paying for a cryonics policy.
2Roko14y
CI costs $30K, and you only have to pay about $9K if you're young, and not up front -- you just pay your insurance premiums.

I haven't yet read and thought enough about this topic to form a very solid opinion, but I have two remarks nevertheless.

First, as some previous commenters have pointed out, most of the discussions of cryonics fail to fully appreciate the problem of weirdness signals. For people whose lives don't revolve around communities that are supportive of such undertakings, the cost of signaled weirdness can easily be far larger than the monetary price. Of course, you can argue that this is because the public opinion on the topic is irrational and deluded, but the point is that given the present state of public opinion, which is impossible to change by individual action, it is individually rational to take this cost into account. (Whether the benefits ultimately overshadow this cost is a different question.)

Second, it is my impression that many cryonics advocates -- and in particular, many of those whose comments I've read on Overcoming Bias and here -- make unjustified assertions about supposedly rational ways to decide the question of what entities one should identify oneself with. According to them, signing up for cryonics increases the chances that at some distant time in the future, i... (read more)

7Roko14y
Would it change your mind if you discovered that you're living in a simulation right now?
0Vladimir_M14y
Roko: It would probably depend on the exact nature of the evidence that would support this discovery. I allow for the possibility that some sorts of hypothetical experiences and insights that would have the result of convincing me that we live in a simulation would also have the effect of dramatically changing my intuitions about the question of personal identity. However, mere thought-experiment considerations of those I can imagine presently fail to produce any such change. I also allow for the possibility that this is due to the limitations of my imagination and reasoning, perhaps caused by unidentified biases, and that actual exposure to some hypothetical (and presently counterfactual) evidence that I've already thought about could perhaps have a different effect on me than I presently expect it would. For full disclosure, I should add that I see some deeper problems with the simulation argument that I don't think are addressed in a satisfactory manner in the treatments of the subject I've seen so far, but that's a whole different can of worms.
4Roko14y
Well, a concrete scenario would be that the simulators calmly reveal themselves to you and demonstrate that they can break the laws of physics, for example by just wiggling the sun around in the sky, disconnecting your limbs without blood coming out or pain, making you float, etc.
1Vladimir_M14y
That would fall under the "evidence that I've already thought about" mentioned above. My intuitions would undoubtedly be shaken and moved, perhaps in directions that I presently can't even imagine. However, ultimately, I think I would be led to conclude that the whole concept of "oneself" is fundamentally incoherent, and that the inclination to hold any future entity or entities in special regard as "one's future self" is just a subjective whim. (See also my replies to kodos96 in this thread.)
1Roko14y
Interesting! Seems a bit odd to me, but perhaps we should chat in more detail some time.
6kodos9614y
Would it change your mind if that computer program [claimed to] strongly identify with you?
3Vladimir_M14y
I'm not sure I understand your question correctly. The mere fact that a program outputs sentences that express strong claims about identifying with me would not be relevant in any way I can think of. Or am I missing something in your question?
3kodos9614y
Well right, obviously a program consisting of "printf("I am Vladimir_M")" wouldn't qualify... but a program which convincingly claimed to be you... i.e. had access to all your memories, intellect, inner thoughts, etc., and claimed to be the same person as you.
3Vladimir_M14y
No, as I wrote above, I am honestly unable to feel any identification at all with such a program. It might as well be just a while(1) loop printing a sentence claiming it's me. I know of some good arguments that seem to provide a convincing reductio ad absurdum of such a strong position, most notably the "fading qualia" argument by David Chalmers, but on the other hand, I also see ways in which the opposite view entails absurdity (e.g. the duplication arguments). Thus, I don't see any basis for forming an opinion here except sheer intuition, which in my case strongly rebels against identification with an upload or anything similar.
7kodos9614y
If you woke up tomorrow to find yourself situated in a robot body, and were informed that you had been killed in an accident and your mind had been uploaded and was now running on a computer, but you still felt, subjectively, entirely like "yourself", how would you react? Or do you not think that that could ever happen? (that would be a perfectly valid answer, I'm just curious what you think, since I've never had the opportunity to discuss these issues with someone who was familiar with the standard arguments, yet denied the possibility)
9Vladimir_M14y
For the robotic "me" -- though not for anyone else -- this would provide a conclusive answer to the question of whether uploads and other computer programs can have subjective experiences. However, although fascinating, this finding would provide only a necessary, not a sufficient condition for a positive answer to the question we're pursuing, namely whether there is any rational reason (as opposed to freely variable subjective intuitions and preferences) to identify this entity with my present self. Therefore, my answer would be that I don't know how exactly the subjective intuitions and convictions of the robotic "me" would develop from this point on. It may well be that he would end up feeling strongly as the true continuation of my person and rejecting what he would remember as my present intuitions on the matter (though this would be complicated by the presumable easiness of making other copies). However, I don't think he would have any rational reason to conclude that it is somehow factually true that he is the continuation of my person, rather than some entirely different entity that has been implanted false memories identical to my present ones. Of course, I am aware that a similar argument can be applied to the "normal me" who will presumably wake up in my bed tomorrow morning. Trouble is, I would honestly find it much easier to stop caring about what happens to me tomorrow than to start caring about computer simulations of myself. Ultimately, it seems to me that the standard arguments that are supposed to convince people to broaden their parochial concepts of personal identity should in fact lead one to dissolve the entire concept as an irrational reification that is of no concern except that it's a matter of strong subjective preferences.
8jimrandomh14y
Getting copied from a frozen brain into a computer is a pretty drastic change, but suppose instead it were done gradually, one neuron at a time. If one of your neurons were replaced with an implant that behaved the same way, would it still be you? A cluster of N neurons? What if you replaced your entire brain with electronics, a little at a time? Obviously there is a difference, and that difference is significant to identity; but I think that difference is more like the difference between me and my younger self than the difference between me and someone else.
6JoshuaZ14y
While I understand why someone would see the upload as possibly not themselves (and I have strong sympathy with that position), I do find it genuinely puzzling that someone wouldn't identify their revived body as themselves. While some people might argue that they have no connection to the entity that will have their memories a few seconds from now, the vast majority of humans don't buy into that argument. If they don't, then it is hard to see how a human who is cooled and then revived is any different from a human who has their heart stopped for a bit during a heart transplant, or from someone who stops breathing in a very cold environment for a few minutes, or someone who goes to sleep under anesthesia, or even someone who goes to sleep normally and wakes up in the morning.

Your point about weirdness signaling is a good one, and I'd expand on it slightly: For much of society, even thinking about weird things at a minimal level is a severe weirdness signal. So for many people, the expected utility of any random weird idea is likely to be so low that the cost of even putting in the effort to think about it will almost certainly outweigh any benefit. And when one considers how many weird ideas are out there, the chance that any given one of them will turn out to be useful is very low. To use just a few examples, just how many religions are there? How many conspiracy theories? How many miracle cures? Indeed, almost all LW readers will never investigate the vast majority of these, for essentially this sort of utility heuristic.
5Vladimir_M14y
JoshuaZ: The problem here is one of a continuum. We can easily imagine a continuum of procedures where on one end we have relatively small ones that intuitively appear to preserve the subject's identity (like sleep or anesthesia), and on the other end more radical ones that intuitively appear to end up destroying the original and creating a different person. By Buridan's principle, this situation implies that for anyone whose intuitions give different answers for the procedures at the opposite ends of the continuum, at least some procedures that lie in between will result in confused and indecisive intuitions. For me, cryonic revival seems to be such a point. In any case, I honestly don't see any way to establish, as a matter of more than just subjective opinion, at which exact point in that continuum personal identity is no longer preserved.
7Will_Newsome14y
This seems similar to something that I'll arbitrarily decide to call the 'argument from arbitrariness': every valid argument should be pretty and neat and follow the zero, one, infinity rule. One example of this was during the torture versus dust specks debate, when the torturers chided the dust speckers for having an arbitrary point at which stimuli that were not painful enough to be considered true pain became just painful enough to be considered as being in the same reference class as torture. I'd be really interested to find out how often something like the argument from arbitrariness turns out to have been made by those on the ultimately correct side of the argument, and use this information as a sort of outside view.

I share the position that Kaj_Sotala outlined here: http://lesswrong.com/lw/1mc/normal_cryonics/1hah

In the relevant sense there is no difference between the Richard that wakes up in my bed tomorrow and the Richard that might be revived after cryonic preservation. Neither of them is a continuation of my self in the relevant sense because no such entity exists. However, evolution has given me the illusion that tomorrow-Richard is a continuation of my self, and no matter how much I might want to shake off that illusion I can't. On the other hand, I have no equivalent illusion that cryonics-Richard is a continuation of my self. If you have that illusion you will probably be motivated to have yourself preserved.

Ultimately this is not a matter of fact but a matter of personal preference. Our preferences cannot be reduced to mere matters of rational fact. As David Hume famously wrote: "'Tis not contrary to reason to prefer the destruction of the whole world to the scratching of my finger." I prefer the well-being of tomorrow-Richard to his suffering. I have little or no preference regarding the fate of cryonics-Richard.

5JenniferRM14y
I don't mean to insult you (I'm trying to respect your intelligence enough to speak directly rather than delicately) but this kind of talk is why cryonics seems like a pretty useful indicator of whether or not a person is rational. You're admitting to false beliefs that you hold "because you evolved that way" rather than using reason to reconcile two intuitions that you "sort of follow" but which contradict each other. Then you completely discounted the suffering or happiness of a human being who is not able to be helped by anyone other than your present self in this matter. You certainly can't be forced to seek medical treatment against your will for this, so other people are pretty much barred by law from forcing you to not be dumb with respect to the fate of future-Richard. He is in no one's hands but your own. Hume was right about a huge amount of stuff in the context of initial epistemic conditions of the sort that Descartes proposed when he extracted "I think therefore I am" as one basis for a stable starting point. But starting from that idea and a handful of others like "trust of our own memories as a sound basis for induction" we have countless terabytes of sense data from which we can develop a model of the universe that includes physical objects with continuity over time - one class of which are human brains that appear to be capable of physically computing the same thoughts with which we started out in our "initial epistemic conditions". The circle closes here. There might be some new evidence somewhere if some kind of Cartesian pineal gland is discovered someday which functions as the joystick by which souls manipulate bodies, but barring some pretty spectacular evidence, materialist views of the soul are the best theory standing. Your brain has physical continuity in exactly the same way that chairs have physical continuity, and your brain tomorrow (after sleeping tonight while engaging in physical self repair and re-indexing of data structures) wi

Hi Jennifer. Perhaps I seem irrational because you haven't understood me. In fact I find it difficult to see much of your post as a response to anything I actually wrote.

No doubt I explained myself poorly on the subject of the continuity of the self. I won't dwell on that. The main question for me is whether I have a rational reason to be concerned about what tomorrow-Richard will experience. And I say there is no such rational reason. It is simply a matter of brute fact that I am concerned about what he will experience. (Vladimir and Byrnema are making similar points above.) If I have no rational reason to be concerned, then it cannot be irrational for me not to be concerned. If you think I have a rational reason to be concerned, please tell me what it is.

2Blueberry14y
I don't understand why psychological continuity isn't enough of a rational reason. Your future self will have all your memories, thoughts, viewpoints, and values, and you will experience a continuous flow of perception from yourself now to your future self. (If you sleep or undergo general anesthesia in the interim, the flow may be interrupted slightly, but I don't see why that matters.)
3RichardW14y
Hi Blueberry. How is that a rational reason for me to care what I will experience tomorrow? If I don't care what I will experience tomorrow, then I have no reason to care that my future self will have my memories or that he will have experienced a continuous flow of perception up to that time. We have to have some motivation (a goal, desire, care, etc) before we can have a rational reason to do anything. Our most basic motivations cannot themselves be rationally justified. They just are what they are. Of course, they can be rationally explained. My care for my future welfare can be explained as an evolved adaptive trait. But that only tells me why I do care for my future welfare, not why I rationally should care for my future welfare.
1JenniferRM14y
Richard, you seem to have come to a quite logical conclusion about the difference between intrinsic values and instrumental values and what happens when an attempt is made to give a justification for intrinsic values at the level of values. If a proposed intrinsic value is questioned and justified with another value statement, then the supposed "intrinsic value" is revealed to have really been instrumental. Alternatively, if no value is offered then the discussion will have necessarily moved out of the value domain into questions about the psychology or neurons or souls or evolutionary mechanisms or some other messy issue of "simple" fact. And you are quite right that these facts (by definition as "non value statements") will not be motivating. We fundamentally like vanilla (if we do) "because we like vanilla" as a brute fact. De gustibus non est disputandum. Yay for the philosophy of values :-P On the other hand... basically all humans, as a matter of fact, do share many preferences, not just for obvious things like foods that are sweet or salty or savory but also for really complicated high level things, like the respect of those with whom we regularly spend time, the ability to contribute to things larger than ourselves, listening to beautiful music, and enjoyment of situations that create "flow" where moderately challenging tasks with instantaneous feedback can be worked on without distraction, and so on. As a matter of simple observation, you must have noticed that there exist some things which it gives you pleasure to experience. To say that "I don't care what I will experience tomorrow" can be interpreted as a prediction that "Tomorrow, despite being conscious, I will not experience anything which affects my emotions, preferences, feelings, or inclinations in either positive or negative directions". This statement is either bluntly false (my favored hypothesis), or else you are experiencing a shocking level of anhedonia for which you should seek professio
1byrnema14y
Why is psychological continuity important? (I can see that it's very important for an identity to have psychological continuity, but I don't see the intrinsic value of an identity existing if it is promised to have psychological continuity.) In our lives, we are trained to worry about our future self because eventually our plans for our future self will affect our immediate self. We also might care about our future self altruistically: we want that person to be happy just as we would want any person to be happy whose happiness we are responsible for. However, I don't sense any responsibility to care about a future self that needn't exist. On the contrary, if this person has no effect on anything that matters to me, I'd rather be free of being responsible for this future self.

In the case of cryogenics, you may or may not decide that your future self has an effect on things that matter to you. If your descendants matter to you, or propagating a certain set of goals matters to you, then cryonics makes sense. I don't have any goals that project further than the lifespan of my children. This might be somewhat unique, and it is the result of recent changes in philosophy. As a theist, I had broad-stroke hopes for the universe that are now gone. Less unique, I think, though perhaps not generally realized, is the fact that I don't feel any special attachment to my memories, thoughts, viewpoints and values.

What if a person woke up to discover that the last days were a dream and they actually had a different identity? I think they wouldn't be depressed about the loss of their previous identity. They might be depressed about the loss of certain attachments if the attachments remained (hopefully not too strongly, as that would be sad). The salient thing here is that all identities feel the same.
4RichardW14y
I've just read this article by Ben Best (President of CI): http://www.benbest.com/philo/doubles.html He admits that the possibility of duplicating a person raises a serious question about the nature of personal identity, that continuity is no solution to this problem, and that he can find no other solution. But he doesn't seem to consider that the absence of any solution points to his concept of personal identity being fundamentally flawed.
2byrnema14y
Interesting. However, I don't see any problems with the nature of personal identity. My hunch is that I'm actually not confused about it. In a lifetime, there is continuity of memories and continuity of values and goals even as they slowly change over time. I can trust that the person who wakes up tomorrow will be 'me' in this sense. She may be more refreshed and have more information, but I trust her to act as "I" would. On the other hand, she might be excessively grouchy or suffer a brain injury, in which case this trust is misplaced. However, she is not me personal-identity-wise for a variety of reasons:
* I do not have access to her stream of consciousness.
* I do not have operative control of her body.
[In both cases, the reason is that her thoughts and actions take place in the future. Eventually, I will have access to her thoughts and control of her body and then she becomes "me".]
* Personal identity exists only for a moment. It is the running of some type of mental thought process.
Suppose I was duplicated overnight, and two byrnemas woke up in the morning. Both byrnemas would have continuity with the previous byrnema with respect to memories, values and goals. However, neither of them is the personal identity of the byrnema of the night before, just as whenever I wake up I'm not the personal identity of the night before, exactly for the reasons I bulleted. With the two duplicates, there would be two distinct personal identities. You simply count the number of independent accesses to thoughts and motor control of bodies and arrive at two. Both byrnemas have a subjective experience of personal identity, of course, and consider the other byrnema an "other". However, this "other" is similar to oneself in a way that is unprecedented, a twin sister who also has your memories, goals and values. I think duplicates would be most problematic for loved ones. They would find themselves in a position of loving both duplicates, and being able to empathize w
1NancyLebovitz14y
If you care about a person, then while you might not care as much if a recent duplicate or a recently duplicated person were lost, you would still care about as much if either of them suffers. As is implied by my 'recently', the two will diverge, and you might end up with loyalty to both as distinct individuals, or with a preference for one of them. Also, I don't think parents value each of a pair of newborn twins less because they have a spare.
7Eneasz14y
I'm in the signing process right now, and I wanted to comment on the "work in progress" aspect of your statement. People think that signing up for cryonics is hard. That it takes work. I thought this myself up until a few weeks ago. This is stunningly NOT true. The entire process is amazingly simple. You contact CI (or your preserver of choice) via their email address and express interest. They ask you for a few bits of info (name, address) and send you everything you need already printed and filled out. All you have to do is sign your name a few times and send it back. The process of getting life insurance was harder (and getting life insurance is trivially easy).

So yeah, the term "working on it" is not correctly applicable to this situation. Someone who's never climbed a flight of stairs may work out for months in preparation, but they really don't need to, and afterwards might be somewhat annoyed that no one who'd climbed stairs before had bothered to tell them so.

Literally the only hard part is the psychological effort of doing something considered so weird. The hardest part for me (and what had stopped me for two+ years previously) was telling my insurance agent when she asked "What's CI?" that it's a place that'll freeze me when I die. I failed to take into account that we have an incredibly tolerant society. People interact - on a daily basis - with other humans who believe in gods and energy crystals and alien visits and secret-muslim presidents without batting an eye. This was no different. It was like the first time you leap from the high diving board and don't die, and realize that you never would have.
8JenniferRM14y
The hard part (and why this is also a work in progress) involves secondary optimizations, the right amount of effort to put into them, and understanding whether these issues generalize to other parts of my life. SilasBarta identified some of the practical financial details involved in setting up whole life versus term plus savings versus some other option. This is even more complex for me because I don't currently have health insurance and ideally would like to have a personal physician, health insurance, and retirement savings plan that are consistent with whatever cryonics situation I set up.

Secondarily, there are similarly complex social issues that come up because I'm married, love my family, am able to have philosophical conversations with them, and don't want to "succeed" at cryonics but then wake up for 1000 years of guilt that I didn't help my family "win" too. If they don't also win, when I could have helped them, then what kind of a daughter or sister would I be?

Finally, I've worked on a personal version of a "drake equation for cryonics" and it honestly wasn't a slam dunk economic decision when I took a pessimistic outside view of my model. So it would seem that more analysis here would be prudent, which would logically require some time to perform. If I had something solid I imagine that would help convince my family - given that they are generally rational in their own personal ways :-)

Finally, as a meta issue, there are issues around cognitive inertia in both the financial and the social arenas so that whatever decisions I make now may "stick" for the next forty years. Against this I weigh the issue of "best being the enemy of good" because (in point of fact) I'm not safe in any way at all right now... which is an obvious negative. In what places should I be willing to tolerate erroneous thinking and sloppy execution that fails to obtain the maximum lifetime benefit and to what degree should I carry that "sloppiness calibration" over to the rest of
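For readers unfamiliar with the phrase, a "drake equation for cryonics" is usually just a chain of conditional probabilities multiplied together and weighed against the cost. The sketch below is a generic, hedged illustration with placeholder factors and numbers; it is not the commenter's actual model.

```python
# A minimal "Drake equation for cryonics" sketch: multiply a chain of conditional
# probabilities, then compare the expected value of revival against the cost.
# The factors and numbers below are placeholders for illustration only.

factors = {
    "die in circumstances that allow prompt suspension": 0.5,
    "suspension preserves the relevant brain information": 0.5,
    "provider stays solvent and you stay frozen long enough": 0.6,
    "revival technology is eventually developed": 0.5,
    "civilization survives and chooses to revive you": 0.5,
}

p_revival = 1.0
for factor, p in factors.items():
    p_revival *= p

cost = 30_000.0                 # e.g. a CI-style price funded through life insurance
value_of_revival = 5_000_000.0  # placeholder for what a long post-revival life is worth to you

print(f"P(revival) = {p_revival:.4f}")                       # 0.0375 with these inputs
print(f"Expected value = {p_revival * value_of_revival:,.0f} vs. cost = {cost:,.0f}")
# A "pessimistic outside view" corresponds to shrinking several factors at once,
# which can easily push the expected value below the cost.
```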
2Eneasz14y
You can't hold yourself responsible for their decisions. That way lies madness, or tyranny. If you respect them as free agents then you can't view yourself as the primary source for their actions.
4DSimon14y
It might be rational to do so under extreme enough circumstances. For example, if a loved one had to take pills every day to stay alive and had a tendency to accidentally forget them (or to believe new-agers who told them that the pills were just a Big Pharma conspiracy), it would be neither madness nor tyranny to do nearly anything to prevent that from happening. The question is: to what degree is failing to sign up for cryonics like suicide by negligence?
2Alicorn14y
I'm not finding this. Can you refer me to your trivially easy agency?
3Eneasz14y
I used State Farm, because I've had car insurance with them since I could drive, and renters/owner's insurance since I moved out on my own. I had discounts both for multi-line and loyalty. Yes, there is some interaction with a person involved. And you have to sit through some amount of sales-pitching. But ultimately it boils down to answering a few questions (2-3 minutes), signing a few papers (1-2 minutes), sitting through some process & pitching (30-40 minutes), and then having someone come to your house a few days later to take some blood and measurements (10-15 minutes). Everything else was done via mail/email/fax. Heck, my agent had to do much more work than I did; previous to this she didn't know that you can designate someone other than yourself as the owner of the policy, so it required some training.
4Alicorn14y
I tried a State Farm guy, and he was nice enough, but he wanted a saliva sample (not blood) and could not tell me what it was for. He gave me an explicitly partial list but couldn't complete it for me. That was spooky. I don't want to do that.
2Eneasz14y
Huh. That is weird. I don't blame you. Come to think of it, I didn't even bother asking what the blood sample was for. But I tend to be exceptionally un-private. I don't expect privacy to be a part of life among beings who regularly share their source code.
6Alicorn14y
It's not a matter of privacy. I can't think of much they'd put on the list that I wouldn't be willing to let them have. (The agent acted like I could only possibly be worried that they were going to do genetic testing, but I'd let them do that as long as they, you know, told me, and gave me a copy of the results.) It was just really not okay with me that they wanted it for undisclosed purposes. Lack of privacy and secrets shouldn't be unilateral.
1SilasBarta14y
Disagree. What's this trivially easy part? You can't buy it like you can buy mutual fund shares, where you just go online, transfer the money, and have at it. They make it so you have to talk to an actual human insurance agent, just to get quotes. (I understand you'll have to get a medical exam, but still...) Of course, in fairness, I'm trying to combine it with "infinite banking" by getting a whole life policy, which has tax advantages. (I would think whole life would make more sense than term anyway, since you don't want to limit the policy to a specific term, risking that you'll die afterward and not be able to afford the preservation, when the take-off hasn't happened.)
1Blueberry14y
Nope. Whole life is a colossal waste of money. If you buy term and invest the difference in the premiums (what you would be paying the insurance company if you bought whole life) you'll end up way ahead.
3SilasBarta14y
Yes, I'm intimately familiar with the argument. And while I'm not committed to whole life, this particular point is extremely unpersuasive to me. For one thing, the extra cost for whole is mostly retained by you, nearly as if you had never spent it, which makes it questionable how much of that extra cost is really a cost. That money goes into an account which you can withdraw from, or borrow from on much more favorable terms than any commercial loan. It also earns dividends and guaranteed interest tax-free. If you "buy term and invest the difference", you either have to pay significant taxes on any gains (or even, in some cases, the principal) or lock the money up until you're ~60. The optimistic "long term" returns of the stock market have turned out to be a bit too optimistic, and given the volatility, you are being undercompensated. (Mutual whole life plans typically earned over 6% in '08, when stocks tanked.) You are also unlikely to earn the 12%/year they always pitch for mutual funds -- and especially not after taxes. Furthermore, if the tax advantages of IRAs are reneged on (which given developed countries' fiscal situations, is looking more likely every day), they'll most likely be hit before life insurance policies. So yes, I'm aware of the argument, but there's a lot about the calculation that people miss.
7HughRistik14y
It's really hard to understand insurance products with the information available on the internet, and you are right that it is extremely unfriendly to online research. When I investigated whole life vs. term a few years ago, I came to the conclusion that there are a lot of problems with whole life and I wouldn't touch it with a ten-foot pole. Actually, there is something far weirder and more insidious going on. By "extra cost," I assume you are referring to the extra premium that goes into the insurance company's cash value investment account, beyond the amount of premium that goes towards your death benefit (aka "face amount," aka "what the insurance company pays to your beneficiary if you die while the policy is in force").

Wait, what? Didn't I mean your cash value account, and were my words "the insurance company's cash value account" a slip of the tongue? Read on...

Let's take a look at the FAQ of the NY Dept. of Insurance, which explains the difference between the face amount of your policy (aka "death benefit", aka "what the insurance company pays to your beneficiary if you die while the policy is in force"):

So, you have a $1 million face amount insurance policy. The premiums are set so that by age 100, "your" cash value investment account will have a value of $1 million. If you die right before turning 100, how much money will your beneficiary get? If you guessed $1 million face amount + $1 million cash value account = $2 million, you guessed wrong. See the last quoted sentence: "If you die your beneficiaries will receive the face amount." Your beneficiary gets the $1 million face amount, but the insurance company keeps the $1 million investment account to offset their loss (which would instead go to your beneficiary if you had done "buy term and invest the difference"). This is because the cash value account is not your money anymore. The account belongs to the insurance company; I've read whole life policies and seen this stated in the fine print that people d
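Since the disagreement above turns on arithmetic as much as on policy fine print, here is a hedged back-of-the-envelope sketch of "whole life vs. buy term and invest the difference". All premiums, rates, and tax assumptions are made up for illustration (and the whole-life side is simplified by crediting the entire premium to the cash value); they are not quotes from State Farm or any other insurer.

```python
# Back-of-the-envelope comparison of whole life vs. "buy term and invest the
# difference". Every input is an assumption chosen for illustration.

years = 30
whole_life_premium = 3_000.0   # assumed annual whole-life premium
term_premium = 500.0           # assumed annual term premium for the same face amount
cash_value_rate = 0.05         # assumed growth credited inside the whole-life policy
market_rate = 0.07             # assumed pre-tax return on the outside investment
tax_drag = 0.20                # assumed tax on the outside investment's annual growth

cash_value = 0.0          # simplification: treats the whole premium as going into cash value
outside_account = 0.0     # the invested "difference" between the two premiums
for _ in range(years):
    cash_value = (cash_value + whole_life_premium) * (1 + cash_value_rate)
    outside_account = (outside_account + (whole_life_premium - term_premium)) * (
        1 + market_rate * (1 - tax_drag)
    )

print(f"Whole-life cash value after {years} years:      {cash_value:,.0f}")
print(f"Term + invested difference after {years} years: {outside_account:,.0f}")
# Either strategy can come out ahead depending on the assumed rates and taxes --
# and, as noted above, on whether the cash value is ever paid out on top of the
# face amount. That dependence on assumptions is what the thread is arguing about.
```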
2NoMLM14y
I believe "buy term and invest the difference" is the slogan of the Amway-like Multi Level Marketer (MLM, legal pyramid scheme) Primerica.
0HughRistik14y
That's how I first encountered it, too. But it seems to be mainstream and widely accepted advice that is confirmed independently.
0Blueberry14y
Wow, thanks for all that! Upvoted. I'm biased in favor of DIY, but those are really good points and I didn't realize some of that.
0SilasBarta14y
Hey, glad to help, and sorry if I came off as impatient (more than I usually do, anyway). And I'm in favor of DIY too, which is how I do my mutual fund/IRA investing, and why I complained about how online-unfriendly life insurance is. But the idea behind "infinite banking" (basically, using a mutual whole life insurance plan, which has been around for hundreds of years and endured very hard times robustly, as a savings account) is very much DIY, once you get it set up. Again, take it with a grain of salt because I'm still researching this...
1RobinZ14y
It occurs to me: are there legal issues with people contesting wills? I think that a life insurance policy with the cryonics provider listed as the beneficiary would be more difficult to fight.
4byrnema14y
Well said. I think this is true. Cryonics being the "correct choice" doesn't just depend on correct calculations and estimates (probability of a singularity, probability of revival, etc) and a high enough sanity waterline (not dismissing opportunities out of hand because they seem strange). Whether cryonics is the correct choice also depends upon your preferences. This fact seems to be largely missing from the discussion about cryonics. Perhaps because advocates can't imagine people not valuing life extension in this way. I wouldn't pay 5 cents for a duplicate of me to exist. (Not for the sole sake of her existence, that is. If this duplicate could interact with me, or interact with my family immediately after my death, that would be a different story as I could delegate personal responsibilities to her.)
0red7514y
Well, they say that cryonics works whether you believe in it or not. Why not give it a try?

I think cryonics is used as a rationality test because most people reason about it from within the mental category "weird far-future stuff". The arguments in the post seem like appropriate justifications for choices within that category. The rationality test is whether you can compensate for your anti-weirdness bias and realize that cryonics is actually a more logical fit for the mental category "health care".

This post, like many others around this theme, revolves around the rationality of cryonics from the subjective standpoint of a potential cryopatient, and it seems to assume a certain set of circumstances for that patient: relatively young, healthy, functional in society.

I've been wondering for a while about the rationality of cryonics from a societal standpoint, as applied to potential cryopatients in significantly different circumstances; two categories specifically stand out, death row inmates and terminal patients.

This article puts the cost of a death row inmate (over serving a life sentence) at $90K. This is a case where we already allow that society may drastically curtail an individual's right to control their own destiny. It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

As for terminal patients, this article says:

Aggressive treatments attempting to prolong life in terminally ill people typically continue far too long. Reflecting this overaggressive end-of-life treatment, the Health Care Finance Administration reported that abou

... (read more)

It would cost less to place someone in cryonic suspension than to execute him, and in so doing we would provide a chance, however small, that a wrongful conviction could be reversed in the future.

Hm, I don't think that works -- the extra cost is from the stronger degree of evidence and exhaustive appeals process required before the inmate is killed, right? If you want to suspend the inmate before those appeals then you've curtailed their right to put together a strong defence against being killed, and if you want to suspend the inmate after those appeals then you haven't actually saved any of that money.

... or did I miss something?

4Morendil14y
Some of it is from more expensive incarceration, but you're right. This has one detailed breakdown:
* Extra defense costs for capital cases in trial phase: $13,180,385
* Extra payments to jurors: $224,640
* Capital post-conviction costs: $7,473,556
* Resentencing hearings: $594,216
* Prison system: $169,617
However, we're assuming that with cryonics as an option the entire process would stay the same. That needn't be the case.
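For what it's worth, the listed figures (which appear to be aggregate totals from the cited breakdown, not per-inmate amounts) sum as follows; a trivial check, shown only to make the order of magnitude explicit.

```python
# Summing the breakdown exactly as listed above; the figures are taken to be
# aggregate totals from the cited source, not per-inmate costs.

breakdown = {
    "Extra defense costs for capital cases in trial phase": 13_180_385,
    "Extra payments to jurors": 224_640,
    "Capital post-conviction costs": 7_473_556,
    "Resentencing hearings": 594_216,
    "Prison system": 169_617,
}

total = sum(breakdown.values())
print(f"Total of listed extra costs: ${total:,}")   # $21,642,414
```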
9byrnema14y
Also, depending upon advances in psychology, there could be the opportunity for real rehabilitation in the future. A remorseful criminal afraid they cannot change may prefer cryopreservation.

This comment is a more fleshed-out response to VladimirM’s comment.

This is commonly supported by arguing that your thawed and revived or uploaded brain decades from now is not a fundamentally different entity from you in any way that wouldn't also apply to your present brain when it wakes up tomorrow. I actually find these arguments plausible, but the trouble is that they, in my view, prove too much.

Whether cryonics is the right choice depends on your values. There are suggestions that people who don't think they value revival in the distant future are misled about their real values. I think it might be the complete opposite: advocacy of cryonics may be completely missing what it is that people value about their lives.

The reason for this mistake could be that cryonics is such a new idea that we are culturally a step or two behind in identifying what it is that we value about existence. So people think about cryonics a while and just conclude they don’t want to do it. (For example, the stories herein.) Why? We call this a ‘weirdness’ or ‘creep’ factor, but we haven’t identified the reason.

When someone values their life, what is it that they value? When we worry about dying, we wor... (read more)

3Roko14y
This comment may be a case of other-optimizing, e.g.: that may be what you value -- but how do you know whether that applies to me?
3byrnema14y
The 'we' population I was referring to was deliberately vague. I don't know how many people have values as described, or what fraction of people who have thought about cryonics and don't choose cryonics this would account for. My main point, all along, is that whether cryonics is the "correct" choice depends on your values. Anti-cryonics "values" can sometimes be easily criticized as rationalizations or baseless religious objections. ('Death is natural', for example.) However, this doesn't mean that a person couldn't have true anti-cryonics values (even very similar-sounding ones). Value-wise, I don't even know whether cryonics is the correct choice for much more than half or much less than half of all persons, but given all the variation in people, I'm pretty sure it's going to be the right choice for at least a handful and the wrong choice for at least a handful.
6Roko14y
Sure. If you don't value your life that much, then cryo is not for you, but I think that many people who refuse cryo don't say "I don't care if I die, my life is worthless to me", and if they were put in a near-mode situation where many of their close friends and relatives had died, but they had the option to make a new start in a society of unprecedentedly high quality of life, they wouldn't choose to die instead. Perhaps I should make an analogy: would it be rational for a medieval peasant to refuse cryo if revival meant waking as a billionaire in contemporary society, with an appropriate level of professional support and rehab from the cryo company? She would have to be at an extreme of low self-value to say "my life without my medieval peasant friends was the only thing that mattered to me", and turn down the opportunity to live a new life of learning and comfort, free of constant pain and hunger.
3Vladimir_M14y
Roko: This is another issue where, in my view, pro-cryonics people often make unwarranted assumptions. They imagine a future with a level of technology sufficient to revive frozen people, and assume that this will probably mean a great increase in per-capita wealth and comfort, like today's developed world compared to primitive societies, only even more splendid. Yet I see no grounds at all for such a conclusion.

What I find much more plausible are the Malthusian scenarios of the sort predicted by Robin Hanson. If technology becomes advanced enough to revive frozen brains in some way, it probably means that it will also be advanced enough to create and copy artificial intelligent minds and dexterous robots for a very cheap price. [Edit to avoid misunderstanding: the remainder of the comment is inspired by Hanson's vision, but based on my speculation, not a reflection of his views.] This seems to imply a Malthusian world where selling labor commands only the most meager subsistence necessary to keep the cheapest artificial mind running, and biological humans are out-competed out of existence altogether.

I'm not at all sure I'd like to wake up in such a world, even if rich -- and I also see some highly questionable assumptions in the plans of people who expect that they can simply leave a posthumous investment, let the interest accumulate while they're frozen, and be revived rich. Even if your investments remain safe and grow at an immense rate, which is itself questionable, the price of a lifestyle that would be considered tolerable by today's human standards may well grow even more rapidly as the Malthusian scenario unfolds.
1Roko14y
The honest answer to this question is that it is possible that you'll get revived into a world that is not worth living in, in which case you can go for suicide. And then there's a chance that you get revived into a world where you are in some terrible situation but not allowed to kill yourself. In this case, you have done worse than just dying.
4jimrandomh14y
That's a risk for regular death, too, albeit a very unlikely one. This possibility seems like Pascal's wager with a minus sign.
0Vladimir_M14y
That said, I am nowhere near certain that a bad future awaits us, nor that the above-mentioned Malthusian scenario is inevitable. However, it does seem to me the most plausible course of affairs given a cheap technology for making and copying minds, and it seems reasonable to expect that such technology would follow from more or less the same breakthroughs that would be necessary to revive people from cryonics.
1Roko14y
I think that we wouldn't actually end up in a malthusian regime -- we'd coordinate so that that didn't happen. Especially compelling is the fact that in these regimes of high copy fidelity, you could end up with upload "clans" that acted as one decision-theoretic entity, and would quickly gobble up lone uploads by the power that their cooperation gave them.
1Roko14y
I think that this is the exact opposite of what Robin predicts: he predicts that if the economy grows at a faster rate because of ems, the best strategy for a human is to hold investments, which would make you fabulously rich in a very short time.
0Vladimir_M14y
That is true -- my comment was worded badly and open to misreading on this point. What I meant is that I agree with Hanson that ems likely imply a Malthusian scenario, but I'm skeptical of the feasibility of the investment strategy, unless it involves ditching the biological body altogether and identifying yourself with a future em, in which case you (or "you"?) might feasibly end up as a wealthy em. (From Hanson's writing I've seen, it isn't clear to me if he automatically assumes the latter, or if he actually believes that biological survival might be an option for prudent investors.) The reason is that in a Malthusian world of cheap AIs, it seems to me that the prices of resources necessary to keep biological humans alive would far outrun any returns on investments, no matter how extraordinary they might be. Moreover, I'm also skeptical if humans could realistically expect their property rights to be respected in a Malthusian world populated by countless numbers of far more intelligent entities.
0Roko14y
Suppose that my biological survival today costs 2,000 MJ of energy per year and 5,000 kg of matter, and that I can spend (say) $50,000 today to buy 10,000 MJ of energy and 5,000 kg of matter. I invest my $50,000 and get cryo. Then the em revolution happens, and the price of these commodities becomes very high, at the same time as the economy (total amount of wealth) grows at, say, 100% per week, corrected for inflation. That means that every week my 10,000 MJ of energy and 5,000 kg of matter investment becomes twice as valuable, so after one week I own 20,000 MJ of energy and 10,000 kg of matter. Though, at the same time, the dollar price of these commodities has also increased a lot. The end result: I get very, very large amounts of energy/matter very quickly, limited only by the speed-of-light limit on how quickly earth-based civilization can grow. All of the above assumes preservation of property rights.
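To make the compounding explicit, here is Roko's claimed trajectory taken entirely at face value (the doubling of real resource holdings each week is his assumption, and it is exactly the step Vladimir_M disputes below):

```python
# Roko's figures: $50,000 today buys this bundle.
energy_mj = 10_000   # MJ of energy
matter_kg = 5_000    # kg of matter

# Assumption (his, not a fact): the investment's claim on real resources
# doubles every week along with the em economy.
for week in range(1, 5):
    energy_mj *= 2
    matter_kg *= 2
    print(f"week {week}: {energy_mj:,} MJ, {matter_kg:,} kg")
# week 1: 20,000 MJ, 10,000 kg ... week 4: 160,000 MJ, 80,000 kg
```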
0Vladimir_M14y
Roko: This is a fallacious step. The fact that risk-free return on investment over a certain period is X% above inflation does not mean that you can pick any arbitrary thing and expect that if you can afford a quantity Y of it today, you'll be able to afford (1+X/100)Y of it after that period. It merely means that if you're wealthy enough today to afford a particular well-defined basket of goods -- whose contents are selected by convention as a necessary part of defining inflation, and may correspond to your personal needs and wants completely, partly, or not at all -- then investing your present wealth will get you the power to purchase a similar basket (1+X/100) times larger after that period. [*]

When it comes to any particular good, the ratio can be in any direction -- even assuming a perfect laissez-faire market, let alone all sorts of market-distorting things that may happen. Therefore, if you have peculiar needs and wants that don't correspond very well to the standard basket used to define the price index, then the inflation and growth numbers calculated using this basket are meaningless for all your practical purposes.

Trouble is, in an economy populated primarily by ems, biological humans will be such outliers. It's enough that one factor critical for human survival gets bid up exorbitantly and it's adios amigos. I can easily think of more than one candidate. From the perspective of an em barely scraping a virtual or robotic existence, a surviving human wealthy enough to keep their biological body alive would seem as if, from our perspective, a whole rich continent's worth of land, capital, and resources was owned by a being whose mind is so limited and slow that it takes a year to do one second's worth of human thinking, while we toil 24/7, barely able to make ends meet. I don't know with how much confidence we should expect that property rights would be stable in such a situation.

----------------------------------------

[*] - To be precise, the con
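A toy numeric illustration of the point above, with entirely made-up numbers: aggregate "real" growth can look splendid while the one good a biological human needs becomes unaffordable.

```python
# Hypothetical numbers for illustration only.
wealth_growth = 2.0                        # your invested wealth doubles in real terms

price_ratio = {"computation": 0.01,        # compute becomes 100x cheaper
               "biological_upkeep": 50.0}  # keeping a body alive becomes 50x dearer

# The "standard basket" is dominated by what typical (em) consumers buy.
basket_weight = {"computation": 0.99, "biological_upkeep": 0.01}

price_index = sum(basket_weight[g] * price_ratio[g] for g in price_ratio)
print(price_index)                                       # ~0.51: measured prices *fall*
print(wealth_growth / price_index)                       # ~3.9x richer by the index...
print(wealth_growth / price_ratio["biological_upkeep"])  # ...yet only 4% as much
                                                         # biological upkeep is affordable
```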
0Roko14y
If the growth above inflation isn't defined in terms of today's standard basket of goods, then is it really growth? I mean, if I defined a changing basket of goods that was the standard one up until 1991, and thereafter was based exclusively upon the cost of sending an email, we would see massive negative inflation and spuriously high growth rates as emails became cheaper to send due to falling computer and network costs. I.e. Robin's prediction of fast growth rates is presumably in terms of today's basket of goods, right? The point of ems is that they will do work that is useful by today's standard, rather than just creating a multiplicity of some (by our standard) useless commodity like digits of pi that they then consume.
5Vladimir_M14y
Roko: You're asking some very good questions indeed! Now think about it a bit more.

Even nowadays, you simply cannot maintain the exact same basket of goods as the standard for any period much longer than a year or so. Old things are no longer produced, and more modern equivalents will (and sometimes won't) replace them. New things appear that become part of the consumption basket of a typical person, often starting as luxury but gradually becoming necessary to live as a normal, well-adjusted member of society. Certain things are no longer available simply because the world has changed to the point where their existence is no longer physically or logically possible. So what sense does it make to compare the "price index" between 2010 and 1950, let alone 1900, and express this ratio as some exact and unique number?

The answer is that it doesn't make any sense. What happens is that government economists define new standard baskets each year, using formalized and complex, but ultimately completely arbitrary criteria for selecting their composition and determining the "real value" of new goods and services relative to the old. Those estimates are then chained to make comparisons between more distant epochs. While this does make some limited sense for short-term comparisons, in the long run, these numbers are devoid of any sensible meaning.

Not to even mention how much the whole thing is a subject of large political and bureaucratic pressures. For example, in 1996, the relevant bodies of the U.S. government concluded that the official inflation figures were making the social security payments grow too fast for their taste, so they promptly summoned a committee of experts, who then produced an elaborate argument that the methodology hitherto used had unsoundly overstated the growth in CPI relative to some phantom "true" value. And so the methodology was revised, and inflation obediently went down. (I wouldn't be surprised if the new CPI math indeed gives much more pro
0Roko14y
Thank you, it's a pleasure to chat with you; we should meet up in real life sometime!
0byrnema14y
I don't think it's a matter of whether you value your life but why. We don't value life unconditionally (say, just a metabolism, or just having consciousness -- both would be considered useless). I wouldn't expect anyone to choose to die, no, but I would predict some people would be depressed if everyone they cared about died and would not be too concerned about whether they lived or not. [I'll add that the truth of this depends upon personality and generational age.] Regarding the medieval peasant, I would expect her to accept the offer but I don't think she would be irrational for refusing. In fact, if she refused, I would just decide she was a very incurious person and she couldn't think of anything special to bring to the future (like her religion or a type of music she felt passionate about.) But I don't think lacking curiosity or any goals for the far impersonal future is having low self-esteem. [Later, I'm adding that if she decided not to take the offer, I would fear she was doing so due to a transient lack of goals. I would rather she had made her decision when all was well.] (If it was free, I definitely would take the offer and feel like I had a great bargain. I wonder if I can estimate how much I would pay for a cryopreservation that was certain to work? I think $10 to $50 thousand, in the case of no one I knew coming with me, but it's difficult to estimate.)

Reason #6 not to sign up: Cryonics is not compatible with organ donation. If you get frozen, you can't be an organ donor.

3Sniffnoy14y
Is that true in general, or only for organizations that insist on full-body cryo?
1CronoDAS14y
AFAICT (from reading a few cryonics websites), it seems to be true in general, but the circumstances under which your brain can be successfully cryopreserved tend to be ones that make you not suitable for being an organ donor anyway.
1[anonymous]14y
Could you elaborate on that? Is cryonic suspension inherently incompatible with organ donation, even when you are going with the neuro option, or does the incompatibility stem from the current obscurity of cryonics? I imagine that organ harvesting could be combined with the early stages of cryonic suspension if the latter were more widely practiced.
7Matt_Duing14y
The cause of death of people suitable to be organ donors is usually head trauma.
2Blueberry14y
Alternatively, that's a good reason not to sign up for organ donation. Organ donation won't increase my well-being or happiness any, while cryonics might. In addition, there's the problem that being an organ donor creates perverse incentives for your death.
3Jack14y
You get no happiness knowing there is a decent chance your death could save the lives of others? Would you turn down a donated organ if you needed one?
0Blueberry14y
It's a nice thought, I guess, but I'd rather not die in the first place. And any happiness I might get from that is balanced out by the risks of organ donation: cryonic preservation becomes slightly less likely, and my death becomes slightly more likely (perverse incentives). If people benefit from my death, they have less of an incentive to make sure I don't die. No. But I'd vote to make post-death organ donation illegal, and I'd encourage people not to donate their organs after they die. (I don't see a problem with donating a kidney while you're still alive.)
0Jack14y
Well, I understand that you will be so much happier if you avoid death for the foreseeable future that cryonics outweighs organ donation. I'm just saying that the happiness from organ donation can't be zero.

The incentives seem to me so tiny as to be a laughable concern. I presume you're talking about doctors not treating you as effectively because they want your organs? Do you have this argument further developed elsewhere? It seems to me a doctor's aversion to letting someone die, fear of malpractice lawsuits and ethics boards are more than sufficient to counter whatever benefit they would get from your organs (which would be what precisely?). I would be more worried about the doctors not liking me or thinking I was weird because I wanted to be frozen, and not working as hard to save me because of that. (ETA: If you're right there should be studies saying as much.)

It seems to me legislation to punish defectors in this cooperative action problem would make sense. Organ donors should go to the top of the transplant lists if they don't already. Am I right that appealing to your sense of justice regarding your defection would be a waste of time?

If your arguments are right I can see how it would be a bad individual choice to be an organ donor (at least if you were signed up for cryonics). But those arguments don't at all entail that banning post-death organ donation would be the best public policy, especially since very few people will sign up for cryonics in the near future. Do you think that the perverse incentives lead to more deaths than the organs save? And from a public interest perspective an organ donor is more valuable than a frozen head. It might be in the public interest to have some representatives from our generation in the future, but there is a huge economic cost to losing 20 years of work from an experienced and trained employee -- a cost which is mitigated little by the economic value of a revived cryonics patient who would likely have no marketab
2magfrump14y
There was a short discussion previously about how cryonics is most useful in cases of degenerative diseases, whereas organ donation is most successful in cases of quick deaths such as those due to car accidents; which is to say that cryonics and organ donation are not necessarily mutually exclusive preparations, because they may emerge from mutually exclusive deaths. Though maybe not, which is why I had asked about organ donation in the first place.
-4taw14y
This is the reason I wouldn't sign up even if it were free (and I am a registered organ donor). If it weren't for that, it would still be too expensive, all the bullshit creative accounting I've seen on this site notwithstanding.
1Will_Newsome14y
Would you consider Alicorn trustworthy enough to determine whether or not the accounting is actually bullshit? She's going through the financial stuff right now, and I could ask her about any hidden fees the cryonauts on Less Wrong have been quiet about.
1Alicorn14y
Um, I'm not a good person to go to for financial advice of any kind. Mostly I'm going to shop around until I find an insurance agent who isn't creepy and wants a non-crippling sum of money.
-1taw14y
How can I estimate whether Alicorn is trustworthy or not? Eliezer has been outright lying about the cost of cryonics in the past.
6radical_negative_one14y
A link or explanation would be relevant here. (ETA: the link)
-1taw14y
I would link it if lesswrong had a reasonable search engine. It does not, so feel free to spend an evening searching past articles about the cost of cryonics. EDIT: this one
3cupholder14y
Are you using the Google sidebar to search this site? It doesn't work for me, so I'm guessing it doesn't work for you? An alternative I prefer is doing Google searches with the 'site:lesswrong.com' term; the Googlebot digs deep enough into the site that it works well.
2radical_negative_one14y
Looks like this is the previous discussion of the topic, for anyone who's interested.
0Oscar_Cunningham14y
This post maybe? http://lesswrong.com/lw/wq/you_only_live_twice/
0Kazuo_Thow14y
We would find it helpful if you could provide some insight into why you think this.

The most common objections (most of them about the infeasibility of cryopreservation) are simply wrong.

Thus triggering the common irrational inference, "If something is attacked with many spurious arguments, especially by religious people, it is probably true."

(It is probably more subtle than this: when you make argument A against X, people listen just until they think they've matched your argument to some other argument B they've heard against X. The more often they've heard B, the faster they are to infer A = B.)

Um, isn't the knowledge of many spurious arguments and no strong ones over a period of time weak evidence that no better argument exists (or at least, has currently been discovered)?

I do agree with the second part of your post about argument matching, though. The problem becomes even more serious when what one hears is often not an argument against X from someone who takes the position, but a strawman argument they have been taught by others for the specific purpose of matching up more sophisticated arguments to it.

5Nick_Tarleton14y
Yes. This is discussed well in the comments on What Evidence Filtered Evidence?.
2PhilGoetz14y
No, because that assumes that the desire to argue about a proposition is the same among rational and insane people. The situation I observe is just the opposite: There are a large number of propositions and topics that most people are agnostic about or aren't even interested in, but that religious people spend tremendous effort arguing for (circumcision, defense of Israel) or against (evolution, life extension, abortion, condoms, cryonics, artificial intelligence). This isn't confined to religion; it's a general principle that when some group of people has an extreme viewpoint, they will A) attract lots of people with poor reasoning skills, B) take positions on otherwise non-controversial issues based on incorrect beliefs, and C) spend lots of time arguing against things that nobody else spends time arguing against, using arguments based on the very flaws in their beliefs that make them outliers to begin with. Therefore, there is a large class of controversial issues on which one side has been argued almost exclusively by people whose reasoning is especially corrupt on that particular issue.
3JoshuaZ14y
I don't think many religious people spend "tremendous effort" arguing against life extension, cryonics or artificial intelligence. For the vast majority of the population, whether religious or not, these issues simply aren't prominent enough to think about. To be sure, when religious individuals do think about these, they more often than not seem to come down on the against side (look, for example, at computer scientist David Gelernter arguing against the possibility of AI). And that may be explainable by general tendencies in religion (especially the level at which religion promotes cached thoughts about the soul and the value of death). But even that is only true to a limited extent.

For example, consider the case of life extension: if we look at Judaism, some Orthodox ethicists have taken very positive views about it. Indeed, my impression is that the Orthodox are more likely to favor life extension than non-Orthodox Jews. My tentative hypothesis for this is that Orthodox Judaism places a very high value on human life and downplays the afterlife at least compared to Christianity and Islam. (Some specific strains of Orthodoxy do emphasize the afterlife a bit more -- some chassidic sects, for example.) However, Conservative and Reform Judaism have been more directly influenced by Christian values and have therefore picked up a stronger connection to Christian values and cached thoughts about death. I don't think, however, that this issue can be exclusively explained by Christianity, since I've encountered Muslims, neopagans, Buddhists and Hindus who have similar attitudes. (The neopagans all grew up in Christian cultures so one could say that they were being influenced by that, but that doesn't hold too much ground given how much neopaganism seems to be a reaction against Christianity.)
0PhilGoetz14y
All I mean to say is this: Suppose you say, "100 people have made arguments against proposition X, and all of them were bad arguments; therefore the probability of finding a good argument against X is some (monotonic increasing) function of 1/100." If X is a proposition that is particularly important to people in cult C because they believe something very strange related to X, and 90 of those 100 arguments were made by people in cult C, then you should believe that the probability of finding a good argument against X is a function of something between 1/10 and 1/100.
0RobinZ14y
This problem is endemic in the affirmative atheism community. It's a sort of Imaginary Positions error.

I told Kenneth Storey, who studies various animals that can be frozen and thawed, about a new $60M government initiative (mentioned in Wired) to find ways of storing cells that don't destroy their RNA. He mentioned that he's now studying the Gray Mouse Lemur, which can go into a low-metabolism state at room temperature.

If the goal is to keep you alive for about 10 years while someone develops a cure for what you have, then this room-temperature low-metabolism hibernation may be easier than cryonics.

(Natural cryonics, BTW, is very different from liquid-nit... (read more)

I object to many of your points, though I express slight agreement with your main thesis (that cryonics is not rational all of the time).

"Weird stuff and ontological confusion: quantum immortality, anthropic reasoning, measure across multiverses, UDTesque 'decision theoretic measure' or 'probability as preference', et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are."

This argument basically reduces to, once you remove t... (read more)

6Will_Newsome14y
No. It more accurately reduces to "we don't really know what the heck existence is, so we should worry even more about these fundamental questions and not presume their answers are inconsequential; taking precautions like signing up for cryonics may be a good idea, but we should not presume our philosophical conclusions will be correct upon reflection." Alright, but I would argue that a date of 2050 is pretty damn late. I'm very much in the 'singularity is near' crowd among SIAI folk, with 2050 as an upper bound. I suspect there are many who would also assign a date much sooner than 2050, but perhaps this was simply typical mind fallacy on my part. At any rate, your 13% is my 5%, probably not the biggest consideration in the scheme of things; but your implicit point is correct that people who are much older than us should give more pause before dismissing this very important conditional probability as irrelevant. Maybe, but a major point of this post is that it is bad epistemic hygiene to use generalizations like 'the vast majority of LW commenters' in a rhetorical argument. You and I both know many people who donate much more than 5% of their income to these kinds of organizations. But I'm talking specifically about assuming that any given argument against cryonics is stupid. Yes, correct people when they're wrong about something, and do so emphatically if need be, but do not assume that because weak arguments against your idea are more common, there do not exist strong arguments that you should presume your audience does not possess. If the atmosphere is primarily based on memetics and rhetoric, then yes; but if it is founded in rationality, then the two should go hand in hand. (At least, my intuitions say so, but I could just be plain idealistic about the power of group epistemic rationality here.) It's not a separate question, it's the question I was addressing. You raised the separate question. :P What about 99% of Less Wrong readers? 99% of the people you
2Gavin14y
Death is bad. The question is whether being revived is good. I'm not sure whether or not I particularly care about the guy who gets unfrozen. I'm not sure how much more he matters to me than anyone else. Does he count as "me?" Is that a meaningful question? I'm genuinely unsure about this. It's not a decisive factor (it only adds uncertainty), but to me it is a meaningful one.

I don't know if this is a self-defense mechanism or actually related to the motives of those promoting cryonics in this group, but I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement. If the intent is to remind me that things I do may later turn out to be not just wrong, but extremely wrong, it works pretty well.

It's a good topic to explore agreement theory, as different declared-intended-rationalists have different conclusions, and can talk somewhat dispassionately about such disagreement... (read more)

I've always taken the "you're crazy not to be signed up for cryonics" meme to be intentional overstatement.

I hadn't thought of this, but if so, it's dangerous rhetoric and just begging to be misunderstood.

On a side note, speaking of "abnormal" and cryonics, apparently Britney Spears wants to sign up with Alcor: http://www.thaindian.com/newsportal/entertainment/britney-spears-wants-to-be-frozen-after-death_100369339.html

I think this can be filed under "any publicity is good publicity".

7Unnamed14y
Is there any way that we could get Britney Spears interested in existential risk mitigation?

It's not obvious that this would be good: it could very well make existential risks research appear less credible to the relevant people (current or future scientists).

6JoshuaZ14y
I was thinking of filing this as an example of Reversed stupidity is not intelligence.
1steven046114y
I'm surprised. Last time it was Paris Hilton and it turned out not to be true, but it looks like there's more detail this time.
2steven046114y
This claims it's a false rumor.
3ShardPhoenix14y
That only cites a "source close to the singer", compared to the detail given by the original rumour. However, given the small prior probability of this being true, I guess it's probably still more likely to be false.

I'm not sure if this is the right place to ask this, or even if it is possible to procure the relevant data, but who is the highest-status person who has opted for cryonics? The wealthiest or the most famous?

Having high status persons adopt cryonics can be a huge boost to the cause, right?

7apophenia14y
It certainly boosts publicity, but most of the people I know of who have signed up for cryonics are either various sorts of transhumanists or celebrities. The celebrities generally seem to do it for publicity or as a status symbol. From the reactions I've gotten telling people about cryonics, I feel it has been mostly a negative social impact. I say this not because people I meet are creeped out by cryonics, but because they specifically mention various celebrities. I think if more scientists or doctors (basically, experts) opted for cryonics it might add credibility. I can only assume that lack of customers for companies like Alcor decreases the chance of surviving cryonics.
3RomanDavis14y
Uhhh... no. People developed the urban legend about Walt Disney for a reason. It's easy to take rich, creative, ingenious, successful people and portray them as eccentric, isolated and out of touch. Think about the dissonance between "How crazy those Scientologists are" and "How successful those celebrities are." We don't want to create a similar dissonance with cryonics.
1Jack14y
It depends on the celebrity. Michael Jackson, not so helpful. But Oprah would be.

Probably my biggest concern with cryonics is that if I was to die at my age (25), it would probably be in a way where I would be highly unlikely to be preserved before a large amount of decay had already occurred. If there was a law in this country (Australia) mandating immediate cryopreservation of the head for those contracted, I'd be much more interested.

Agreed. On the other hand, in order to get laws into effect it may be necessary to first have sufficient numbers of people signed up for cryonics. In that sense, signing up for cryonics might not only save your life, it might spur changes that will allow others to be preserved better (faster), potentially saving more lives.

I get the feeling that this discussion [on various threads] is fast becoming motivated cognition aiming to reach a conclusion that will reduce social tension between people who want to sign up for cryo and people who don't. I.e. "Surely there's some contrived way we can leverage our uncertainties so that you can not sign up and still be defensibly rational, and sign up and be defensibly rational".

E.g. no interest in reaching agreement on cryo success probabilities, when this seems like an absolutely crucial consideration. Is this indicative of people who genuinely want to get to the truth of the matter?

8JoshuaZ14y
This is a valid point, but it is slightly OT to discuss the precise probability for cryonics. I think that one reason people might not be trying to reach a consensus about the actual probability of success is that it may simply require so much background knowledge that one might need to be an expert to reasonably evaluate the subject. (Incidentally, I'm not aware of any sequence discussing what the proper thing to do is when one has to depend heavily on experts. We need more discussion of that.) The fact that there are genuine subject matter experts like de Magalhaes who have thought about this issue a lot and come to the conclusion that it is extremely unlikely, while others who have thought about it consider it likely, makes it very hard to estimate. (Consider for example if someone asks me if string theory is correct. The most I'm going to be able to do is to shrug my shoulders. And I'm a mathematician. Some issues are just really much too complicated for non-experts to work out a reliable likelihood estimate based on their own data.)

It might however be useful to start a subthread discussing pro and anti arguments. To keep the question narrow, I suggest that we simply focus on the technical feasibility question, not on the probability that a society would decide to revive people. I'll start by listing a few:

For:
1) Non-brain animal organs have been successfully vitrified and revived. See e.g. here
2) Humans have been revived from low-oxygen, very cold circumstances with no apparent loss of memory. This has been duplicated in dogs and other small mammals in controlled conditions for upwards of two hours. (However, the temperatures reached are still above freezing.)

Against:
1) Vitrification denatures and damages proteins. This may permanently damage neurons in a way that makes their information content not recoverable. If glial cells have a non-trivial role in thought then this issue becomes even more severe. There's a fair bit of circumstantial evidence for gli
1Roko14y
Yeah, this is a good list. Note Eliezer's argument that partial damage is not necessarily a problem. Also note my post: Rationality, Cryonics and Pascal's Wager.
6PhilGoetz14y
You're trying to get to the truth of a different matter. You need to go one level meta. This post is arguing that either position is plausible. There's no need to refine the probabilities beyond saying something like "The expected reward/cost ratio of signing up for cryonics is somewhere between .1 and 10, including opportunity costs."

EDIT: Nick Tarleton makes a good point in reply to this comment, which I have moved to be footnote 2 in the text.

1Nick_Tarleton14y
This distinction might warrant noting in the post, since it might not be clear that you're only criticizing one position, or that the distinction is really important to keep in mind.

As yet another media reference: I just rewatched the Star Trek TNG episode "The Neutral Zone", which deals with the recovery of three frozen humans from our time. It was really surprising to me how much disregard for human life is shown in this episode. "Why did you recover them, they were already dead." "Oh bugger, now that you revived/healed them we have to treat them as humans." Also surprising is how much insensitivity in dealing with them is shown. When you wake someone from an earlier time, you might send the aliens and the robots out of the room.

Question for the advocates of cryonics: I have heard talk in the news and various places that organ donor organizations are talking about giving priority to people who have signed up to donate their organs. That is to say, if you sign up to be an organ donor, you are more likely to receive a donated organ from someone else should you need one. There is some logic in that in the absence of a market in organs; free riders have their priority reduced.

I have no idea if such an idea is politically feasible (and, let me be clear, I don't advocate it), however, w... (read more)

1gregconen14y
In most cases, signing up for cryonics and signing up as an organ donor are not mutually exclusive. The manner of death most suited to organ donation (rapid brain death with (parts of) the body still in good condition, generally caused by head trauma) is not well suited to cryonic preservation. You'd probably need a directive in case the two do conflict, but such a conflict is unlikely. Alternatively, neuropreservation can, at least in theory, occur with organ donation.
1Roko14y
No, the reasoning being that by the time you're decrepit enough to be in need of an organ, you have relatively little to gain from it (perhaps 15 years of medium-low quality life), and the probability of needing an organ is low ( < 1%), whereas Cryo promises a much larger gain (thousands? of years of life) and a much larger probability of success (perhaps 10%).
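A rough expected-value comparison using Roko's own illustrative numbers (the 15 years, <1%, "thousands?" of years, and 10% are his guesses, not established figures; 1,000 years is used here as a stand-in for "thousands?"):

```python
# Roko's illustrative guesses, not established estimates.
p_need_organ = 0.01       # chance of ever needing a donated organ
organ_gain_years = 15     # medium-low-quality years gained if you receive one

p_cryo_works = 0.10       # his guess at cryonics succeeding
cryo_gain_years = 1_000   # placeholder for "thousands? of years"

print(p_need_organ * organ_gain_years)   # 0.15 expected years from the organ
print(p_cryo_works * cryo_gain_years)    # 100.0 expected years from cryo
```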
0FraserOrr14y
The 15-year gain may be enough to get you over the tipping point where medicine can cure all your ills, which is to say, 15 years might buy you 1000 years. I think you are being pretty optimistic if you think the probability of success of cryonics is 10%. Obviously, no one has any data to go on for this, so we can only guess. However, there are a lot of strikes against cryonics, especially so if only your head gets frozen. In the future will they be able to recreate a whole body from the head only? In the future will your cryogenic company still be in business? If they go out of business does your frozen head have any rights? If technology is designed to restore you, will it be used? Will the government allow it to be used? Will you be one of the first guinea pigs to be tested, and be one of the inevitable failures? Will anyone want an old fuddy-duddy from the far past to come back to life? In the interim has there been an accident, war, or malicious action by eco-terrorists that unfroze your head? And so forth. It seems to me that preserving actual life as long as possible is the best bet.
0[anonymous]14y
In those 15 years, indefinite life extension may be invented, so the calculation is less obvious than that. I haven't done any explicit calculations, but if the mid-21st century is a plausible time for such inventions, then the chances of indefinite life extension through cryonics, though non-negligible, shouldn't be of a different order of magnitude than the chances of indefinite life extension through e.g. quitting smoking or being female.

Thanks for this post. I tend to lurk, and I had some similar questions about the LW enthusiasm for cryo.

Here's something that puzzles me. Many people here, it seems to me, have the following preference order:

pay for my cryo > donation: x-risk reduction (through SIAI, FHI, or SENS) > paying for cryo for others

Of course, for the utilitarians among us, the question arises: why pay for my cryo over risk reduction? (If you just care about others way less than you care about yourself, fine.) Some answer by arguing that paying for your own cryo maximize... (read more)

4Baughn14y
I care more about myself than about others. This is what would be expected from evolution and - frankly - I see no need to alter it. Well, I wouldn't. I suspect that many people who claim they don't are mistaken, as the above preference ordering seems to illustrate. Maximize utility, yes; but utility is a subjective function, as my utility function makes explicit reference to myself.

If people believe that a technological singularity is imminent, then they may believe that it will happen before they have a significant chance of dying

This only makes sense given large fixed costs of cryonics (but you can just not make it publicly known that you've signed up for a policy, and the hassle of setting one up is small compared to other health and fitness activities) and extreme (dubious) confidence in quick technological advance, given that we're talking about insurance policies.

1Roko14y
To put it another way, if you correctly take into account structural uncertainty about the future of the world, you can't be that confident that the singularity will happen in your lifetime.
1Will_Newsome14y
Note that I did not make any arguments against the technological feasibility of cryonics, because they all suck. Likewise, and I'm going to be blunt here, all arguments against the feasibility of a singularity that I've seen also suck. Taking into account structural uncertainty around nebulous concepts like identity, subjective experience, measure, et cetera, does not lead to any different predictions around whether or not a singularity will occur (but it probably does have strong implications on what type of singularity will occur!). I mean, yes, I'm probably in a Fun Theory universe and the world is full of decision theoretic zombies, but this doesn't change whether or not an AGI in such a universe looking at its source code can go FOOM.
2CarlShulman14y
Will, the singularity argument above relies on not just the likely long-term feasibility of a singularity, but the near-certainty of one VERY soon, so soon that fixed costs like the inconvenience of spending a few hours signing up for cryonics defeat the insurance value. Note that the cost of life insurance for a given period scales with your risk of death from non-global-risk causes in advance of a singularity. With reasonable fixed costs, that means something like assigning 95%+ probability to a singularity in less than five years. Unless one has incredible private info (e.g. working on a secret government project with a functional human-level AI) that would require an insane prior.
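A minimal sketch of the trade-off Carl is pointing at, with placeholder numbers (none of these figures come from the thread; they only illustrate that a modest fixed cost matters little unless a singularity is nearly certain to arrive very soon):

```python
# All numbers are hypothetical placeholders.
def expected_cryo_benefit(p_singularity_per_year, years=5,
                          p_death_per_year=0.001,   # young, healthy person
                          p_cryo_works=0.05,
                          value_of_revival=10_000_000):
    """Expected value of being signed up over `years`, counting only deaths
    that occur before any singularity (after which the policy is moot)."""
    p_still_relevant = 1.0   # alive, and no singularity yet
    benefit = 0.0
    for _ in range(years):
        p_die_now = p_still_relevant * (1 - p_singularity_per_year) * p_death_per_year
        benefit += p_die_now * p_cryo_works * value_of_revival
        p_still_relevant *= (1 - p_singularity_per_year) * (1 - p_death_per_year)
    return benefit

fixed_cost = 500  # hours of paperwork etc., valued in dollars
for p in (0.05, 0.50, 0.95):
    print(p, round(expected_cryo_benefit(p)), fixed_cost)
# With these placeholders, the expected benefit only falls to the level of the
# fixed cost once the annual singularity probability is around 0.5, i.e. a
# ~97% chance of a singularity within five years.
```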
2Will_Newsome14y
I never argued that this objection alone is enough to tip the scales in favor of not signing up. It is mostly this argument combined with the idea that loss of measure on the order of 5-50% really isn't all that important when you're talking about multiverse-affecting technologies; no, really, I'm not sure 5% of my measure is worth having to give up half a Hershey's bar every day, when we're talking crazy post-singularity decision theoretic scenarios from one of Escher's worst nightmares. This is even more salient if those Hershey bars (or airport parking tickets or shoes or whatever) end up helping me increase the chance of getting access to infinite computational power.
0steven046114y
Wut. Is this a quantum immortality thing?
1Will_Newsome14y
No, unfortunately, much more complicated and much more fuzzy. Unfortunately it's a Pascalian thing. Basically, if post-singularity (or pre-singularity if I got insanely lucky for some reason - in which case this point becomes a lot more feasible) I get access to infinite computing power, it doesn't matter how much of my measure gets through, because I'll be able to take over any 'branches' I could have been able to reach with my measure otherwise. This relies on some horribly twisted ideas in cosmology / game theory / decision theory that will, once again, not fit in the margin. Outside view, it's over a 99% chance these ideas are totally wrong, or 'not even wrong'.
0steven046114y
My understanding was that in policies like the one Roko was describing you're not paying year by year; you're paying for a lifetime thing where in the early years you're mostly paying for the rate not to go up in later years. Is this inaccurate? If it's year by year, $1/day seems expensive on a per-life basis given that the population-wide rate of death is something like 1 in 1000 for young people, probably much less for LWers and much less still if you only count the ones leaving preservable brains.
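Making steven0461's back-of-envelope figure explicit (his figures are the $1/day and the 1-in-1000 death rate; the 30% "preservable" fraction is my own hypothetical placeholder):

```python
annual_premium = 365.0        # ~$1/day, if it really were pay-as-you-go
p_death_per_year = 1 / 1000   # rough all-cause death rate for a young person
p_preservable = 0.3           # hypothetical fraction of deaths leaving a preservable brain

print(annual_premium / p_death_per_year)                    # ~$365,000 per death covered
print(annual_premium / (p_death_per_year * p_preservable))  # ~$1,217,000 per preservable death
```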
1steven046114y
How serious 0-10, and what's a decision theoretic zombie?
1Will_Newsome14y
A being that has so little decision theoretic measure across the multiverse as to be nearly non-existent due to a proportionally infinitesimal amount of observer-moment-like-things. However, the being may have very high information theoretic measure to compensate. (I currently have an idea that Steve thinks is incorrect arguing for information theoretic measure to correlate roughly to the reciprocal of decision theoretic measure, which itself is very well-correlated with Eliezer's idea of optimization power. This is all probably stupid and wrong but it's interesting to play with the implications (like literally intelligent rocks, me [Will] being ontologically fundamental, et cetera).) I'm going to say that I am 8 serious 0-10 that I think things will turn out to really probably not add up to 'normality', whatever your average rationalist thinks 'normality' is. Some of the implications of decision theory really are legitimately weird.
1steven046114y
What do you mean by decision theoretic and information theoretic measure? You don't come across as ontologically fundamental IRL.
2Will_Newsome14y
Hm, I was hoping to magically get at the same concepts you had cached but it seems like I failed. (Agent) computations that have lower Kolmogorov complexity have greater information theoretic measure in my twisted model of multiverse existence. Decision theoretic measure is something like the significantness you told me to talk to Steve Rayhawk about: the idea that one shouldn't care about events one has no control over, combined with the (my own?) idea that having oneself cared about by a lot of agent-computations and thus made more salient to more decisions is another completely viable way of increasing one's measure. Throw in a judicious mix of anthropic reasoning, optimization power, ontology of agency, infinite computing power in finite time, 'probability as preference', and a bunch of other mumbo jumbo, and you start getting some interesting ideas in decision theory. Is this not enough to hint at the conceptspace I'm trying to convey? "You don't come across as ontologically fundamental IRL." Ha, I was kind of trolling there, but something along the lines of 'I find myself as me because I am part of the computation that has the greatest proportional measure across the multiverse'. It's one of many possible explanations I toy with as to why I exist. Decision theory really does give one the tools to blow one's philosophical foot off. I don't take any of my ideas too seriously, but collectively, I feel like they're representative of a confusion that not only I have.
0Jordan14y
If you were really the only non-zombie in a Fun Theory universe then you would be the AGI going FOOM. What could be funner than that?
1Will_Newsome14y
Yeah, that seems like a necessary plot point, but I think it'd be more fun to have a challenge first. I feel like the main character(s) should experience the human condition or whatever before they get a taste of true power, or else they'd be corrupted. First they gotta find something to protect. A classic story of humble beginnings.
2Jordan14y
Agreed. Funnest scenario is experiencing the human condition, then being the first upload to go FOOM. The psychological mind games of a transcending human. Understanding fully the triviality of human emotions that once defined you, while at the same moment modifying your own soul in an attempt to grasp onto your lingering sanity, knowing full well that the fate of the universe and billions of lives rests on the balance. Sounds like a hell of a rollercoaster.
0JoshuaZ14y
Not necessarily. Someone may for example put a very high confidence in an upcoming technological singularity but put a very low confidence on some other technologies. To use one obvious example, it is easy to see how someone would estimate the chance of a singularity in the near future to be much higher than the chance that we will have room temperature superconductors. And you could easily assign a high confidence to one estimate for one technology and not a high confidence in your estimate for another. (Thus for example, a solid state physicist might be much more confident in their estimate for the superconductors). I'm not sure what estimates one would use to reach this class of conclusion with cryonics and the singularity, but at first glance this is a consistent approach.
0Roko14y
Logical consistency, whilst admirably defensible, is way too weak a condition for a belief to satisfy before I call it rational. It is logically consistent to assign probability 1-10^-10 to the singularity happening next year.
1JoshuaZ14y
Right, but if it fits minimal logical consistency, it means that there's some thinking that needs to go on. And having slept on this, I can now give other plausible scenarios for someone to have this sort of position: if, for example, someone puts a high probability on a coming singularity, but a low probability on effective nanotech ever being good enough to restore brain function. If you believe that the vitrification procedure damages neurons in a fashion that is likely to permanently erase memory, then this sort of attitude would make sense.

Not signing up for cryonics is a rationality error on my part. What stops me is an irrational impulse I can't defeat: I seem to subconsciously value "being normal" more than winning in this particular game. It is similar to byrnema's situation with religion a while ago. That said, I don't think any of the enumerated arguments against cryonics actually work. All such posts feel like they're writing the bottom line in advance.

Quite embarrassingly, my immediate reaction was 'What? Trying to be normal? That doesn't make sense. Europeans can't be normal anyway.' I am entirely unsure as to what cognitive process managed to create that gem of an observation.

8cousin_it14y
I'm a Russian living in Moscow, so I hardly count as a European. But as perceptions of normality go, the most "normal" people in the world to me are those from the poor parts of Europe and the rich parts of the 3rd world, followed by richer Europeans (internal nickname "aliens"), followed by Americans (internal nickname "robots"). So if the scale works both ways, I'd probably look even weirder to you than the average European.
3Blueberry14y
I would love to hear more about how you see the behavior of Americans, and why you see us as "robots"!

I feel that Americans are more "professional": they can perform a more complete context-switch into the job they have to do and the rules they have to follow. In contrast, a Russian at work is usually the same slacker self as the Russian at home, or sometimes the same unbalanced work-obsessed self.

3Will_Newsome14y
What is your impression of the 'weirdness' of the Japanese culture? 'Cuz it's pretty high up there for me.
1cousin_it14y
I'm not judging culture, I'm judging people. Don't personally know anyone from Japan. Know some Filipinos and they seemed very "normal" and understandable to me, moreso than Americans.
2Will_Newsome14y
I wanted to visit Russia and Ukraine anyway, but this conversation has made me update in favor of the importance of doing so. I've never come into contact with an alien before. I've heard, however, that ex-Soviets tend to have a more live-and-let-live style of interacting with people who look touristy than, for example, Brazil or Greece, so perhaps it will take an extra effort on my part to discover if there really is a tangible aspect of alienness.

I'm new here, but I think I've been lurking since the start of the (latest, anyway) cryonics debate.

I may have missed something, but I saw nobody claiming that signing up for cryonics was the obvious correct choice -- it was more people claiming that believing that cryonics is obviously the incorrect choice is irrational. And even that is perhaps too strong a claim -- I think the debate was more centred on the probability of cryonics working, rather than the utility of it.

4Blueberry14y
If I didn't explicitly say so before: signing up for cryonics is the obvious correct choice.
4ShardPhoenix14y
At one point Eliezer was literally accusing people who don't sign their kids up for cryonics of "child abuse".

"If you don't sign up your kids for cryonics then you are a lousy parent." - E.Y.

1ShardPhoenix14y
Yeah, looks like I misremembered, but it's essentially the same thing for the purpose of illustrating to the OP that some people apparently do think that cryonics is the obvious correct choice.
6cupholder14y
Literally?
6Will_Newsome14y
Um, why would anyone vote this down? It's bad juju to put quote marks around things someone didn't actually say, especially when you disagree with the person you're mischaracterizing. Anyway, thanks for the correction, cupholder.
3ShardPhoenix14y
Oops, I knew I should have actually looked that up. The difference between "lousy parent" and "child abuse" is only a matter of degree though - Eliezer is still claiming that cryonics is obviously right, which was the point of contention.
2NancyLebovitz14y
It's a difference of degree which matters, especially since people are apt to remember insults and extreme statements.

Is it so irrational to not fear death?

Surely you aren't implying that a desire to prolong one's lifespan can only be motivated by fear.

3Paul Crowley14y
No, that could be perfectly rational, but many who claim not to fear death tend to look before crossing the road, take medicine when sick and so on.
0Vladimir_Nesov14y
It is rational for a being-who-has-no-preference-for-survival, but it's not obvious that any human, however unusual or deformed, can actually have this sort of preference.
1Vladimir_M14y
Lots of people demonstrate a revealed preference for non-survival by committing suicide and a variety of other self-destructive acts; others willingly choose non-survival as the means towards an altruistic (or some other sort of) goal. Or do you mean that it is not obvious that humans could lack the preference for survival even under the most favorable state of affairs?
0Vladimir_Nesov14y
Revealed preference as opposed to actual preference, what they would prefer if they were much smarter, knew much more, had unlimited time to think about it. We typically don't know our actual preference, and don't act on it.
2jasticE14y
If the actual preference is neither acted upon, nor believed in, how is it a preference?
2Vladimir_Nesov14y
It is something you won't regret giving as a goal to an obsessive world-rewriting robot that takes what you say its goals are really seriously and very literally, without any way for you to make corrections later. Most revealed preferences, you will regret, exactly for the reasons they differ from the actual preferences: on reflection, you'll find that you'd rather go with something different. See also this thread.
2jasticE14y
That definition may be problematic with respect to life-and-death decisions such as cryonics: once I am dead, I am not around to regret any decision. So any choice that leads to my death could not be considered bad. For instance, I will never regret not having signed up for cryonics. I may however regret doing it if I get awakened in the future and my quality of life is too low. On the other hand, I am thinking about it out of sheer curiosity about the future. Thus, signing up would simply help me increase my current utility by having a hope of more future utility. I am just noticing that this makes the decision accessible to your definition of preference again, by posing the question to myself: "If I signed up for cryonics today, would I regret the [cost of the] decision tomorrow?"
2Vladimir_M14y
This, however, is not the usual meaning of the term "preference." In the standard usage, this word refers to one's favored option in a given set of available alternatives, not to the hypothetical most favorable physically possible state of the world (which, as you correctly note, is unlikely to be readily imaginable). If you insist on using the term with this meaning, fair enough; it's just that your claims sound confusing when you don't include an explanation about your non-standard usage. That said, one problem I see with your concept of preference is that, presumably, the actions of the "obsessive world-rewriting robot" are supposed to modify the world around you to make it consistent with your preferences, not to modify your mind to make your preferences consistent with the world. However, it is not at all clear to me whether a meaningful boundary between these two sorts of actions can be drawn.
0Nick_Tarleton14y
Preference in this sense is a rigid designator, defined over the world but not determined by anything in the world, so modifying my mind couldn't make my preference consistent with the world; a robot implementing my preference would have to understand this.
2ata14y
As with most (all?) questions of whether an emotion is rational, it depends on what you value and what situation you're facing. If you can save a hundred lives by risking yours, and there's no less risky way nor (hypothetically) any way for you to save more people by other means while continuing to live, and you want to save lives, and if fear of death would stop you from going through with it, then it's irrational to fear death in that case. But in general, when you're not in a situation like that, you should feel as strongly as necessary whatever emotion best motivates you to keep living and avoid things that would stop you from living (assuming you like living). Whether that's fear of death or love of life or whatever else, feel it. If you're talking about "fear of death" as in constant paranoia over things that might kill you, then that's probably irrational for most people's purposes. Or if you're not too attached to being alive, then it's not too irrational to not fear death, though that's an unfortunate state of affairs. But for most people, generally speaking, I don't see anything irrational about normal levels of fear of death.
5Vladimir_Nesov14y
(Keeping in mind the distinction between believing that you are not too attached to being alive and actually not having a strong preference for being alive, and the possibility of the belief being incorrect.)
-1Vladimir_Nesov14y
Yes, it seems to be irrational, even if you talk about fear in particular and not preferring-to-avoid in general. (See also: Emotion, Reversal test.)
2Jowibou14y
Since I can see literally nothing to fear in death - in nonexistence itself - I don't really understand why cryonics is seen by so many here as such an essentially "rational" choice. Isn't a calm acceptance of death's inevitability preferable to grasping at a probably empty hope of renewed life simply to mollify one's instinct for survival? I live and value my life, but since post-death I won't be around to feel one way or another about it, I really don't see why I should not seek to accept death rather than counter it. In its promise of "eternal" life, cryonics has the whiff of religion to me.
6Morendil14y
It's certainly best to accept that death is inevitable if you know for a fact that death is inevitable. Which emotion should accompany that acceptance (calm, depression, etc.) depends on particular facts about death - and perhaps some subjective evaluation. However, the premise seems very much open to question. Death is not "inevitable"; it strikes me as something very much evitable, that is, something which "can be avoided". People used to die when their teeth went bad: dental care has provided ways to avoid that kind of death. People used to die when they suffered an infarction, the consequences of which were by and large unavoidable. Defibrillators are a way to avoid that. And so on. Historically, every person who ever lived has died before reaching two hundred years of age; but that provides no rational grounds for assuming a zero probability that a person can enjoy a lifespan vastly exceeding that number. Is it "inevitable" that my life shall be confined to a historical lifespan? Not (by definition) if there is a way to avoid it. Is there a way to avoid it? Given certain reasonable assumptions as to what consciousness and personal identity consist of, there could well be. I am not primarily the cells in my body; I am still me if these cells die and get replaced by functional equivalents. I suspect that I am not even primarily my brain, i.e. that I would still be me if the abstract computation that my brain implements were reproduced on some other substrate. This insight - "I am a substrate-independent computation" - builds on relatively recent scientific discoveries, so it's not surprising it is at odds with historical culture. But it certainly seems to undermine the old saw "death comes to all". Is it rational to feel hopeful once one has assigned substantial probability to this insight being correct? Yes. The corollary of this insight is that death, by which I mean information-theoretical death (which historically has always followed physical death), holds no particular
1Jowibou14y
Good arguments, and I largely agree. However, postponable does not equal evitable. At some point any clear-minded self (regardless of the substratum) is probably going to have to accept that it is either going to end or be transformed to the point where the definition of the word "self" is getting pretty moot. I guess my point remains that post-death nonexistence contains absolutely zero horrors in any case. In a weirdly aesthetic sense, the only possible perfect state is non-existence. To paraphrase Sophocles, perhaps the best thing is never to have been born at all. Now, given a healthy love of life and a bit of optimism, it feels best to soldier on, but to hope really to defeat death is a delusional escape from the mature acceptance of death. None of those people who now survive their bad teeth or infarctions have had their lives "saved" (an idiotic metaphor), merely prolonged. Now if that's what you want, fine - but it strikes me as irrational as a way to deal with death itself.
1Morendil14y
Let's rephrase this with the troublesome terms unpacked as per the points you "largely agree" with: "to hope for a life measured in millennia is a delusional escape from the mature acceptance of a hundred-year lifespan". In a nutshell: no! Hoping to see a hundred was not, in retrospect, a delusional escape from the mature acceptance of dying at forty-something, which was the lot of prehistoric humans. We don't know yet what changes in technology are going to make the next "normal" lifespan, but we know more about it than our ancestors did. I can believe that it strikes you as weird, and I understand why it could be so. A claim that some argument is irrational is a stronger and less subjective claim. You need to substantiate it. Your newly introduced arguments are: a) if you don't die you will be transformed beyond any current sense of identity, and b) "the only possible perfect state is non-existence". The latter I won't even claim to understand - given that you choose to continue this discussion rather than go jump off a tall building, I can only assume your life isn't a quest for a "perfect state" in that sense. As to the former, I don't really believe it. I'm reasonably certain I could live for millennia and still choose, for reasons that belong only to me, to hold on to some memories from (say) the year 2000 or so. Those memories are mine, no one else on this planet has them, and I have no reason to suppose that someone else would choose to falsely believe the memories are theirs. I view identity as being, to a rough approximation, memories and plans. Someone who has (some of) my memories and shares (some of) my current plans, including plans for a long and fun-filled life, is someone I'd identify as "me" in a straightforward sense, roughly the same sense in which I expect I'll be the same person in a year's time, or the same sense that makes it reasonable for me to consider plans for my retirement.
2Jowibou14y
Perhaps my discomfort with all this is in cryonics' seeming affinity with the sort of fear-mongering about death that's been the bread and butter of religion for millennia. It just takes it as a fundamental law of the universe that life is better than non-life - not just in practice, not just in terms of our very real, human, animal desire to survive (which I share) - but in some sort of essential, objective, rational, blindingly obvious way. A way that smacks of dogma to my ears. If you really want to live for millennia, go ahead. Who knows, I might decide to join you. But in practice I think cryonics for many people is more a matter of escaping death, of putting our terrified, self-centered, hubristic fear of mortality at the disposal of another dubious enterprise. As for my own view of "identity": I see it as a kind of metapattern, a largely fictional story we tell ourselves about the patterns of our experience as actors, minds and bodies. I can't quite bring myself to take it so seriously that I'm willing to invest in all kinds of extraordinary measures aimed at its survival. If I found myself desperately wanting to live for millennia, I'd probably just think "for chrissakes get over yourself".
9Will_Newsome14y
Please, please, please don't let the distaste of a certain epistemic disposition interfere with a decision that has a very clear potential for vast sums of positive or negative utility. Argument should screen off that kind of perceived signaling. Maybe it's true that there is a legion of evil Randian cryonauts that only care about sucking every last bit out of their mortal lives because the Christian background they've almost but not quite forgotten raised them with an almost pitiable but mostly contemptible fear of death. Folks like you are much more enlightened and have read up on your Hofstadter and Buddhism and Epicureanism; you're offended that these death-fearing creatures that are so like you didn't put in the extra effort to go farther along the path of becoming wiser. But that shouldn't matter: if you kinda sorta like living (even if death would be okay too), and you can see how cryonics isn't magical and that it has at least a small chance of letting you live for a long time (long enough to decide if you want to keep living, at least!), then you don't have to refrain from duly considering those facts out of a desire to signal distaste for the seemingly bad epistemic or moral status of those who are also interested in cryonics and the way their preachings sound like the dogma of a forgotten faith. Not when your life probabilistically hangs in the balance. (By the way, I'm not a cryonaut and don't intend to become one; I think there are strong arguments against cryonics, but I think the ones you've given are not good.)
0NancyLebovitz14y
I'm not so sure that, if it's possible to choose to keep specific memories, it will be impossible to record and replay memories from one person to another. It might be a challenge to do so from one organic brain to another, but it seems unlikely to be problematic between uploads of different people, unless you get Robin Hanson's uneditable spaghetti-code uploads. There still might be some difference in experiencing the memory, because different people would notice different things in it.
2Morendil14y
Perhaps "replay" memories has the wrong connotations - the image it evokes for me is that of a partly transparent overlay over my own memories, like a movie overlaid on top of another. That is too exact. What I mean by keeping such memories is more like being able, if people ask me to tell them stories about what it was like back in 2010, to answer somewhat the same as I would now - updating to conform to the times and the audience. This is an active process, not a passive one. Next year I'll say things like "last year when we were discussing memory on LW". In ten years I might say "back in 2010 there was this site called LessWrong, and I remember arguing this and that way about memory, but of course I've learned a few things since so I'd now say this other". In a thousand years perhaps I'd say "back in those times our conversations took place in plain text over Web browsers, and as we only approximately understood the mind, I had these strange ideas about 'memory' - to use a then-current word". Keeping a memory is a lot like passing on a story you like. It changes in the retelling, though it remains recognizable.
4Vladimir_Nesov14y
Apply this argument to drug addiction: "I value not being an addict, but since post-addiction I will want to continue experiencing drugs, and I-who-doesn't-want-to-be-an-addict won't be around, I really don't see why I should stay away from becoming an addict". See the problem? Your preferences are about the whole world, with all of its past, present and future, including the time when you are dead. These preferences determine your current decisions; the preferences of future-you or of someone else are not what makes you make decisions at present.
-1Jowibou14y
I suppose I'd see your point if I believed that drug addiction was inevitable and knew that everyone in the history of everything had eventually become a drug addict. In short, I'm not sure the analogy is valid. Death is a special case, especially since "the time when you are dead" is from one's point of view not a "time" at all. It's something of an oxymoron. After death there IS no time - past present or future.
3Vladimir_Nesov14y
Whether something is inevitable is not an argument about its moral value. Have you read the reversal test reference? Please believe in physics.
0Jowibou14y
1) Who said anything about morality? I'm asking for a defence of the essential rationality of cryonics. 2) Please read the whole paragraph and try to understand the subjective point of view - or the lack thereof post-death. (Which strikes me as the essential point of reference when talking about fear of death.)
2Vladimir_Nesov14y
See What Do We Mean By "Rationality"?. When you ask about a decision, its rationality is defined by how well it allows you to achieve your goals, and "moral value" refers to the way your goals evaluate specific options, with the options of higher "moral value" being the same as the options preferred according to your goals. Consider the subjective point of view of yourself-now, on the situation of yourself dying, or someone else dying for that matter, not the point of view of yourself-in-the-future or the subjective point of view of someone-else. It's you-now that needs to make the decision, and whose rationality we are discussing.
0Jowibou14y
Clearly, I'm going to need to level up about this. I really would like to understand it in a satisfactory way, not just play a rhetorical game. That said, the phrase "the situation of yourself dying" strikes me as an emotional ploy. The relevant (non)"situation" is complete subjective and objective non-existence, post death. The difficulty, pain, etc., of "dying" are not at issue here. I will read your suggestions and see if I can reconcile all this. Thanks.
2Vladimir_Nesov14y
This wasn't my intention. You can substitute that phrase with, say, "Consider the subjective point of view of yourself-now, on the situation of yourself being dead for a long time, or someone else being dead for a long time for that matter." The salient part was supposed to be the point of view, not what you look at from it.
-1Jowibou14y
Fair enough, but I still think that the "situation of yourself being dead" is ploy-like in that it imagines non-existence as a state or situation rather than an absence of state or situation. Like mistaking a map for an entirely imaginary territory.
1Vladimir_Nesov14y
You can think about a world that doesn't contain any minds, and yours in particular. The property of a world to not contain your mind does not say "nothing exists in this world", it says "your mind doesn't exist in this world". Quite different concepts.
0Jowibou14y
Of course I can think about such a world. Where people get into trouble is where they think of themselves as "being dead" in such a world rather than simply "not being" i.e. having no more existence than anything else that doesn't exist. It's a distinction that has huge implications and rarely finds its way into the discussion. No matter how rational people try to be, they often seem to argue about death as if it were a state of being - and something to be afraid of.
1Vladimir_Nesov14y
I give up for now, and suggest reading the sequences, maybe in particular the guide to words and map-territory.
0Jowibou14y
Clearly some of my underlying assumptions are flawed. There's no doubt I could be more rigorous in my use of the terminology. Still, I can't help but feel that some of the concepts in the sequences obfuscate as much as they clarify on this issue. Sorry if I have wasted your time. Thanks again for trying.
-2timtyler14y
Re: "Is it so irrational to not fear death?" Fear of death should be "managable": http://en.wikipedia.org/wiki/Terror_management_theory#Criticism

I don't like long-term cryonics for the following reasons:
1) If an unmodified Violet were revived, she would not be happy in the far future.
2) If a sufficiently modified Violet were revived, she would not be me.
3) I don't place a large value on there being a "Violet" in the far future.
4) There is a risk that my values and the values of the beings waking Violet up would be incompatible, and avoiding any possible "fixing" of my brain is a very high priority.
5) Thus I don't want to be revived by the far future, and death without cryonics seems a safe way to ensure that.

0DSimon14y
What makes you sure of this?

Just noting that buried in the comments Will has stated that he thinks the probability that cryo will actually save your life is one in a million -- 10^-6 -- (with some confusion surrounding the technicalities of how to actually assign that and deal with structural uncertainty).

I think that we need to iron out a consensus probability before this discussion continues.

Edit: especially since if this probability is correct, then the post no longer makes sense...

4Will_Newsome14y
Correction: not 'you', me specifically. I'm young, physically and psychologically healthy, and rarely find myself in situations where my life is in danger (the most obvious danger is of course car accidents). It should also be noted that I think a singularity is a lot nearer than your average singularitarian does, and think the chance of me dying a non-accidental/non-gory death is really low. I'm afraid that 'this discussion' is not the one I originally intended with this post: do you think it is best to have it here? I'm afraid that people are reading my post as taking a side (perhaps due to a poor title choice) when in fact it is making a comment about the unfortunate certainty people seem to consistently have on both sides of the issue. (Edit: Of course, this post does not present arguments for both sides, but simply attempts to balance the overall debate in a fairer direction.)
0Roko14y
Indeed, perhaps not the best place to discuss. But it is worth thinking about this as it does make a difference to the point at issue.
0Will_Newsome14y
Should we nominate a victim to write a post summarizing various good points either for or against signing up for cryonics (not the feasibility of cryonics technologies!), while taking care to realize that preferences vary and various arguments have different weights depending on subjective interpretations? I would love to nominate Steve Rayhawk because it seems right up his alley, but I'm afraid he wouldn't like to be spotlighted. I would like to nominate Steven Kaas if he were willing. (Carl Shulman also comes to mind but I suspect he's much too busy.)
0steven046114y
(edit) I guess I don't fully understand how the proposed post would differ from this one (doesn't it already cover some of the "good points against" part?), and I've also always come down on the "no" side more than most people here.
0Will_Newsome14y
I think I missed some decent points against (one of which is yours) and the 'good arguments for' do not seem to have been collected in a coherent fashion. If they were in the same post, written by the same person, then there's less of a chance that two arguments addressing the same point would talk past each other. I think that you wouldn't have to suggest a conclusion, and could leave it completely open to debate. I'm willing to bet most people will trust you to unbiasedly and effectively put forth the arguments for both sides. (I mean, what with that great quote about reconstruction from corpses and all.)
1PhilGoetz14y
I don't think so - the points in the post stand regardless of the probability Will assigns. Bringing up other beliefs of Will is an ad hominem argument. Ad hominem is a pretty good argument in the absence of other evidence, but we don't need to go there today.
2Roko14y
It wasn't intended as an ad-hom argument. The point is simply that if people have widely varying estimates of how likely cryo is to work (0.000001 versus say 0.05 for Robin Hanson and say 0.1 for me), we should straighten those out before getting on to other stuff, like whether it is plausible to rationally reject it. It just seems silly to me that the debate goes on in spite of no effort to agree on this crucial parameter. If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.
3Will_Newsome14y
Once again, my probability estimate was for myself. There are important subjective considerations, such as age and definition of identity, and important sub-disagreements to be navigated, such as AI takeoff speed or likelihood of Friendliness. If I was 65 years old, and not 18 like I am, and cared a lot about a very specific me living far into the future, which I don't, and believed that a singularity was in the distant future, instead of the near-mid future as I actually believe, then signing up for cryonics would look a lot more appealing, and might be the obviously rational decision to make.
1Roko14y
Most people who are considering cryo here are within 10 years of your age. In particular, I am only 7 years older. 7 years doesn't add up to moving from 0.000001 to 0.1, so one of us has a false belief.
2Will_Newsome14y
What?! Roko, did you seriously not see the two points I had directly after the one about age? Especially the second one?! How is my lack of a strong preference to stay alive into the distant future a false preference? Because it's not a false belief.
3Roko14y
I agree with you that not wanting to be alive in the distant future is a valid reason to not sign up for cryo, and I think that if that's what you want, then you're correct to not sign up.
0Will_Newsome14y
Okay. Like I said, the one in a million thing is for myself. I think that most people, upon reflection (but not so much reflection as something like CEV requires), really would like to live far into the future, and thus should have probabilities much higher than 1 in a million.
1Roko14y
How is the probability dependent upon whether you want to live into the future? Surely either you get revived or not? Or do you mean something different than I do by this probability? Do you mean something different than I do by the term "probability"?
0Will_Newsome14y
We were talking about the probability of getting 'saved', and 'saved' to me requires that the future is suited such that I will upon reflection be thankful that I was revived instead of those resources being used for something else I would have liked to happen. In the vast majority of post-singularity worlds I do not think this will be the case. In fact, in the vast majority of post-singularity worlds, I think cryonics becomes plain irrelevant. And hence my sorta-extreme views on the subject. I tried to make it clear in my post and when talking to both you and Vladimir Nesov that I prefer talking about 'probability that I will get enough utility to justify cryonics upon reflection' instead of 'probability that cryonics will result in revival, independent of whether or not that will be considered a good thing upon reflection'. That's why I put in the abnormally important footnote.
0Roko14y
Oh, I see, my bad, apologies for the misunderstanding. In which case, I ask: what is your probability that if you sign up for cryo now, you will be cryopreserved and revived (i.e. that your brain-state will be faithfully restored)? (This being something that you and I ought to agree on, and ought to be roughly the same replacing "Will" with "Roko")
0Will_Newsome14y
Cool, I'm glad to be talking about the same thing now! (I guess any sort of misunderstanding/argument causes me a decent amount of cognitive burden that I don't realize was there until after it is removed. Maybe a fear of missing an important point that I will be embarrassed about having ignored upon reflection. I wonder if Steve Rayhawk experiences similar feelings on a normal basis?) Well, here's a really simple, mostly qualitative analysis, with the hope that "Will" and "Roko" should be totally interchangeable. Option 1: Will signs up for cryonics.
* uFAI is developed before Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.
* uFAI is developed after Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.
* FAI is developed before Will is cryopreserved. Signing up for cryonics never gets a chance to work for Will specifically.
* FAI is developed after Will is cryopreserved. Cryonics might work, depending on the implementation and results of things like CEV. This is a huge question mark for me. Something close to 50% is probably appropriate, but at times I have been known to say something closer to 5%, based on considerations like 'An FAI is not going to waste resources reviving you: rather, it will spend resources on fulfilling what it expects your preferences probably were. If your preferences mandate you being alive, then it will do so, but I suspect that most humans upon much reflection and moral evolution won't care as much about their specific existence.' Anna Salamon and I think Eliezer suspect that personal identity is closer to human-ness than e.g. Steve Rayhawk and I do, for what it's worth.
* An existential risk occurs before Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.
* An existential ris
2Roko14y
You could still actually give a probability that you'll get revived. Yes, I agree that knowing what the outcome of AGI is is extremely important, but you should still just have a probability for that.
0Will_Newsome14y
Well, that gets tricky, because I have weak subjective evidence that I can't share with anyone else, and really odd ideas about it, that make me think that an FAI is the likely outcome. (Basically, I suspect something sorta kinda a little along the lines of me living in a fun theory universe. Or more precisely, I am a sub-computation of a longer computation that is optimized for fun, so that even though my life is sub-optimal at the moment I expect it to get a lot better in the future, and that the average of the whole computation's fun will turn out to be argmaxed. And my life right now rocks pretty hard anyway. I suspect other people have weaker versions of this [with different evidence from mine] with correspondingly weaker probability estimates for this kind of thing happening.) So if we assume with p=1 that a positive singularity will occur, for the sake of ease, that leaves about 2% that cryonics will work (5% that an FAI raises the cryonic dead, minus 3% that an FAI raises all the dead) if you die, times the probability that you die before the singularity (about 15% for most people [but about 2% for me]), which leads to 0.3% as my figure for someone with a sense of identity far stronger than me, Kaj, and many others, who would adjust downward from there (an FAI can be expected to extrapolate our minds and discover it should use the resources on making 10 people with values similar to ourselves instead, or something). If you say something like 5% positive singularity instead, then it comes out to 0.015%, or very roughly 1 in 7000 (although of course your decision theory should discount worlds in which you die no matter what anyway, so that the probability of actually living past the singularity shouldn't change your decision to sign up all that much). I suspect someone with different intuitions would give a very different answer, but it'll be hard to make headway in debate because it really is so non-technical. The reason I give extremely low probabilities for myself
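A minimal sketch of the arithmetic in the comment above, using only the figures Will states (the 5%, 3%, 15%, and 5%-singularity numbers); every input is his stated assumption, not an independent estimate:

```python
# Figures taken from the comment above; all are Will's stated assumptions.
p_fai_revives_cryonauts = 0.05   # FAI revives cryopreserved people specifically
p_fai_revives_everyone  = 0.03   # FAI reconstructs the dead regardless of cryonics
p_die_first_typical     = 0.15   # chance of dying before the singularity (most people)

# Marginal benefit of cryonics: revival that would not have happened anyway.
p_marginal = p_fai_revives_cryonauts - p_fai_revives_everyone        # 0.02

# Assuming a positive singularity with p = 1 "for the sake of ease":
p_saved_given_singularity = p_marginal * p_die_first_typical         # 0.003, i.e. 0.3%

# With only a 5% chance of a positive singularity instead:
p_saved = p_saved_given_singularity * 0.05                           # 0.00015
print(p_saved, round(1 / p_saved))   # 0.00015, ~6667 ("very roughly 1 in 7000")
```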
0Vladimir_Nesov14y
Hmm... Seems like crazy talk to me. It's your mind, tread softly.
0Will_Newsome14y
The ideas about fun theory are crazy talk indeed, but they're sort of tangential to my main points. I have much crazier ideas peppered throughout the comments of this post (very silly implications of decision theory in a level 4 multiverse that are almost assuredly wrong but interesting intuition pumps) and even crazier ideas in the notes I write to myself. Are you worried that this will lead to some sort of mental health danger, or what? I don't know how often high shock levels damage one's sanity to an appreciable degree.
1Vladimir_Nesov14y
It's not "shock levels" which are a problem, it's working in the "almost assuredly wrong" mode. If you yourself believe ideas you develop to be wrong, are they knowledge, are they progress? Do crackpots have "damaged sanity"? It's usually better to develop ideas on as firm ground as possible, working towards the unknown from statements you can rely on. Even in this mode will you often fail, but you'd be able to make gradual progress that won't be illusory. Not all questions are ready to be answered (or even asked).
0Roko14y
98% certain that the singularity will happen before you die (which could easily be 2070)? This seems like an unjustifiably high level of confidence.
0Will_Newsome14y
For what it's worth, the Uncertain Future application gives me a 99% chance of a singularity before 2070, if I recall correctly. The mean of my distribution is 2028. I really wish more SIAI members talked to each other about this! Estimates vary wildly, and I'm never sure if people are giving estimates taking into account their decision theory or not (that is, thinking 'We couldn't prevent a negative singularity if it was to occur in the next 10 years, so let's discount those worlds and exclude them from our probability estimates'.) I'm also not sure if people are giving far-off estimates because they don't want to think about the implications otherwise, or because they tried to build an FAI and it didn't work, or because they want to signal sophistication and sophisticated people don't predict crazy things happening very soon, or because they are taking an outside view of the problem, or because they've read the recent publications at the AGI conferences and various journals, thought about advances that need to be made, estimated the rate of progress, and determined a date using the inside view (like Steve Rayhawk, who gives a shorter time estimate than anyone else, or Shane Legg, who I've heard also gives a short estimate but I am not sure about that, or Ben Goertzel, who I am again not entirely sure about, or Juergen Schmidhuber, who seems to be predicting it soonish, or Eliezer, who used to have a soonish estimate with very wide tails but I have no idea what his thoughts are now). I've heard the guys at FHI also have distant estimates, and a lot of narrow AI people predict far-off AGI as well. Where are the 'singularity is far' people getting their predictions?
0Roko14y
UF is not accurate!
0Will_Newsome14y
True. But the mean of my distribution is still 2028 regardless of the inaccuracy of UF.
1Roko14y
The problem with the Uncertain Future is that it is a model of reality which allows you to play with the parameters of the model, but not the structure. For example, it has no option for "model uncertainty", e.g. the possibility that the assumptions it makes about the forms of probability distributions are incorrect. And a lot of these assumptions were made for the sake of tractability rather than realism. I think that the best way to use it is as an intuition pump for your own model, which you could make in Excel or in your head. Giving probabilities of 99% is a classic symptom of not having any model uncertainty.
0Will_Newsome14y
If Nick and I write some more posts I think this would be the theme. Structural uncertainty is hard to think around. Anyway, I got my singularity estimates by listening to lots of people working at SIAI and seeing whose points I found compelling. When I arrived at Benton I was thinking something like 2055. It's a little unsettling that the more arguments I hear from both sides, the nearer in the future my predictions get. I think my estimates are probably too biased towards Steve Rayhawk's, but this is because everyone else's estimates seem to take the form of outside-view considerations that I find weak.
0Roko14y
This seems to rely on your idea that, on reflection, humans probably don't care about themselves, i.e. if I reflected sufficiently hard, I would place zero terminal value on my own life. I wonder how you're so confident about this? Like, 95% confident that all humans would place zero terminal value on their own lives? Note also that it is possible that some but not all people would, on reflection, place zero value on their own lives.
0Will_Newsome14y
Not even close to zero, but less terminal value than you would assign to other things that an FAI could optimize for. I'm not sure how much extrapolated unity of mankind there would be in this regard. I suspect Eliezer or Anna would counter my 5% with a 95%, and I would Aumann to some extent, but I was giving my impression and not my belief. (I think that this is better practice at the start of a 'debate': otherwise you might update on the wrong expected evidence. EDIT: To be more clear, I wouldn't want to update on Eliezer's evidence if it was some sort of generalization from fictional evidence from Brennan's world or something, but I would want to update if he had a strong argument that identity has proven to be extremely important to all of human affairs since the dawn of civilization, which is entirely plausible.)
0Roko14y
It seems odd to me that out of the 10^40 atoms in the solar system, there would not be any left to revive cryo patients. My impression is that FAI would revive cryo patients, with probability 80%, the remaining 20% being for very odd scenarios that I just can't think of.
2Will_Newsome14y
I guess I'm saying the amount of atoms it takes to revive a cryo patient is vastly more wasteful than its weight in computronium. You're trading off one life for a huge amount of potential lives. A few people, like Alicorn if I understand her correctly, think that people who are already alive are worth a huge number of potential lives, but I don't quite understand that intuition. Is this a point of disagreement for us?
2Roko14y
Yeah, but the cryo patient could be run in software rather than in hardware, which would mean that it would be a rather insignificant amount of extra effort.
0Will_Newsome14y
Gah, sorry, I keep leaving things out. I'm thinking about the actual physical work of finding out where cryo patients are, scanning their brains, repairing the damage, and then running them. Mike Blume had a good argument against this point: proportionally, the startup cost of scanning a brain is not much at all compared to the infinity of years of actually running the computation. This is where I should be doing the math... so I'm going to think about it more and try to figure things out. Another point is that an AGI could gain access to infinite computing power in finite time, during which it could do everything, but I think I'm just confused about the nature of computations in a Tegmark multiverse here.
1Roko14y
I hadn't thought of that; certainly if the AI's mission was to run as many experience-moments as possible in the amount of space-time-energy it had, then it wouldn't revive cryo patients. Note that the same argument says that it would kill all existing persons rather than upload them, and re-use their mass and energy to run ems of generic happy people (maximizing experience moments without regard to any deontological constraints has some weird implications...)
0Will_Newsome14y
Yes, but this makes people flustered so I prefer not to bring it up as a possibility. I'm not sure if it was Bostrom or just generic SIAI thinking where I heard that an FAI might deconstruct us in order to go out into the universe, solve the problem of astronomical waste, and then run computations of us (or in this case generic transhumans) far in the future.
0Roko14y
Of course at this point, the terminology "Friendly" becomes misleading, and we should talk about a Goal-X-controlled-AGI, where Goal X is a variable for the goal that that AGI would optimize for. There is no unique value for X. Some have suggested the output of CEV as the goal system, but if you look at CEV in detail, you see that it is jam-packed with parameters, all of which make a difference to the actual output. I would personally lobby against the idea of an AGI that did crazy shit like killing existing people to save a few nanoseconds.
1Will_Newsome14y
Hm, I've noticed before that the term 'Friendly' is sort of vague. What would I call an AI that optimizes strictly for my goals (and if I care about others' goals, so be it)? A Will-AI? I've said a few times 'your Friendly is not my Friendly' but I think I was just redefining Friendliness in an incorrect way that Eliezer wouldn't endorse.
2Douglas_Knight14y
One could say "Friendly towards Will." But the problem of nailing down your goals seems to me much harder than the problem of negotiating goals between different people. Thus I don't see a problem of being vague about the target of Friendliness.
1Vladimir_Nesov14y
Agreed. And asking the question of what is preference of a specific person, represented in some formal language, seems to be a natural simplification of the problem statement, something that needs to be understood before the problem of preference aggregation can be approached.
2Roko14y
Beware of the urge to censor thoughts that disagree with authority. I personally agree that there is a serious issue here -- the issue of moral antirealism, which implies that there is no "canonical human notion of goodness", so the terminology "Friendly AI" is actually somewhat misleading, and it might be better to say "average human extrapolated morality AGI" when that's what we want to talk about, e.g. Then it sounds less onerous to say that you disagree with what an average human extrapolated morality AGI would do than that you disagree with what a "Friendly AI" would do, because most people on this forum disagree with averaged-out human morality (for example, the average human is a theist). Contrast:
0Vladimir_Nesov14y
"Friendly AI" is about as specific/ambiguous as "morality" - something humans mostly have in common, allowing for normal variation, not referring to details about specific people. As with preference (morality) of specific people, we can speak of FAI optimizing the world to preference of specific people. Naturally, for each given person it's preferable to launch a personal-FAI to a consensus-FAI.
2jimrandomh14y
I am reasonably confident that no such process can produce an entity that I would identify as myself. Being reconstructed from other peoples' memories means losing the memories of all inner thoughts, all times spent alone, and all times spent with people who have died or forgotten the occasion. That's too much lost for any sort of continuity of consciousness.
1Will_Newsome14y
Hm, well we can debate the magic powers a superintelligence possesses (whether or not it can raise the dead), but I think this would make Eliezer sad. I for one am not reasonably confident either way. I am not willing to put bounds on an entity that I am not sure won't get access to an infinite amount of computation in finite time. At any rate, it seems we have different boundaries around identity. I'm having trouble removing the confusion about identity from my calculations.
0Roko14y
You suspect that most people, upon reflection, won't care whether they live or die? I'm intrigued: what makes you think this?
0Vladimir_Nesov14y
Nope, "definition of identity" doesn't influence what actually happens as a result of your decision, and thus doesn't influence how good what happens will be. You are not really trying to figure out "How likely is it to survive as a result of signing up?", that's just an instrumental question that is supposed to be helpful, you are trying to figure out which decision you should make.
0Will_Newsome14y
Simply wrong. I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory. Did I misunderstand you? EDIT: Ah, I think I know what happened: Roko and I were talking about the probability of me being 'saved' by cryonics in the thread he linked to, but perhaps you missed that. Let me copy/paste something I said from this thread: "I tried to make it clear in my post and when talking to both you and Vladimir Nesov that I prefer talking about 'probability that I will get enough utility to justify cryonics upon reflection' instead of 'probability that cryonics will result in revival, independent of whether or not that will be considered a good thing upon reflection'. That's why I put in the abnormally important footnote." I don't think I emphasized this enough. My apologies. (I feel silly, because without this distinction you've probably been thinking I've been committing the mind projection fallacy this whole time, and I didn't notice.) Not sure I'm parsing this right. Yes, I am determining what decision I should make. The instrumental question is a part of that, but it is not the only consideration.
0Vladimir_Nesov14y
You haven't misunderstood me, but you need to pay attention to this question, because it's more or less a consensus on Less Wrong that your position expressed in the above quote is wrong. You should maybe ask around for clarification of this point, if you don't get a change of mind from discussion with me. You may try the metaethics sequence, and also/in particular these posts:
* http://lesswrong.com/lw/s6/probability_is_subjectively_objective/
* http://lesswrong.com/lw/si/math_is_subjunctively_objective/
* http://lesswrong.com/lw/sj/does_your_morality_care_what_you_think/
* http://lesswrong.com/lw/sw/morality_as_fixed_computation/
* http://lesswrong.com/lw/t0/abstracted_idealized_dynamics/
That preference is computed in the mind doesn't make it any less a part of the territory than anything else. This is just a piece of territory that happens to be currently located in human minds. (Well, not quite, but to a first approximation.) Your map may easily change even if the territory stays the same. This changes your belief, but this change doesn't influence what's true about the territory. Likewise, your estimate of how good situation X is may change, once you process new arguments or change your understanding of the situation, for example by observing new data, but that change of your belief doesn't influence how good X actually is. Morality is not a matter of interpretation.
0Will_Newsome14y
Before I spend a lot of effort trying to figure out where I went wrong (which I'm completely willing to do, because I read all of those posts and the metaethics sequence and figured I understood them), can you confirm that you read my EDIT above, and that the misunderstanding addressed there does not encompass the problem?
0Vladimir_Nesov14y
Now I have read the edit, but it doesn't seem to address the problem. Also, I don't see what you can use the concepts you bring up for, like "probability that I will get enough utility to justify cryonics upon reflection". If you expect to believe something, you should just believe it right away. See Conservation of expected evidence. But then, "probability this decision is right" is not something you can use for making the decision, not directly.
0Nick_Tarleton14y
This might not be the most useful concept, true, but the issue at hand is the meta-level one of people's possible overconfidence about it.
3Vladimir_Nesov14y
"Probability of signing up being good", especially obfuscated with "justified upon infinite reflection", being subtly similar to "probability of the decision to sign up being correct", is too much of a ruse to use without very careful elaboration. A decision can be absolutely, 99.999999% correct, while the probability of it being good remains at 1%, both known to the decider.
0Will_Newsome14y
So you read footnote 2 of the post and do not think it is a relevant and necessary distinction? And you read Steven's comment in the other thread where it seems he dissolved our disagreement and determined we were talking about different things? I know about the conservation of expected evidence. I understand and have demonstrated understanding of the content in the various links you've given me. I really doubt I've been making the obvious errors you accuse me of for the many months I've been conversing with people at SIAI (and at Less Wrong meetups and at the decision theory workshop) without anyone noticing. Here's a basic summary of what you seem to think I'm confused about: There is a broad concept of identity in my head. Given this concept of identity I do not want to sign up for cryonics. If this concept of identity changed such that the set of computations I identified with became smaller, then cryonics would become more appealing. I am talking about the probability of expected utility, not the probability of an event. The first is in the map (even if the map is in the territory, which I realize, of course), the second is in the territory. EDIT: I am treating considerations about identity as a preference: whether or not I should identify with any set of computations is my choice, but subject to change. I think that might be where we disagree: you think everybody will eventually agree what identity is, and that it will be considered a fact about which we can assign different probabilities, but not something subjectively determined.
2Vladimir_Nesov14y
That preference is yours and yours alone, without any community to share it, doesn't make its content any less of a fact than if you'd had a whole humanity of identical people to back it up. (This identity/probability discussion is tangential to a more focused question of correctness of choice.)
0Vladimir_Nesov14y
The easiest step is for you to look over the last two paragraphs of this comment and see if you agree with that. (Agree/disagree in what sense, if you suspect essential interpretational ambiguity.) I don't know why you brought up the concept of identity (or indeed cryonics) in the above, it wasn't part of this particular discussion.
0Will_Newsome14y
At first glance and 15 seconds of thinking, I agree, but: "but that change of your belief doesn't influence how good X actually is" is to me more like "but that change of your belief doesn't influence how good X will be considered upon an infinite amount of infinitely good reflection".
0Vladimir_Nesov14y
Now try to figure out what the question "What color is the sky, actually?" means, when compared with "How good is X, actually?" and your interpretation "How good will X seem after an infinite amount of infinitely good reflection?". The "infinitely good reflection" thing is a surrogate for the fact itself, no less in the first case, and no more in the second. If you essentially agree that there is a fact of the matter about whether a given decision is the right one, what did you mean by the following? You can't "assign utility as you please"; this is not a matter of choice. The decision is either correct or it isn't, and you can't make it correct or incorrect by willing it so. You may only work on figuring out which way it is, as with any other fact.
2Will_Newsome14y
Edit: adding a sentence in bold that is really important but that I failed to notice the first time. (Nick Tarleton alerted me to an error in this comment that I needed to fix.) Any intelligent agent will discover that the sky is blue. Not every intelligent agent will think that the blue sky is equally beautiful. Me, I like grey skies and rainy days. If I discover that I actually like blue skies at a later point, then that changes the perceived utility of seeing a grey sky relative to a blue one. The simple change in preference also changes my expected utility. Yes, maybe the new utility was the 'correct' utility all along, but how is that an argument against anything I've said in my posts or comments? I get the impression you consistently take the territory view where I take the map view, and I further think that the map view is way more useful for agents like me that aren't infinitely intelligent nor infinitely reflective. (Nick Tarleton disagrees about taking the map view and I am now reconsidering. He raises the important point that taking the territory view doesn't mean throwing out the map, and gives the map something to be about. I think he's probably right.) And the way one does this is by becoming good at luminosity and discovering what one's terminal values are. Yeah, maybe it turns out sufficiently intelligent agents all end up valuing the exact same thing, and FAI turns out to be really easy, but I do not buy it as an assertion.
0Vladimir_Nesov14y
This reads to me like See the error? That there are moral facts doesn't imply that everyone's preference is identical, that "all intelligent agents" will value the same thing. Every sane agent should agree on what is moral, but not every sane agent is moved by what is moral, some may be moved by what is prime or something, while agreeing with you that what is prime is often not moral. (See also this comment.)
2Blueberry14y
I'm a little confused about your "weight of a person" example because 'a' is ambiguous in English. Did you mean one specific person, or the weighing of different people? What if CEV doesn't exist, and there really are different groups of humans with different values? Is one set of values "moral" and the other "that other human thing that's analogous to morality but isn't morality"? Primeness is so different from morality that it's clear we're talking about two different things. But say we take what you're calling morality and modify it very slightly, only to the point where many humans still hold to the modified view. It's not clear to me that the agents will say "I'm moved by this modified view, not morality". Why wouldn't they say "No, this modification is the correct morality, and I am moved by morality!" I have read the metaethics sequence but don't claim to fully understand it, so feel free to point me to a particular part of it.
0Vladimir_Nesov14y
Of course different people have different values. These values might be similar, but they won't be identical. Yes, but what is "prime number"? Is it 5, or is it 7? 5 is clearly different from 7, although it's very similar to it in that it's also prime. Use the analogy of prime=moral and 5=Blueberry's values, 7=Will's values. Because that would be pointless disputing of definitions - clearly, different things are meant by word "morality" in your example.
0Blueberry14y
I see your point, but there is an obvious problem with this analogy: prime and nonprime are two discrete categories. But we can consider a continuum of values, ranging from something almost everyone agrees is moral, through values that are unusual or uncommon but still recognized as human values, all the way to completely alien values like paperclipping. My concern is that it's not clear where in the continuum the values stop being "moral" values, unlike with prime numbers.
0Vladimir_Nesov14y
It might be unclear where the line lies, but it shouldn't make the concept itself "fuzzy", merely not understood. What we talk about when we refer to a certain idea is always something specific, but it's not always clear what is implied by what we talk about. That different people can interpret the same words as referring to different ideas doesn't make any of these different ideas undefined. The failure to interpret the words in the same way is a failure of communication, not a characterization of the idea that failed to be communicated. I of course agree that "morality" admits a lot of similar interpretations, but I'd venture to say that "Blueberry's preference" does as well. It's an unsolved problem - a core question of Friendly AI - to formally define any of the concepts interpreting these words in a satisfactory way. The fuzziness in communication and elusiveness in formal understanding are relevant equally for the aggregate morality and personal preference, and so the individual/aggregate divide is not the point that particularly opposes the analogy.
0Blueberry14y
I'm still very confused. Do you think there is a clear line between what humans in general value (morality) and what other entities might value, and we just don't know where it is? Let's call the other side of the line 'schmorality'. So a paperclipper's values are schmoral. Is it possible that a human could have values on the other side of the line (schmoral values)? Suppose another entity, who is on the other side of the line, has a conversation with a human about a moral issue. Both entities engage in the same kind of reasoning, use the same kind of arguments and examples, so why is one reasoning called "moral reasoning" and the other just about values (schmoral reasoning)? Suppose I am right on the edge of the line. So my values are moral values, but a slight change makes these values schmoral values. From my point of view, these two sets of values are very close. Why do you give them completely different categories? And suppose my values change slightly over time, so I cross the line and back within a day. Do I suddenly stop caring about morality, then start again? This discontinuity seems very strange to me.
1Vladimir_Nesov14y
I don't say that any given concept is reasonable for all purposes, just that any concept has a very specific intended meaning, at the moment it's considered. The concept of morality can be characterized as, roughly, referring to human-like preference, or aggregate preference of humanity-like collections of individual preferences - this is a characterization resilient to some measure of ambiguity in interpretation. The concepts themselves can't be negotiated, they are set in stone by their intended meaning, though a different concept may be better for a given purpose.
0Blueberry14y
Thanks! That actually helped a lot.
1Nick_Tarleton14y
In this exchange Will, by "definition of identity", meant a part of preference, making the point that people might have varying preferences (this being the sense in which preference is "subjective") that make cryonics a good idea for some but not others. He read your response as a statement of something like moral realism/externalism; he intended his response to address this, though it was phrased confusingly.
0Vladimir_Nesov14y
That would be a potentially defensible view (What are the causes of variation? How do we know it's there?), but I'm not sure it's Will's (and using the word "definition" in this sense goes very much against the definition of "definition").
2Will_Newsome14y
Similar to what I think JoshuaZ was getting at, signing up for cryonics is a decently cheap signal of your rationality and willingness to take weird ideas seriously, and it's especially cheap for young people like me who might never take advantage of the 'real' use of cryonics.
1JoshuaZ14y
Really? Even if you buy into Will's estimate, there are at least three arguments that are not weak:
1) The expected utility argument (I presented arguments above for why this fails, but it isn't completely clear that those rebuttals are valid).
2) One might think that buying into cryonics helps force people (including oneself) to think about the future in a way that produces positive utility.
3) One gets positive utility from the hope that one might survive using cryonics.
Note that all three of these are fairly standard pro-cryonics arguments that remain valid even with the low probability estimate made by Will.
1Roko14y
None of those hold for p = 1 in a million. Expected utility doesn't hold because you can use the money to give yourself more than a 1-in-a-million increase in your chance of survival to the singularity, for example by buying 9000 lottery tickets and funding SIAI if you win. 1 in a million is really small.
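A back-of-envelope version of this comparison. Every figure other than Will's 10^-6 is a hypothetical assumption introduced purely for illustration; the thread gives neither jackpot odds nor a survival boost from a funded SIAI:

```python
# All figures except p_cryonics are hypothetical assumptions for illustration.
p_cryonics = 1e-6                    # Will's estimate that cryonics saves him
n_tickets = 9000                     # Roko's example
p_win_per_ticket = 1 / 175_000_000   # assumed jackpot odds
p_survival_boost_if_won = 0.05       # assumed boost from funding SIAI with the winnings

expected_boost = n_tickets * p_win_per_ticket * p_survival_boost_if_won
print(expected_boost)                # ~2.6e-6
print(expected_boost > p_cryonics)   # True under these assumptions
```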
0JoshuaZ14y
That really depends a lot on the expected utility. Moreover, argument 2 above (getting people to think about long-term prospects) has little connection to the value of p.
1Roko14y
The point about thinking more about the future with cryo is that you expect to be there. p=1 in 1 million means you don't expect to be there.
0JoshuaZ14y
Even a small chance that you will be there helps put people in the mind-set to think long-term.
0timtyler14y
Re: "whether it is plausible to rationally reject it" Of course people can plausibly rationally reject cryonics! Surely nobody has been silly enough to argue that cryonics makes good financial sense - irrespective of your goals and circumstances.
1Roko14y
If your goals don't include self-preservation, then it is not for you.
-1timtyler14y
In biology, individual self-preservation is an emergent subsidiary goal - what is really important is genetic self-preservation. Organisms face a constant trade-off - whether to use resources now to reproduce, or whether to invest them in self-perpetuation - in the hope of finding a better chance to reproduce in the future. Calorie restriction and cryonics are examples of this second option - sacrificing current potential for the sake of possible future gains.
5Morendil14y
Evolution faces this trade-off. Individual organisms are just stuck with trade-offs already made, and (if they happen to be endowed with explicit motivations) may be motivated by something quite other than "a better chance to reproduce in the future".
-2timtyler14y
Organisms choose - e.g. they choose whether to do calorie restriction - which diverts resources from reproductive programs to maintenance ones. They choose whether to divert resources in the direction of cryonics companies as well.
0Morendil14y
I'm not disputing that organisms choose. I'm disputing that organisms necessarily have reproductive programs. (You can only face a trade-off between two goals if you value both goals to start with.) Some organisms may value self-preservation, and value reproduction not at all (or only insofar as they view it as a form of self-preservation).
0timtyler14y
Not all organisms choose - for example, some have strategies hard-wired into them - and others are broken.

This post seems to focus too much on Singularity-related issues as alternative arguments. Thus, one might think that if one assigns the Singularity a low probability, one should definitely sign up for cryonics. I'm therefore going to suggest a few arguments against cryonics that may be relevant:

First, there are other serious existential threats to humans. Many don't even arise from our technology. Large asteroids would be an obvious example. Gamma ray bursts and nearby stars going supernova are other risks. (Betelgeuse is a likely candidate for a nearby supernova...

5orthonormal14y
And if the expected utility of cryonics is simply a very large yet finite positive quantity?
2JoshuaZ14y
In that case, arguments that cryonics is intrinsically the better choice become much more dependent on specific estimates of utility and probability.
7Vladimir_Nesov14y
And so they should.

It would be interesting to see a more thorough analysis of whether the "rational" objections to cryo actually work.

For example, the idea that money is better spent donated to some x-risk org than to your own preservation deserves closer scrutiny. Consider that cryo is cheap ($1 a day) for the young, and that getting cryo to go mainstream would be a strong win as far as existential risk reduction is concerned (because then the public at large would have a reason to care about the future) and as far as rationality is concerned.

5timtyler14y
Most people already have a reason to care about the future - since it contains their relatives and descendants - and those are among the things that they say they care about. If you are totally sterile - and have no living relatives - cryonics might seem like a reasonable way of perpetuating your essence - but for most others, there are more conventional options.
5Roko14y
Interest rates over the past 20 years have been about 7%, implying that people's half-life of concern for the future is only about 15 years. I think the reason people say they care about their children's future while actual interest rates set a concern half-life of about 15 years is that people's far-mode verbalizations do not govern their behavior that much. Cryo would give people a strong selfish interest in the future, and since the psychological time between freezing and revival is zero, discount rates wouldn't hurt so much. Let me throw out the figure of 100 years as the kind of timescale of concern that's required.
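A minimal sketch (not from the original thread) of the implied-half-life arithmetic being debated here; for what it's worth, a constant 7% annual discount rate works out to a half-life nearer 10 years, while a 15-year half-life corresponds to a rate of roughly 4.7%:

```python
import math

def concern_half_life(annual_rate: float) -> float:
    """Years until a constant exponential discount halves the weight placed on the future."""
    return math.log(2) / math.log(1 + annual_rate)

print(concern_half_life(0.07))   # ~10.2 years at a 7% annual rate
print(concern_half_life(0.047))  # ~15.1 years -- the rate a 15-year half-life would imply
```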
5taw14y
This is plain wrong. Most of that rate is inflation premium (the premium you need to be paid for inflation is higher than actual inflation, because you also bear the entire risk of inflation coming in higher than predicted, and it cannot really come in much lower than predicted - it's not normally distributed). Inflation-adjusted US treasury bonds have had rates like 1.68% a year over the last 12 years, and never really got much higher than 3%. For most interest rates, like the UK ones you quote, there's non-negligible currency-exchange risk and default risk on top of all that.
1Vladimir_M14y
taw: Not to mention that even these figures are suspect. There is no single obvious or objectively correct way to calculate the numbers for inflation-adjustment, and the methods actually used are by no means clear, transparent, and free from political pressures. Ultimately, over a longer period of time, these numbers have little to no coherent meaning in any case.
0Roko14y
It is true that you have to adjust for inflation. 1.68% seems low to me. Remember that those bonds may sell at less than their face value, muddying the calculation. This article quotes 7% above inflation for equity.
1taw14y
It seems low but it's correct. Risk-free interest rates are very, very low. Individual stocks carry very high risk, so this is nowhere near a correct calculation. And even if you want to invest in an S&P index - notice the date: 2007. This is a typical survivorship-bias article from that time. In many countries stock markets crashed hard and failed to rise for decades - not just tiny countries, but huge economies like Japan too. And by 2010 the same is true of the United States as well (and it would be even worse if it weren't for de facto massive taxpayer subsidies). Here's Wikipedia: "Empirically, over the past 40 years (1969–2009), there has been no significant equity premium in (US) stocks." This wasn't true back in 2007.
1Roko14y
Actually, yes, there is such a web app. It comes out at a rate of 4.79% p.a. if you reinvest dividends, and 1.6% if you don't, after adjusting for inflation. If you're aiming to save efficiently for the future, you would reinvest dividends. 1.0479^41 ≈ 6.81, so your discount factor over 41 years is pretty huge. For 82 years that would be a factor of about 46, and for 100 years a factor of about 107.
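For reference, a small sketch of the compounding behind the 6.8 / 46 / 107 factors quoted above (4.79% is the dividends-reinvested real rate Roko cites):

```python
def growth_factor(real_rate_pct: float, years: int) -> float:
    """Real growth factor from compounding at a constant annual rate."""
    return (1 + real_rate_pct / 100) ** years

for years in (41, 82, 100):
    print(years, round(growth_factor(4.79, years), 1))
# 41 -> ~6.8, 82 -> ~46.4, 100 -> ~107.6
```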
6taw14y
This is all survivorship bias and nothing more; many other stock exchanges crashed completely or had much lower returns, like Japan's.
1SilasBarta14y
And I should add that markets are wickedly anti-inductive. With all the people being prodded into the stock market by tax policies and "finance gurus" ... yeah, the risk is being underpriced. Also, there needs to be a big shift, probably involving a crisis, before risk-free rates actually make up for taxation, inflation, and sovereign risk. After that happens, I'll be confident the return on capital will be reasonable again.
0Roko14y
I presume that you mean cases where some violent upheaval caused property-rights violations, followed by the closing of the relevant exchange? I agree that this is a significant problem. What is the real survival ratio for exchanges between 1870 and 2010? However, let us return to the original point: that cryo would make people invest more in the future. Suppose I get a cryo contract and expect to be reanimated 300 years hence. Suppose that I am considering whether to invest in stocks, and I expect 33% of major exchanges to actually return my money if I am reanimated. I split my money between, say, 10 exchanges, and in those that survive, I get 1.05^300, or roughly 2,200,000 times what I invested - amply making up for the exchanges that don't survive.
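A minimal sketch of the expected payoff per dollar under the assumptions stated in this comment (5% real return over 300 years, with only a third of exchanges surviving and honouring the claim); the numbers are purely illustrative:

```python
def expected_multiple(annual_rate: float, years: int, p_exchange_survives: float) -> float:
    """Expected payoff per dollar invested, spread across exchanges of which only
    a fraction survive and return the money."""
    return p_exchange_survives * (1 + annual_rate) ** years

print(f"{expected_multiple(0.05, 300, 1/3):.3g}")  # ~7.6e5 per dollar, dwarfing losses on dead exchanges
```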
0Roko14y
So are you saying that the S&P returned 1.0168^41 times more than you invested, if you invested in 1969 and pulled out today? Is there a web app that we can test that on?
5timtyler14y
Levels of concern about the future vary between individuals - whereas interest rates are a property of society. Surely these things are not connected! High interest rates do not reflect a lack of concern about the future. They just illustrate how much money your government is printing. Provided you don't invest in that currency, that matters rather little. I agree that cryonics would make people care about the future more. Though IMO most of the problems with lack of planning are more to do with the shortcomings of modern political systems than they are to do with voters not caring about the future. The problem with cryonics is the cost. You might care more, but you can influence less - because you no longer have the cryonics money. If you can't think of any more worthwhile things to spend your money on, go for it.
0Larks14y
Real interest rates should be fairly constant (nominal interest rates will of course change with inflation), and reflect the price the marginal saver needs in order to postpone consumption, and the highest price the marginal borrower will pay to bring his consumption forward. If everyone had very low discount rates, you wouldn't need to offer savers so much, and borrowers would consider the costs more prohibitive, so rates would fall.
5taw14y
They're nothing of the kind. See this. Inflation-adjusted, as-risk-free-as-it-gets rates vary between 0.2% and 3.4% per year. This isn't about discount rates, it's about the supply and demand of investment money, and the financial sector essentially erases any connection with people's discount rates.
2Larks14y
Point taken; I concede. Evidently saving/borrowing rates are sticky, or low enough not to be relevant.
0timtyler14y
Perhaps decide to use gold, then. Your society's interest rate then becomes irrelevant to you - and you are free to care about the future as much - or as little - as you like. Interest rates just do not reflect people's level of concern about the future. Your money might be worth a lot less in 50 years - but the same is not necessarily true of your investments. So - despite all the discussion of interest rates - the topic is an irrelevant digression, apparently introduced through fallacious reasoning.
4Will_Newsome14y
Good point: mainstream cryonics would be a big step towards raising the sanity waterline, which may end up being a prerequisite to reducing various kinds of existential risk. However, I think that the causal relationship goes the other way, and that raising the sanity waterline comes first, and cryonics second: if you can get the average person across the inferential distance to seeing cryonics as reasonable, you can most likely get them across the inferential distance to seeing existential risk as really flippin' important. (I should take the advice of my own post here and note that I am sure there are really strong arguments against the idea that working to reduce existential risk is important, or at least against having much certainty that reducing existential risk will have been the correct thing to do upon reflection, at the very least on a personal level.) Nonetheless, I agree further analysis is necessary, though difficult.
4Roko14y
But how do we know that's the way it will pan out? Raising the sanity waterline is HARD. SUPER-DUPER HARD. Like, you probably couldn't make much of a dent even if you had a cool $10 million in your pocket. An alternative scenario is that cryonics gets popular without any "increases in general sanity": for example, the LW/OB communities give the cryo companies a large increase in sales and a larger flow of philanthropy, which allows them to employ a marketing consultancy to market cryonics to exactly the demographic that is already signing up. Additional signups would then come not from increased population sanity, but from marketing cryo so that 20% of those who are sane enough to sign up hear about it, rather than 1%. I claim that your $10M would be able to increase cryo signups by a factor of 20, but probably not dent sanity.
5Will_Newsome14y
Your original point was that "getting cryo to go mainstream would be a strong win as far as existential risk reduction is concerned (because then the public at large would have a reason to care about the future) and as far as rationality is concerned", in which case your above comment is interesting, but tangential to what we were discussing previously. I agree that getting people to sign up for cryonics will almost assuredly get more people to sign up for cryonics (barring legal issues becoming more salient and thus potentially more restrictive as cryonics becomes more popular, or bad stories publicized whether true or false), but "because then the public at large would have a reason to care about the future" does not seem to be a strong reason to expect existential risk reduction as a result (one counterargument being the one raised by timtyler in this thread). You have to connect cryonics with existential risk reduction, and the key isn't futurism, but strong epistemic rationality. Sure, you could also get interest sparked via memetics, but I don't think the most cost-effective way to do so would be investment in cryonics as opposed to, say, billboards proclaiming 'Existential risks are even more bad than marijuana: talk to your kids.' Again, my intuitions are totally uncertain about this point, but it seems to me that the option a) 10 million dollars -> cryonics investment -> increased awareness in futurism -> increased awareness in existential risk reduction, is most likely inferior to option b) 10 million dollars -> any other memetic strategy -> increased awareness in existential risk reduction.
0Roko14y
It is true that there are probably better ways out there to reduce x-risk than via cryo, i.e. the first $10M you have should go into other stuff, so the argument would carry for a strict altruist to not get cryo. However, the fact that cryo is both cheap and useful in and of itself means that the degree of self-sacrifice required to decide against it is pretty high. For example, your $1 a day on cryo provides the following benefits to x-risk:
* potentially increased personal commitment from you
* network effects causing others to be more likely to sign up and therefore not die, and potentially to be more concerned and committed
* revenue and increased numbers/credibility for cryo companies
* potentially increased rationality, because you expect more to actually experience the future
Now you could sacrifice your $1 a day and get more x-risk reduction by spending it on direct x-risk efforts (in addition to the existing time and money you are putting that way), BUT if you're going to do that, then why not sacrifice another marginal $1 a day of food/entertainment money? Benton House has not yet reached the level of eating the very cheapest possible food and doesn't yet spend $0 per person per day on luxuries. And if you continue to spend more than $1 a day on food and luxuries, do you really value your life at less than one Hershey bar a day? I think that there is another explanation: people are using extreme altruism as a cover for their own irrationality, and if a situation came up where they could either contribute net +$9000 (the cost of cryo) to x-risk right now but die, OR not die, they would choose to not die. In fact, I believe that a LW commenter has worked out how to sacrifice your life for a gain of a whole $1,000,000 to x-risk using life insurance and suicide. As far as I know, people who don't sign up for cryo for altruistic reasons are not exactly flocking to this option. (EDIT: I'll note that this comment does constitute a changing argument in res
6Will_Newsome14y
I think the correct question here is instead "Do you really value a very, very small chance that having signed up for cryonics leads to huge changes in your expected utility in some distant future, across unfathomable multiverses, more than an assured small amount of utility 30 minutes from now?" I do not think the answer is obvious, but I lean towards avoiding long-term commitments until I better understand the issues. Yes, a very, very, very tiny amount of me is dying every day due to freak kitchen accidents, but that much of my measure is so seemingly negligible that I don't feel too horrible trading it off for more thinking time and half a Hershey's bar. The reasons you gave for spending a dollar a day on cryonics seem perfectly reasonable, and I have spent a considerable amount of time thinking about them. Nonetheless, I have yet to be convinced that I would want to sign up for cryonics as anything more than a credible signal of extreme rationality. From a purely intuitive standpoint this seems justified. I'm 18 years old and the singularity seems near. I have measure to burn.
6Roko14y
Can you give me a number? Maybe we disagree because of differing probability estimates that cryo will save you.
3Will_Newsome14y
Perhaps. I think a singularity is more likely to occur before I die (in most universes, anyway). With advancing life extension technology, good genes, and a disposition to be reasonably careful with my life, I plan on living pretty much indefinitely. I doubt cryonics has any effect at all on these universes for me personally. Beyond that, I do not have a strong sense of identity, and my preferences are not mostly about personal gain, so universes where I do die do not seem horribly tragic, especially if I can write down a list of my values for future generations (or a future FAI) to consider and do with as they wish. So basically... (far) less than a 1% chance of saving 'me', but even then, I don't have strong preferences for being saved. I think that the technologies are totally feasible, and I am less pessimistic than others about whether Alcor and CI will survive the next few decades and do well. However, I think larger considerations like life extension technology, uFAI or FAI, MNT, bioweaponry, et cetera, simply render the cryopreservation / no cryopreservation question both difficult and insignificant for me personally. (Again, I'm 18; these arguments do not hold equally well for people who are older than me.)
8Airedale14y
When I read this, two images popped unbidden into my mind: 1) you wanting to walk over the not-that-stable log over the stream with the jagged rocks in it and 2) you wanting to climb out on the ledge at Benton House to get the ball. I suppose one person's "reasonably careful" is another person's "needlessly risky."
2Will_Newsome14y
This comment inspired me to draft a post about how much quantum measure is lost doing various things, so that people can more easily see whether or not a certain activity (like driving to the store for food once a week instead of having it delivered) is 'worth it'.
1Will_Newsome14y
Ha, good times. :) But being careful with one's life and being careful with one's limbs are two very different things. I may be stupid, but I'm not stupid.
3Jonathan_Graehl14y
Unless you're wearing a helmet, moderate falls that 99+% of the time just result in a few sprains/breaks may, <1% of the time, give permanent brain damage (mostly I'm thinking of hard objects' edges striking the head). Maybe my estimation is skewed by fictional evidence.
3Will_Newsome14y
So a 1 in 100 chance of falling and a roughly 1 in 1,000 chance of brain damage conditional on that (I'd be really surprised if it was higher than that; biased reporting and whatnot) is about a 1 in 100,000 chance of severe brain damage. I have put myself in such situations roughly... 10 times in my life. I think car accidents when constantly driving between SFO and Silicon Valley are a more likely cause of death, but I don't have the statistics on hand.
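Spelling out the chain of guesses in this comment as a quick sketch (all of the inputs are Will's own rough numbers from the exchange above, not data):

```python
p_fall = 1 / 100                 # guessed chance of actually falling on a given risky climb
p_damage_given_fall = 1 / 1000   # guessed chance a fall causes severe brain damage
p_per_episode = p_fall * p_damage_given_fall          # 1 in 100,000
episodes = 10                                         # rough count of such situations so far
p_lifetime = 1 - (1 - p_per_episode) ** episodes      # ~1 in 10,000 across all of them
print(p_per_episode, round(p_lifetime, 7))
```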
0Jonathan_Graehl14y
Good point about car risks. Sadly, I was considerably less cautious when I was younger - when I had more to lose. I imagine this is often the case.
2Roko14y
How much less? 0? 10^-1000? [It is perfectly OK for you to endorse the position of not caring much about yourself whilst still acknowledging the objective facts about cryo, even if they seem to imply that cryo could be used relatively effectively to save you ... facts != values ...]
0Will_Newsome14y
Hm, thanks for making me really think about it, and not letting me slide by without doing the calculation. Given my preferences, about which I am not logically omniscient, and given my structural uncertainty around these issues, of which there is much, I think my 50 percent confidence interval is between .00001% (1 in 10 million) and .01% (1 in ten thousand).
1Roko14y
shouldn't probabilities just be numbers? i.e. just integrate over the probability distribution of what you think the probability is.
0Will_Newsome14y
Oh, should they? I'm the first to admit that I sorely lack in knowledge of probability theory. I thought it was better to give a distribution here to indicate my level of uncertainty as well as my best guess (precision as well as accuracy).
4orthonormal14y
Contra Roko, it's OK for a Bayesian to talk in terms of a probability distribution on the probability of an event. (However, Roko is right that in decision problems, the mean value of that probability distribution is quite an important thing.)
0Roko14y
This would be true if you were estimating the value of a real-world parameter like the length of a rod. However, for a probability, you just give a single number, which is representative of the odds you would bet at. If you have several conflicting intuitions about what that number should be, form a weighted average of them, weighted by how much you trust each intuition or method for getting the number.
0Will_Newsome14y
Ahhh, makes sense, thanks. In that case I'd put my best guess at around 1 in a million.
2Roko14y
For small probabilities, the weighted average calculation is dominated by the high-probability possibilities - if your 50% confidence interval went up to 1 in 10,000, then 25% of the probability mass is to the right of 1 in 10,000, so you can't say anything less than (0.75)×0 + (0.25)×(1/10,000) = 1/40,000.
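A minimal sketch of this lower-bound argument, using the interval Will gave above: if a quarter of the probability mass sits at or above the interval's upper end, the mean of the distribution cannot fall below a quarter of that value.

```python
# Lower bound on the mean, using only the upper end of the stated 50% interval.
mass_at_or_above_upper = 0.25    # mass to the right of the interval
upper_end = 1e-4                 # 1 in 10,000
mean_lower_bound = mass_at_or_above_upper * upper_end
print(mean_lower_bound, 1 / mean_lower_bound)   # 2.5e-05, i.e. at least 1 in 40,000
```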
0Will_Newsome14y
I wasn't using a normal distribution in my original formulation, though: the mean of the picture in my head was around 1 in a million with a longer tail to the right (towards 100%) and a shorter tail to the left (towards 0%) (on a log scale?). It could be that I was doing something stupid by making one tail longer than the other?
0Jonathan_Graehl14y
It would only be suspicious if your resulting probability were a sum of very many independent, similarly probable alternatives (such sums do look normal even if the individual alternatives aren't).
0Vladimir_Nesov14y
I'd say your preference can't possibly influence the probability of this event. To clear the air, can you explain how taking your preference into account influences the estimate? Better, how does the estimate break down over the different defeaters (events making the positive outcome impossible)?
2Will_Newsome14y
Sorry, I should have been more clear: my preferences influence the possible interpretations of the word 'save'. I wouldn't consider surviving indefinitely but without my preferences being systematically fulfilled 'saved', for instance; more like damned.
0kpreid14y
I like this turn of phrase.
1[anonymous]14y
It's cheap because you will not actually die in the near future. ETA: though it sounds as if you're paying mostly to be allowed to keep having cheap life insurance in the future?

Here's another possible objection to cryonics:

If an Unfriendly AI Singularity happens while you are vitrified, it's not just that you will fail to be revived - perhaps the AI will scan and upload you and abuse you in some way.

"There is life eternal within the eater of souls. Nobody is ever forgotten or allowed to rest in peace. They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life." OK, that's generalising from fictional evidence, but consider the following scenario:

Suppose the Singularity d... (read more)

3humpolec14y
What you're describing is an evil AI, not just an unFriendly one - unFriendly AI doesn't care about your values. Wouldn't an evil AI be even harder to achieve than a Friendly one?
2dripgrind14y
An unFriendly AI doesn't necessarily care about human values - but I can't see why, if it was based on human neural architecture, it might not exhibit good old-fashioned human values like empathy - or sadism. I'm not saying that AI would have to be based on human uploads, but it seems like a credible path to superhuman AI. Why do you think that an evil AI would be harder to achieve than a Friendly one?
5humpolec14y
Agreed, AI based on a human upload gives no guarantee about its values... actually right now I have no idea about how Friendliness of such AI could be ensured. Maybe not harder, but less probable - 'paperclipping' seems to be a more likely failure of friendliness than AI wanting to torture humans forever. I have to admit I haven't thought much about this, though.
8Baughn14y
Paperclipping is a relatively simple failure. The difference between paperclipping and evil is mainly just that - a matter of complexity. Evil is complex, turning the universe into tuna is decidedly not. On the scale of friendliness, I ironically see an "evil" failure (meaning, among other things, that we're still in some sense around to notice it being evil) becoming more likely as friendliness increases. As we try to implement our own values, failures become more complex, and less likely to be total - thus letting us stick around to see them.
1wedrifid12y
"Where in this code do I need to put this "-ve" sign again?" The two are approximately equal in difficulty, assuming equivalent flexibility in how "Evil" or "Friendly" it would have to be to qualify for the definition.

Good post. People focus only on the monetary cost of cryonics, but my impression is there are also substantial costs from hassle and perceived weirdness.

1Torben14y
Really? I may be lucky, but I have quite the opposite experience. Of course, I haven't signed up due to my place of residence but I have mentioned it to friends and family and they don't seem to think much about it.

One easily falls into the trap of thinking that disagreements with other people happen because the others are irrational in simple, obviously flawed ways. It's harder to avoid the fundamental attribution error and the typical mind fallacy, and admit that the others may have a non-insane reason for their disagreement.

Harder or not, which is actually right? This is not about signaling one's ability to do the harder thing.

The reasons you listed are not the ones moving most people to not sign up for cryonics. Most people, as you mention at the beginning, simply don't take the possibility seriously enough to even consider it in detail.

1Will_Newsome14y
I agree, but there exists a non-negligible number of people who have not-obviously-illegitimate reasons for not being signed up: not most of the people in the world, and maybe not most of Less Wrong, but at least a sizable portion of Less Wrongers (and most of the people I interact with on a daily basis at SIAI). It seems that somewhere along the line people started to misinterpret Eliezer (or something) and group the reasonable and unreasonable non-cryonauts together.
1Vladimir_Nesov14y
Then state the scope of the claim explicitly in the post.
0Will_Newsome14y
Bolded and italicized; thanks for the criticism, especially as this is my first post on Less Wrong.

I think cryonics is a great idea and should be part of health care. However, $50,000 is a lot of money to me and I'm reluctant to spend money on life insurance, which except in the case of cryonics is almost always a bad bet.

I would like my brain to be vitrified if I am dead, but I would prefer not to pay $50,000 for cryonics in the universes where I live forever, die to existential catastrophe, or where cryonics just doesn't work.

What if I specify in my (currently non-existent) cryonics optimized living will that up to $100,000 from my estate is to be used to pay for cryonics? It's not nearly as secure as a real cryonics contract, but it has the benefit of not costing $50,000.

4khafra14y
Alcor recommends not funding out of your estate, because in the current legal system any living person with the slightest claim will take precedence over the decedent's wishes. Even if the money eventually goes to Alcor, it'll be after 8 months in probate court; and your grey matter's unlikely to be in very good condition for preservation at that point.
3Kevin14y
I know they don't recommend this, but I suspect a sufficiently good will and trust setup would have a significant probability of working, and the legal precedent set by that would be beneficial to other potential cryonauts.
2Will_Newsome14y
This sounds like a great practical plan if you can pull it off, and, given your values, possibly an obviously correct course of action. However, it does not answer the question of whether being vitrified after death will be seen as correct upon reflection. The distinction here is important.
0Blueberry14y
I'm not sure if cryonics organizations would support that option, as it would be easier for potential opponents to defeat. Also, it wouldn't protect you against accidental death, if I'm understanding correctly, only against an illness that incapacitated you.

I'm surprised that you didn't bring up what I find to be a fairly obvious problem with Cryonics: what if nobody feels like unthawing you? Of course, not having followed this dialogue I'm probably missing some equally obvious counter to this argument.

3Bo10201014y
If I were defending cryonics, I would say that a small chance of immortality beats sure death hands-down. It sounds like Pascal's Wager (small chance at success, potentially infinite payoff), but it doesn't fail for the same reasons Pascal's Wager does (Pascal's gambit for one religion would work just as well for any other one) - as discussed here a while back.
-4timtyler14y
Re: "If I were defending cryonics, I would say that a small chance of immortality beats sure death hands-down." That's what advocates usually say. It assumes that the goal of organisms is not to die - which is not a biologically realistic assumption.

Hi, I'm pretty new here too. I hope I'm not repeating an old argument, but suspect I am; feel free to answer with a pointer instead of a direct rebuttal.

I'm surprised that no-one's mentioned the cost of cryonics in relation to the reduction in net human suffering that could come from spending the money on poverty relief instead. For (say) USD $50k, I could save around 100 lives ($500/life is a current rough estimate for lifesaving aid to people in extreme poverty), or could dramatically increase the quality of life of 1000 people (for example, cataract o... (read more)

6knb14y
This is also an argument against going to movies, buying coffee, owning a car, or having a child. In fact, this is an argument against doing anything beyond living at the absolute minimum threshold of life, while donating the rest of your income to charity. How can you say it's moral to value your own comfort as being worth more than 100-1000 other humans? They just did worse at the birth lottery, right?
3cjb14y
It's not really an argument against those other things, although I do indeed try to avoid some luxuries, or to match the amount I spend on them with a donation to an effective aid organization. What I think you've missed is that many of the items you mention are essential for me to continue having and being motivated in a job that pays me well -- well enough to make donations to aid organizations that accomplish far more than I could if I just took a plane to a place of extreme poverty and attempted to help using my own skills directly. If there's a better way to help alleviate poverty than donating a percentage of my developed-world salary to effective charities every year, I haven't found it yet.
5knb14y
Ah, I see. So when you spend money on yourself, it's just to motivate yourself for more charitable labor. But when those weird cryonauts spend money on themselves, they're being selfish! How wonderful to be you.
-1cjb14y
No, I'm arguing that it would be selfish for me to spend money on myself, if that money was on cryonics, where selfishness is defined as (a) spending an amount of money that could relieve a great amount of suffering, (b) on something that doesn't relate to retaining my ability to get a paycheck. One weakness in this argument is that there could be a person who is so fearful of death that they can't live effectively without the comfort that signing up for cryonics gives them. In that circumstance, I couldn't use this criticism.
3Blueberry14y
Cryonics is comparable to CPR or other emergency medical care, in that it gives you extra life after you might otherwise die. Of course it's selfish, in the sense that you're taking care of yourself first, to spend money on your medical care, but cryonics does relate to your ability to get a paycheck (after your revival). To be consistent, are you reducing your medical expenses in other ways?
-1cjb14y
.. at a probability of (for the sake of argument) one in a million. Do I participate in other examples of medical care that might save my life with probability one in a million (even if they don't cost any money)? No, not that I can think of.
0Morendil14y
Did you ever get any vaccination shots? Some of these are for diseases that have become quite rare.
0cjb14y
That's true. I didn't spend my own money on them (I grew up in the UK), and they didn't cost very much in comparison, but I agree that it's a good example of a medical long shot.
1Morendil14y
Yep, the cost and especially the administrative hassles are, in comparison to the probability considerations, closer to the true reason I (for instance) am not signed up yet, in spite of seeing it as my best shot at ensuring a long life. To be fair, vaccination is also a long shot in terms of frequency, but it is definitely proven to work with close to certainty on any given patient. Cryonics is a long shot intrinsically. But it might not be if more were invested in researching it, and more might be invested if cryonics were already used on a precautionary basis in situations where it would also save money (e.g. death row inmates and terminal patients) and risk nothing of significance (since no better outcome than death can be expected). In that sense it seems obviously rational to advocate cryonics as a method of assisted suicide, and only the "weirdness factor", religious-moralistic hangups, and legislative inertia can explain the reluctance to adopt it more broadly.
4nazgulnarsil14y
Like this: I value my subjective experience more than even hundreds of thousands of other similar-but-not-me subjective experiences. Additionally, your argument applies to generic goods you choose over saving people, not just cryonics.
-1cjb14y
Well, sure, but I asked how it could be moral, not how you can evade the question by deciding that you don't have any responsibilities to anyone.
0nazgulnarsil14y
What are morals? I have preferences. Sometimes they coincide with other people's preferences and sometimes they conflict. When they conflict in socially unacceptable ways, I seek ways to hide or downplay them.
2Will_Newsome14y
One can expect to live a life at least 100-1000 times longer than those other poor people, or a life that has at least 100-1000 times as much positive utility; and there are also the points made in the other comments. Although this argument is a decent one for some people, it's much more often the product of motivated cognition than of carefully looking at the issues, so I did not include it in the post.
0cjb14y
Thanks for the reply. .. when you say "can expect to", what do you mean? Do you mean "it is extremely likely that.."? That's the problem. If it was a sure deal, it would be logical to spend the money on it -- but in fact it's extremely uncertain, whereas the $50 being asked for by a group like Aravind Eye Hospital to directly fund a cataract operation is (close to) relieving significant suffering with a probability of 1.

Another argument against cryonics is just that it's relatively unlikely to work (= lead to your happy revival) since it requires several things to go right. Robin's net present value calculation of the expected benefits of cryonic preservation isn't all that different from the cost of cryonics. With slightly different estimates for some of the numbers, it would be easy to end up with an expected benefit that's less than the cost.

0Will_Newsome14y
Given his future predictions, maybe, but the future predictions of a lot of smart people (especially singularitarians) can lead to drastically different expected values which often give the proposition of signing up for cryonics a Pascalian flavor.

et cetera, are not well-understood enough to make claims about whether or not you should even care about the number of 'yous' that are living or dying, whatever 'you' think you are.

This argument from confusion doesn't shift the decision either way, so it could just as well be an argument for signing up as against signing up; similarly for immediate suicide, or against that. On net, this argument doesn't move anything, because there is no default to fall back to once you get more confused.

4steven046114y
I'd say the argument from confusion argues more strongly against benefits that are more inferential steps away. E.g., maybe it supports eating ice cream over cryonics but not necessarily existential risk reduction over cryonics.
1Will_Newsome14y
Correct: it is simply an argument against certainty in either direction. It is the certainty that I find worrisome, not the conclusion. Now that I look back, I think I failed to duly emphasize the symmetry of my arguments.
0Vladimir_Nesov14y
And which way is certainty? There is no baseline in beliefs, around the magical "50%". When a given belief diminishes, its opposite grows in strength. At which point are they in balance? Is the "normal" level of belief the same for everything? Russell's teapot? The sky is blue?
0Will_Newsome14y
Here I show my ignorance. I thought that I was describing the flattening of a probability distribution for both the propositions 'I will reflectively endorse that signing up for cryonics was the best thing to do' and 'I will reflectively endorse that not signing up for cryonics was the best thing to do'. (This is very different from the binary distinction 'Signing up for cryonics is the current best course of action' and 'Not signing up for cryonics is the best current course of action'.) You seem to be saying that this is meaningless because I am not flattening the distributions relative to anything else, whereas I have the intuition that I should be flattening them towards the shape of some ignorance prior (I would like to point out that I am using technical terms I do not fully understand here: I am a mere novice in Bayesian probability theory (as distinct from Bayesianism)). I feel like you have made a valid point but that I am failing to see it.
5steven046114y
So it looks like what's going on is you have estimates for U(cryonics) and U(not cryonics), and structural confusion increases the variance for both these utilities, and Vladimir is saying this doesn't change the estimate of U(cryonics) - U(not cryonics), and you're saying it increases P(U(not cryonics) > U(cryonics)) if your estimate of U(cryonics) starts out higher, and both of you are right?
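A toy illustration (not from the thread) of this summary, under the entirely assumed model that both utility estimates are Gaussian with a fixed gap in means, with a shared noise level standing in for structural confusion. The expected difference is untouched, but the probability that the ordering flips climbs toward 50%:

```python
import random

def p_not_beats_cryo(mean_gap: float, sigma: float, trials: int = 200_000) -> float:
    """Estimate P(U(not cryonics) > U(cryonics)) when both utilities are noisy
    estimates; a larger sigma models more structural confusion."""
    wins = 0
    for _ in range(trials):
        u_cryo = random.gauss(mean_gap, sigma)  # starts out mean_gap higher on average
        u_not = random.gauss(0.0, sigma)
        if u_not > u_cryo:
            wins += 1
    return wins / trials

for sigma in (1.0, 5.0, 25.0):
    print(sigma, round(p_not_beats_cryo(mean_gap=1.0, sigma=sigma), 3))
# The expected gap stays 1.0 throughout, but the crossover probability rises toward 0.5.
```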
0Will_Newsome14y
That seems correct to me.
0Will_Newsome14y
This is a try at resolving my own confusion: Suppose there is a coin that is going to be flipped, and I have been told that it is biased towards heads, so I bet on heads. Suppose that I am then informed that it is in fact biased in a random direction: all of a sudden I should reconsider whether betting on heads is the best strategy. I might not switch to tails (there is a cost to switching, and anyway I had some evidence that heads was the direction of bias, even if that evidence later turned out to be less than totally informative), but I will move my estimate of success a lot closer to 50%. I seem to be arguing that when there's a lot of uncertainty about the model, I should assume any given P and not-P are equally likely, because this seems like the best ignorance prior for a binary event about which I have very little information. When one learns that there is a lot of structural/metaphysical uncertainty around the universe, identity, et cetera, one should revise the probability of any given obviously relevant P/not-P pair towards 50% each, and note that one would not be too surprised by any result being true (as one is expecting anything of everything to happen).

I am kind of disturbed by the idea of cryonics. Wouldn't it be theoretically possible to prove it doesn't work, assuming that it really doesn't? If the connections between neurons are lost in the process, then you have died.

4ata14y
Why? If it cannot work, then we would expect to find evidence that it cannot work, yes. But it sounds like you're starting from a specific conclusion and working backwards. Why do you want to "prove [it doesn't] work"? Alcor's FAQ has some information on the evidence indicating that cryonics preserves the relevant information. That depends on the preservation process starting quickly enough, though.
0Houshalter14y
Because if it doesn't, it's a waste of time.

Interesting post, but perhaps too much is being compressed into a single expression.

The niceness and weirdness factors of thinking about cryonics do not actually affect the correctness of cryonics itself. The correctness factor depends only on one's values and the weight of probability.

Not thinking one's own values through thoroughly enough to make an accurate evaluation is both irrational and a common failure mode. Miscalculating the probabilities is also a mistake, though perhaps more a mathematical error than a rationality error.

When these are the r... (read more)

1Nick_Tarleton14y
On niceness, good point. On weirdness, I'm not sure what you mean; if you mean "weird stuff and ontological confusion", that is uncertainty about one's values and truths.

I have been heavily leaning towards the anti-cryonics stance at least for myself with the current state of information and technology. My reasons are mostly the following.

I can see it being very plausible that somewhere along the line I would be subject to immense suffering to which death would have been a far better alternative, but that I would either be unable to take my life due to physical constraints or would lack the courage to do so (it takes quite some courage and persistent suffering to be driven to suicide, IMO). I see this as analogous... (read more)

Reason #7 not to sign up: There is a significant chance that you will suffer information-theoretic death before your brain can be subjected to the preservation process. Your brain could be destroyed by whatever it is that causes you to die (such as a head injury or massive stroke) or you could succumb to age-related dementia before the rest of your body stops functioning.

5JoshuaZ14y
With regard to dementia, it isn't at all clear that it will necessarily lead to information-theoretic death. We don't have a good enough understanding of dementia to know if the information is genuinely lost or just difficult to recover. The fact that people with many forms of dementia have more or less lucid periods, and periods where they can remember who people are and other times when they cannot, is all tentative evidence that the information is recoverable. Also, this argument isn't that strong an argument: it isn't going to alter whether or not it makes sense to sign up by more than, at the very most, an order of magnitude or so (relying on the chance of violent death and the chance that one will have dementia late in life).

Reason #5 to not sign up: Because life sucks.

0Will_Newsome14y
Huh, I think I may have messed up, because (whether I should admit it or not is unclear to me) I was thinking of you specifically when I wrote the second half of reason 4. Did I not adequately describe your position there?
0CronoDAS14y
You came pretty close.

Anyone else here more interested in cloning than cryonics?

Seems 100x more feasible.

8JoshuaZ14y
More feasible, yes, but not nearly as interesting a technology. What will cloning do? If we clone to make new organs, then it is a helpful medical technique, one among many. If we are talking about reproductive cloning, then that individual has no closer identity to me than an identical twin (indeed a bit less, since the clone won't share the same environment growing up). The other major advantage of cloning is that we could potentially use it to deliberately clone copies of smart people. But that's a pretty minor use, and fraught with its own ethical problems. And it would still take a long time to be useful. Let's say we get practical cloning tomorrow. Even if some smart person agreed to be cloned, we'd still need to wait around 12 years at the very minimum before the clone could be that useful. Cryonics is a much larger game changer than cloning.
3timtyler14y
Re: "Anyone else here more interested in cloning than cryonics?" Sure. Sexual reproduction is good too.
2Nick_Tarleton14y
Interested in what way? Do you see it as a plausible substitute good from the perspective of your values?
0Daniel_Burfoot14y
Yes. If cloning were an option today, and I were forced to choose cloning vs. cryonics, I would choose the former.
4Nick_Tarleton14y
What benefit do you see in having a clone of you?
1Daniel_Burfoot14y
I think by raising my own clone, I could produce a "more perfect" version of myself. He would have the same values, but an improved skill set and better life experiences.
2DanielVarga14y
You know what, I am quite content with a 50% faithful clone of myself. It is even possible that there is some useful stuff in that other 50%.
1Emile14y
Do you have any convincing reasons to believe that? How do you account for environmental differences?
-1Sniffnoy14y
What exactly would "choosing cloning" consist of?
0[anonymous]14y
Interested in what way? Do you highly value the existence of organisms with your genome?

I don't understand the big deal with this. Is it just selfishness? You don't care how good the world will be, unless you're there to enjoy it?

There's a much better, simpler reason to reject cryonics: it isn't proven. There might be some good signs and indications, but it's still rather murky in there. That being said, it's rather clear from prior discussion that most people in this forum believe that it will work. I find it slightly absurd, to be honest. You can talk a lot about uncertainties and supporting evidence and burden of proof and so on, but the simple fact remains the same. There is no proof cryonics will work, either right now, 20, or 50 years in the future. I hate to sound so cynical... (read more)

9JoshuaZ14y
This is a very bad argument. First, all claims are probabilistic, so it isn't even clear what you mean by proof. Second, by the exact same logic I could say that one shouldn't try anything involving technology that doesn't exist yet, because we don't know if it will actually work. So the argument has to fail.
5Morendil14y
That's a widely acknowledged fact. And, if you make that your actual reason for rejecting cryonics, there are some implications that follow from that: for instance, that we should be investing massively more in research aiming to provide proof than we currently are. The arguments we tend to hear are more along the lines of "it's not proven, it's an expensive eccentricity, it's morally wrong, and besides even if it were proved to work I don't believe I'd wake up as me so I wouldn't want it".
3Blueberry14y
I have no idea whether it will work, but right now, the only alternative is death. I actually think it's unlikely that people preserved now will ever be revived, more for social and economic reasons than technical ones.
4Baughn14y
How much do you believe it would cost? Inasmuch as I'm for cryopreservation (though I'm having some trouble finding a way to do it in Norway - well, I'll figure something out), I've also decided to be the kind of person who would, if still alive once revival becomes technically possible, pay to revive as many people as I can afford. I tend to assume that other cryopreservationists think the same way. This means the chance of being revived, assuming nobody else wants to pay for it (including a possible FAI), is related to the proportion of cryopreservationists who are still alive, divided by the cost of reviving someone as a portion of their average income at the time. Thus, I wonder - how costly will it be?
1Blueberry14y
Once the infrastructure and technology for revival is established, it won't be very costly. The economic problem is getting that infrastructure and technology established in the first place. I would guess you're far more altruistic than most people. Really, as many as you can afford?
5Baughn14y
It's not altruism, it's selfishness. I'm precommitting myself to reviving others, if I have the opportunity; on the assumption that others do the same, this means the marginal benefit to me from signing up for cryopreservation goes up. And, admittedly, I expect to have a considerable amount of disposable income. "As many as I can afford" means "while maintaining a reasonable standard of living", but "reasonable" is relative; by deliberately not increasing it too much from what I'm used to as a student, I can get more slack without really losing utilons. It helps that my hobbies are, by and large, very cheap. Hiking and such. ;)