Consider the following commonly-made argument: cryonics is unlikely to work. Trained rationalists are signed up for cryonics at rates much greater than the general population. Therefore, rationalists must be pretty gullible people, and their claims to be good at evaluating evidence must be exaggerations at best.
This argument is wrong, and we can prove it using data from the last two Less Wrong surveys.
The question at hand is whether rationalist training - represented here by extensive familiarity with Less Wrong material - makes people more likely to believe in cryonics.
We investigate with a cross-sectional study, looking at proto-rationalists versus experienced rationalists. Define proto-rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for less than six months and have zero karma (usually indicative of never having posted a comment). And define experienced rationalists as those respondents to the Less Wrong survey who indicate they have been in the community for over two years and have >1000 karma (usually indicative of having written many well-received posts).
By these definitions, there are 93 proto-rationalists, who have been in the community an average of 1.3 months, and 134 experienced rationalists, who have been in the community an average of 4.5 years. Proto-rationalists generally have not read any rationality training material - only 20/93 had read even one-quarter of the Less Wrong Sequences. Experienced rationalists are, well, more experienced: two-thirds of them have read pretty much all the Sequence material.
Proto-rationalists thought that, on average, there was a 21% chance of an average cryonically frozen person being revived in the future. Experienced rationalists thought that, on average, there was a 15% chance of same. The difference was marginally significant (p < 0.1).
Marginal significance is a copout, but this isn't our only data source. Last year, using the same definitions, proto-rationalists assigned a 15% probability to cryonics working, and experienced rationalists assigned a 12% chance. We see the same pattern.
So experienced rationalists are consistently less likely to believe in cryonics than proto-rationalists, and rationalist training probably makes you less likely to believe cryonics will work.
On the other hand, 0% of proto-rationalists had signed up for cryonics compared to 13% of experienced rationalists. 48% of proto-rationalists rejected the idea of signing up for cryonics entirely, compared to only 25% of experienced rationalists. So although rationalists are less likely to believe cryonics will work, they are much more likely to sign up for it. Last year's survey shows the same pattern.
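The signup gap can be sanity-checked with a quick two-proportion z-test. This is a sketch, not part of the original analysis: the count of 17 signed-up experienced rationalists is an assumption back-calculated from the 13% figure, not a number taken from the survey writeup.

```python
import math

# Two-proportion z-test on cryonics signup rates, assuming the survey
# counts above: 0 of 93 proto-rationalists signed up, versus roughly
# 17 of 134 experienced rationalists (13%; exact count is an assumption).
n1, signed1 = 93, 0
n2, signed2 = 134, 17

p1, p2 = signed1 / n1, signed2 / n2
p_pool = (signed1 + signed2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se
# Two-sided p-value from the standard normal CDF.
p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

print(f"z = {z:.2f}, two-sided p = {p_value:.4f}")  # z = 3.57, two-sided p = 0.0004
```

So unlike the belief difference, which was only marginally significant, the behavioral difference is unlikely to be noise under these assumed counts.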
This is not necessarily surprising. It only indicates that experienced rationalists and proto-rationalists treat their beliefs in different ways. Proto-rationalists form a belief, play with it in their heads, and then do whatever they were going to do anyway - usually some variant on what everyone else does. Experienced rationalists form a belief, examine the consequences, and then act strategically to get what they want.
Imagine a lottery run by an incompetent official who accidentally sets it up so that the average payoff is far more than the average ticket price. For example, maybe the lottery sells only ten $1 tickets, but the jackpot is $1 million, so that each $1 ticket gives you a 10% chance of winning $1 million.
Goofus hears about the lottery and realizes that his expected gain from playing the lottery is $99,999. "Huh," he says, "the numbers say I could actually win money by playing this lottery. What an interesting mathematical curiosity!" Then he goes off and does something else, since everyone knows playing the lottery is what stupid people do.
Gallant hears about the lottery, performs the same calculation, and buys up all ten tickets.
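The arithmetic both of them perform is trivial, which is the point; a sketch:

```python
# The misconfigured lottery: ten $1 tickets, one $1,000,000 jackpot,
# so each ticket wins with probability 1/10.
ticket_price = 1
jackpot = 1_000_000
n_tickets = 10
p_win = 1 / n_tickets

# Goofus's calculation: expected gain from one ticket.
ev_per_ticket = p_win * jackpot - ticket_price
print(ev_per_ticket)  # 99999.0

# Gallant's move: buy every ticket and win with certainty.
profit_all_tickets = jackpot - n_tickets * ticket_price
print(profit_all_tickets)  # 999990
```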
The relevant difference between Goofus and Gallant is not skill at estimating the chances of winning the lottery. We can even change the problem so that Gallant is more aware of the unlikelihood of winning than Goofus - perhaps Goofus mistakenly believes there are only five tickets, and so Gallant's superior knowledge tells him that winning the lottery is even more unlikely than Goofus thinks. Gallant will still play, and Goofus will still pass.
The relevant difference is that Gallant knows how to take ideas seriously.
Taking ideas seriously isn't always smart. If you're the sort of person who falls for proofs that 1 = 2 , then refusing to take ideas seriously is a good way to avoid ending up actually believing that 1 = 2, and a generally excellent life choice.
On the other hand, progress depends on someone somewhere taking a new idea seriously, so it's nice to have people who can do that too. Helping people learn this skill and when to apply it is one goal of the rationalist movement.
In this case it seems to have been successful. Proto-rationalists think there is a 21% chance of a new technology making them immortal - surely an outcome as desirable as any lottery jackpot - consider it an interesting curiosity, and go do something else because only weirdos sign up for cryonics.
Experienced rationalists think there is a lower chance of cryonics working, but some of them decide that even a pretty low chance of immortality sounds pretty good, and act strategically on this belief.
This is not to either attack or defend the policy of assigning a non-negligible probability to cryonics working. This is meant to show only that the difference in cryonics status between proto-rationalists and experienced rationalists is based on meta-level cognitive skills in the latter whose desirability is orthogonal to the object-level question about cryonics.
(an earlier version of this article was posted on my blog last year; I have moved it here now that I have replicated the results with a second survey)
How I'm now on the fence about whether to sign up for cryonics
I'm not currently signed up for cryonics. In my social circle, that makes me a bit of an oddity. I disagree with Eliezer Yudkowsky; heaven forbid.
My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. It's neutral from a point of pleasure vs suffering for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on.
(I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.)
I think the deeper cause behind my rejection may be that I don't have enough to protect. Individuals may be unique, but as an individual, I'm fairly replaceable. All the things I'm currently doing can and are being done by other people. I'm not the sole support person in anyone's life, and if I were, I would be trying really, really hard to fix the situation. Part of me is convinced that wanting to personally survive and thinking that I deserve to is selfish and un-virtuous or something. (EDIT: or that it's non-altruistic to value my life above the amount Givewell thinks is reasonable to save a life–about $5,000. My revealed preference is that I obviously value my life more than this.)
However, I don't think cryonics is wrong, or bad. It has obvious upsides, like being the only chance an average citizen has right now to do something that might lead to them not permanently dying. I say "average citizen" because people working on biological life extension and immortality research are arguably doing something about not dying.
When queried, my brain tells me that it's doing an expected-value calculation and the expected value of cryonics to me is too low to justify the costs; it's unlikely to succeed, and the only reason some people get a positive expected value for it is that they're multiplying that tiny probability by the huge, huge number they place on the value of their own lives. And my number doesn't feel big enough to outweigh those odds at that price.
Putting some numbers in that
If my brain thinks this is a matter of expected-value calculations, I ought to do one. With actual numbers, even if they're made-up, and actual multiplication.
So: my death feels bad, but not infinitely bad. Obvious thing to do: assign a monetary value. Through a variety of helpful thought experiments (how much would I pay to cure a fatal illness if I were the only person in the world with it and research wouldn't help anyone but me and I could otherwise donate the money to EA charities; does the awesomeness of 3 million dewormings outweigh the suckiness of my death; is my death more or less sucky than the destruction of a high-end MRI machine), I've converged on a subjective value for my life of about $1 million. Like, give or take a lot.
Cryonics feels unlikely to work for me. I think the basic principle is sound, but if someone were to tell me that cryonics had been shown to work for a human, I would be surprised. That's not a number, though, so I took the final result of Steve Harris' calculations here (inspired by the Sagan-Drake equation). His optimistic number is a 0.15 chance of success, or 1 in 7; his pessimistic number is 0.0023, or less than 1/400. My brain thinks 15% is too high and 0.23% sounds reasonable, but I'll use his numbers for upper and lower bounds.
I started out trying to calculate the expected cost by some convoluted method where I was going to estimate my expected chance of dying each year, repeatedly subtract it from one, and multiply by the amount I'd pay each year, to calculate how much I could expect to pay in total. Benquo pointed out to me that calculations like this are usually done using perpetuities, or PV calculations, so I made one in Excel and plugged in some numbers, approximating the Alcor annual membership fee as $600. Assuming my own discount rate is somewhere between 2% and 5%, I ran two calculations with those numbers. For 2%, the total expected, time-discounted cost would be $30,000; for a 5% discount rate, $12,000.
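The perpetuity formula makes this a one-liner; a sketch of the calculation above (the $600 fee is the post's own approximation):

```python
# Present value of a perpetuity paying C per year at discount rate r: PV = C / r.
annual_fee = 600  # approximate Alcor annual membership fee

pv_2pct = annual_fee / 0.02
pv_5pct = annual_fee / 0.05
print(pv_2pct, pv_5pct)  # 30000.0 12000.0
```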
Excel also lets you do calculations on perpetuities that aren't perpetual, so I plugged in 62 years, the time by which I'll have a 50% chance of dying according to this actuarial table. It didn't change the final results much; $11,417 for a 5% discount rate and $21,000 for the 2% discount rate.
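The finite version is an ordinary annuity; a sketch reproducing those numbers (the 2% figure comes out closer to $21,200 than a round $21,000):

```python
# PV of an annuity paying C per year for n years at rate r:
# PV = C * (1 - (1 + r)**-n) / r
annual_fee = 600
n_years = 62  # years until ~50% cumulative chance of death per the actuarial table

pv_2pct = annual_fee * (1 - 1.02 ** -n_years) / 0.02
pv_5pct = annual_fee * (1 - 1.05 ** -n_years) / 0.05
print(round(pv_2pct), round(pv_5pct))  # 21212 11417
```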
That's not including the life insurance payout you need to pay for the actual freezing. So, life insurance premiums. Benquo's plan is five years of $2200 a year and then nothing from then on, which apparently isn't uncommon among plans for young healthy people. I could probably get something as good or better; I'm younger. So, $11,000 for total life insurance premiums. If I went with permanent annual payments, I could do a perpetuity calculation instead.
In short: around $40,000 total, rounding up.
What's my final number?
There are two numbers I can output. When I started this article, one of them seemed like the obvious end product, so I calculated that. When I went back to finish this article days later, I walked through all the calculations again while writing the actual paragraphs, did what seemed obvious, ended up with a different number, and realized I'd calculated a different thing. So I'm not sure which one is right, although I suspect they're symmetrical.
If I multiply the value of my life by the success chance of cryonics, I get a number that represents (I think) the monetary value of cryonics to me, given my factual beliefs and values. It would go up if the value of my life to me went up, or if the chances of cryonics succeeding went up. I can compare it directly to the actual cost of cryonics.
I take $1 million and plug in either 0.15 or 0.0023, and I get $150,000 as an upper bound and $2,300 as a lower bound, to compare to a total cost somewhere in the ballpark of $40,000.
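As multiplication, using the bounds above:

```python
# Monetary value of cryonics to me = (value of my life) * P(cryonics works).
value_of_life = 1_000_000
total_cost = 40_000  # rounded-up lifetime cost from above

ev_optimistic = value_of_life * 0.15    # Harris's optimistic estimate
ev_pessimistic = value_of_life * 0.0023  # Harris's pessimistic estimate
print(f"${ev_optimistic:,.0f} vs ${ev_pessimistic:,.0f}")  # $150,000 vs $2,300
```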
If I take the price of cryonics and divide it by the chance of success (because if I sign up, I'm optimistically paying for 100 worlds of which I survive in 15, or pessimistically paying for 10,000 worlds in which I survive in 23), I get the total expected cost per my life being saved, which I can compare to the figure I place on the value of my life. It goes down if the cost of cryonics goes down or the chances of success go up.
I plug in my numbers and get a lower bound of $267,000 and an upper bound of about $17 million.
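The same bounds, computed as division:

```python
# Expected cost per life saved = (total cost of cryonics) / P(cryonics works).
total_cost = 40_000

cost_per_life_optimistic = total_cost / 0.15
cost_per_life_pessimistic = total_cost / 0.0023
print(f"${cost_per_life_optimistic:,.0f}")   # $266,667
print(f"${cost_per_life_pessimistic:,.0f}")  # $17,391,304
```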
In both those cases, the optimistic success estimates make it seem worthwhile and the pessimistic success estimates don't, and my personal estimate of cryonics succeeding falls closer to pessimism. But it's close. It's a lot closer than I thought it would be.
Updating somewhat in favour of the possibility that I'll end up signing up for cryonics.
Fine-tuning and next steps
I could get better numbers for the value of my life to me. It's kind of squicky to think about, but that's a bad reason. I could ask other people about their numbers and compare what they're accomplishing in their lives to my own life. I could do more thought experiments to better acquaint my brain with how much value $1 million actually is, because scope insensitivity. I could do upper and lower bounds.
I could include the cost of organizations cheaper than Alcor as a lower bound; the info is all here and the calculation wouldn't be too nasty but I have work in 7 hours and need to get to bed.
I could do my own version of the cryonics success equation, plugging in my own estimates. (Although I suspect this data is less informed and less valuable than what's already there).
I could ask what other people think. Thus, write this post.
(First time poster, long time reader)
I'm currently volunteering for the Brain Preservation Foundation (http://www.brainpreservation.org/), and I'd like to ask for your help.
The purpose of the BPF is to incentivize and evaluate the development of technology which can preserve a human brain in such intricate detail that all of the brain's cells and connections are preserved. It's the only prize of its kind for a relatively endangered, yet essential type of research.
We run a cash prize ($100,000 USD) called the "Brain Preservation Technology Prize" for the first team that can preserve a large mammal's brain to our high standards. The first $25,000 of that prize goes to the first team that can preserve the ultrastructure of a mouse brain.
Steve Aoki (http://steveaoki.com/), a musician that you might have heard of, is currently planning to give around $50,000 to one of four brain-related charities. One of these charities is the Brain Preservation Foundation! Whichever charity gets the most votes will win all the money.
This money is critically important to us to get the necessary supplies and lab time to administer the brain preservation technology prize. Evaluating brains that people send us involves electron microscopy, which is quite expensive (around $8,000 to evaluate a brain!). We are currently receiving submissions, and this extra money will give us the funds we need to run the prize.
To vote, just visit http://on.fb.me/15XFdTG, and click the "like" button by the "Brain Preservation Foundation" comment. You can see a graph of the votes at http://aurellem.org/bpf/votes.png (updates every 15 minutes). Thanks for taking the time to read!
I'd also love to hear your own opinions on the BPF and your assessment of its effectiveness, as well as your thoughts on chemopreservation vs cryopreservation.
There are a lot of steps that all need to go correctly for cryonics to work. People who have gone through the potential problems, assigning probabilities to each, have come up with odds of success between 1:4 and 1:435. About a year ago I went through and collected estimates, finding other people's and making my own. I've been maintaining these in a googledoc.
Yesterday, on the bus back from the NYC mega-meetup with a group of people from the Cambridge LessWrong meetup, I got more people to give estimates for these probabilities. We started with my list of potential problems, and I explained the model and how independence works in it. For each question everyone decided on their own answer, and then we went around and shared our answers (to reduce anchoring). Because some people will still adjust based on others' answers, I tried to randomize the order in which I asked people for their estimates. My notes are here.
The questions were:
- You die suddenly or in a circumstance where you would not be able to be frozen in time.
- You die of something where the brain is degraded at death.
- You die in a hospital that refuses access to you by the cryonics people.
- After death your relatives reject your wishes and don't let the cryonics people freeze you.
- Some law is passed that prohibits cryonics before you die.
- The cryonics people make a mistake in freezing you.
- Not all of what makes you you is encoded in the physical state of the brain (or whatever you would have preserved).
- The current cryonics process is insufficient to preserve everything (even when perfectly executed).
- All people die (existential risks).
- Society falls apart (global catastrophic non-existential risks).
- Some time after you die cryonics is outlawed.
- All cryonics companies go out of business.
- The cryonics company you chose goes out of business.
- Your cryonics company screws something up and you are defrosted.
- It is impossible to extract all the information preserved in the frozen brain.
- The technology is never developed to extract the information.
- No one is interested in your brain's information.
- It is too expensive to extract your brain's information.
- Reviving people in simulation is impossible.
- The technology is never developed to run people in simulation.
- Running people in simulation is outlawed.
- No one is interested in running you in simulation.
- It is too expensive to run you in simulation.
To see people's detailed responses have a look at the googledoc, but bottom line numbers were:
(table: each person's overall chance of failure and corresponding odds of success; the individual numbers are in the googledoc)
(These are all rounded, but one of the two should have enough resolution for each person.)
The most significant way my estimate differs from others turned out to be for "the current cryonics process is insufficient to preserve everything". On that question alone we have:
(table: each person's chance of failure on this step; the individual numbers are in the googledoc)
My estimate for this used to be more positive, but it was significantly brought down by reading this lesswrong comment:
Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and interactions with ER and mitochondria there). I also work on membranes and the effect on lipid composition in the opposing leaflets for all the organelles involved.
Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can't simply replace unfolded proteins, since their relative position and concentration (and modification, and current status in several different signalling pathways) determines what happens to the signals that go through that synapse; you would have to replace them manually, which is a) impossible to do without destroying surrounding membrane, and b) would take thousands of years at best, even if you assume maximally efficient robots doing it (during which period molecular drift would undo the previous work).
Etc, etc. I can't even begin to cover complications I see as soon as I look at what's happening here. I'm all for life extension, I just don't think cryonics is a viable way to accomplish it.
In the responses to their comment they go into more detail.
Should I be giving this information this much weight? "many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted" seems critical.
Other questions on which I was substantially more pessimistic than others were "all cryonics companies go out of business", "the technology is never developed to extract the information", "no one is interested in your brain's information", and "it is too expensive to extract your brain's information".
I also posted this on my blog
 Specifically, each question is asking you "the chance that X happens and this keeps you from being revived, assuming that all of the previous steps succeeded". So if both A and B would keep you from being successfully revived, and I ask them in that order, but you think they're basically the same question, then basically only A gets a probability while B gets 0 or close to it (because B is technically "B given not-A").
 For some reason I was writing ".000000001" when people said "impossible". For the purposes of this model '0' is fine, and that's what I put on the googledoc.
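The chained-conditional model can be sketched in a few lines. The step names and probabilities below are purely illustrative, not anyone's actual survey answers:

```python
# Each number is the conditional chance that a step fails, given that every
# earlier step succeeded; overall success is the product of the per-step
# survival probabilities. Figures are illustrative only.
step_failure = [
    ("die in an unfreezable way", 0.15),
    ("preservation loses essential information", 0.50),
    ("organization fails before revival", 0.30),
    ("revival never happens", 0.40),
]

p_success = 1.0
for step, p_fail in step_failure:
    p_success *= 1 - p_fail

print(f"overall success: {p_success:.4f} (about 1:{1 / p_success - 1:.1f} odds)")
# overall success: 0.1785 (about 1:4.6 odds)
```

Because the probabilities are conditional, double-counting a failure mode across two questions (as in the footnote above) would wrongly shrink the product twice.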
At the end of CFAR's July Rationality Minicamp, we had a party with people from the LW/SIAI/CFAR community in the San Francisco Bay area. During this party, I had a conversation with the girlfriend of a participant in a previous minicamp, who was not signed up for cryonics (her boyfriend was). The conversation went like this:
me: So, you know what cryonics is?
me: And you think it's a good idea?
me: And you are not signed up yet?
me: And you would like to be?
me: Wait a minute while I get my laptop.
And I got my laptop, pointed my browser at Rudi Hoffman's quote request form, and said, "Here, fill out this form". And she did.
Though cryonics has been practiced for forty years, its techniques have improved only slowly; its few customers can only induce a tiny research effort. The much larger brain research community, in contrast, has been rapidly improving their ways to do fast cheap detailed 3D brain scans, and to prepare samples for such scans. You see, brain researchers need ways to stop brain samples from changing, and to be strong against scanning disruptions, just so they can study brain samples at their leisure.
Most people, given the option to halt aging and continue in good health for centuries, would. Anti-aging research is popular, but medicine is only minimally increasing lifespan for healthy adults. You, I, and everyone we know have bodies that are incredibly unlikely to make it past 120. They're just not built to last.
But what are you, really? Your personality, your memories, they don't leave you when you lose a leg. Lose most parts of your body and you're still you. Lose your brain and that's it.  You are a pattern, instantiated in the neurons of your brain. That pattern is sustained by your body, growing and changing as you learn and experience the world. Your body supports you for years, but it deteriorates and eventually isn't up to the task any more. Is that 'game over'?
Perhaps we could scan people's brains at extremely high detail so we could run them in some sort of human emulator. This requires a thorough understanding of the brain, huge amounts of storage, unbelievably fast computers, and very detailed scanning. If it's even possible, it may be several hundred years away.
Our bodies aren't going to last that long, but what if we could figure out how to preserve our brains so that the information didn't decay? Then, if the future turned out to be one in which we had advanced brain emulation and scanning technology, we could be revived. I don't know if people in the future would want to spend the time or money to revive us, but in a future with technology this advanced, reviving a preserved brain as a computer simulation could be really cheap.
The most advanced technology for long-term tissue preservation today is cryonics: freezing with vitrification. You add something to the blood that keeps ice crystals from forming and then freeze it. (This is loosely similar to how freeze-tolerant frogs survive the winter, though frogs actually limit and tolerate ice formation rather than preventing it entirely.) The biggest organs that have been successfully brought back to working order after vitrification are rabbit kidneys, and the brain is a lot bigger and much more complex. While there are people applying this technique to human brains after death, it's very much a one-way street; we can't revive them with current technology.
How much should it worry us that we can't reverse this freezing process? If we're already talking about revival via high-detail scanning and emulation, which is only practical after hundreds of years of technological development, does it matter that we can't currently reverse it? The real question in determining whether vitrification is sufficient is whether we're preserving all the information in your brain. If something critical is missing, or if something about our current freezing process loses information, the brains we think are properly preserved might be damaged or deteriorated beyond repair. Without a round trip test where we freeze and then revive a brain, we don't know whether what we're doing will work.
Another issue is that once you've frozen the brain you need to keep it cold for a few centuries at least. Liquid nitrogen is pretty cheap, but providing it constantly over such a long time is hard. Organizations fall apart: very few stay in business for even 100 years, and those that do often have departed from their original missions. Current cryonics organizations seem no different from others, with financial difficulties and imperfect management, so I don't think 200+ years of full functioning is very likely.
Even if nothing goes wrong with the organization itself, will our society last that long? Nuclear war, 'ordinary' war, bioterrorism, global warming, plagues, and future technologies all pose major risks. Even if these don't kill everyone, they might disrupt the cryonics organizations or stop technological development such that revival technology is never developed.
Taking all these potential problems and risks into account, it's unlikely that you can get around death by signing up for cryonics. In attempts to calculate overall odds for success from estimated chances of each step I've seen various numbers: 1:3, 1:4, 1:7, 1:15 and 1:400. I'm even more pessimistic: I calculated 1:600 when I first posted to lesswrong and have since revised down to 1:1000. To some people the probability doesn't matter, but because it's expensive and there are plenty of other things one can do with money, I don't think it's obviously the sensible thing to do.
(I also posted this on my blog.)
 Well, lose your heart and you're gone too. Except that we can make mechanical hearts and you stay the same person on receiving one. Not so much with a mechanical brain.
 Plastination is also an option, but it's not yet to a point where we can do it on even a mouse brain.
Summary: medical progress has been much slower than even recently predicted.
In the February and March 1988 issues of Cryonics, Mike Darwin (Wikipedia/LessWrong) and Steve Harris published a two-part article “The Future of Medicine” attempting to forecast the medical state of the art for 2008. Darwin has republished it on the New_Cryonet email list.
Darwin is a pretty savvy forecaster (you may remember him correctly predicting, in 1981's "The High Cost of Cryonics"/part 2, ALCOR's recent troubles with grandfathering), so given my standing interest in tracking predictions, I read it with great interest; but they still blew most of their forecasts, and not the ones we would prefer them to've.
The full essay is ~10k words, so I will excerpt roughly half of it below; feel free to skip to the reactions section and other links.
From Mike Darwin's Chronopause, an essay titled "Would You Like Another Plate of This?", discussing people's attitudes to life:
The most important, the most obvious and the most factual reason why cryonics is not more widely accepted is that it fails the “credibility sniff test” in that it makes many critical assumptions which may not be correct...In other words, cryonics is not proven. That is a plenty valid reason for rejecting any costly procedure; dying people do this kind of thing every day for medical procedures which are proven, but which have a very low rate of success and (or) a very high misery quotient. Some (few) people have survived metastatic head/neck cancer – the film critic Roger Ebert, is an example (Figure 1). However, the vast majority of patients who undergo radical neck surgery for cancer die anyway. For the kind and extent of cancer Ebert had, the long term survival rate (>5 years) is ~5% following radical neck dissection and ancillary therapy: usually radiation and chemotherapy. This is thus a proven procedure – it works – and yet the vast majority of patients refuse it.