There are a lot of steps that all need to go correctly for cryonics to work. People who have gone through the potential problems and assigned probabilities have come up with odds of success between 1:4 and 1:435. About a year ago I went through and collected estimates, finding other people's and making my own, and I've been maintaining these in a googledoc.
Yesterday, on the bus back from the NYC mega-meetup with a group of people from the Cambridge LessWrong meetup, I got more people to give estimates for these probabilities. We started with my list of potential problems, and I explained the model and how independence works in it [1]. For each question everyone decided on their own answer before we went around and shared them (to reduce anchoring). Because people may still adjust toward answers they've already heard, I tried to randomize the order in which I asked people for their estimates. My notes are here. [2]
The questions were:
- You die suddenly or in a circumstance where you would not be able to be frozen in time.
- You die of something where the brain is degraded at death.
- You die in a hospital that refuses access to you by the cryonics people.
- After death your relatives reject your wishes and don't let the cryonics people freeze you.
- Some law is passed that prohibits cryonics before you die.
- The cryonics people make a mistake in freezing you.
- Not all of what makes you you is encoded in the physical state of the brain (or whatever you would have preserved).
- The current cryonics process is insufficient to preserve everything (even when perfectly executed).
- All people die (existential risks).
- Society falls apart (global catastrophic non-existential risks).
- Some time after you die cryonics is outlawed.
- All cryonics companies go out of business.
- The cryonics company you chose goes out of business.
- Your cryonics company screws something up and you are defrosted.
- It is impossible to extract all the information preserved in the frozen brain.
- The technology is never developed to extract the information.
- No one is interested in your brain's information.
- It is too expensive to extract your brain's information.
- Reviving people in simulation is impossible.
- The technology is never developed to run people in simulation.
- Running people in simulation is outlawed.
- No one is interested in running you in simulation.
- It is too expensive to run you in simulation.
- Other.
To see people's detailed responses have a look at the googledoc, but the bottom-line numbers were:
| person | chance of failure | odds of success |
|--------|-------------------|-----------------|
| Kelly  | 35%  | 1:2    |
| Jim    | 80%  | 1:5    |
| Mick   | 89%  | 1:9    |
| Julia  | 96%  | 1:23   |
| Ben    | 98%  | 1:44   |
| Jeff   | 100% | 1:1500 |
(These are all rounded, but one of the two should have enough resolution for each person.)
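For anyone who wants to play with the numbers, here's a minimal sketch of how per-question estimates combine into bottom-line figures like these, assuming straight multiplication of conditional success probabilities as described in footnote [1]. The example numbers are made up, not anyone's actual answers:

```python
# Sketch: each p is "chance this step blocks revival, given that all
# earlier steps went fine", so the per-step success probabilities multiply.
def combine(failure_probs):
    p_success = 1.0
    for p in failure_probs:
        p_success *= 1.0 - p
    return p_success

# Made-up example, not anyone's actual estimates:
p_success = combine([0.30, 0.10, 0.25, 0.50])
print(f"chance of failure: {1 - p_success:.0%}")     # 76%
print(f"odds of success: 1:{round(1 / p_success)}")  # 1:4
```

This is also why the two columns above can look slightly inconsistent after rounding: they're the same underlying number expressed two ways.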
The most significant way my estimate differed from the others' turned out to be on "the current cryonics process is insufficient to preserve everything". On that question alone we have:
| person | chance of failure |
|--------|-------------------|
| Kelly  | 0%  |
| Jim    | 35% |
| Mick   | 15% |
| Julia  | 60% |
| Ben    | 33% |
| Jeff   | 95% |
My estimate for this used to be more positive, but it was significantly brought down by reading this LessWrong comment:
> Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and interactions with ER and mitochondria there). I also work on membranes and the effect on lipid composition in the opposing leaflets for all the organelles involved. Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can't simply replace unfolded proteins, since their relative position and concentration (and modification, and current status in several different signalling pathways) determines what happens to the signals that go through that synapse; you would have to replace them manually, which is a) impossible to do without destroying surrounding membrane, and b) would take thousands of years at best, even if you assume maximally efficient robots doing it (during which period molecular drift would undo the previous work).
>
> Etc, etc. I can't even begin to cover complications I see as soon as I look at what's happening here. I'm all for life extension, I just don't think cryonics is a viable way to accomplish it.
In the responses to their comment they go into more detail.
Should I be giving this information this much weight? "many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted" seems critical.
Other questions on which I was substantially more pessimistic than others were "all cryonics companies go out of business", "the technology is never developed to extract the information", "no one is interested in your brain's information", and "it is too expensive to extract your brain's information".
I also posted this on my blog.
[1] Specifically, each question is asking you "the chance that X happens and this keeps you from being revived, assuming that all of the previous steps succeeded". So if both A and B would keep you from being successfully revived, and I ask them in that order, but you think they're basically the same question, then only A gets a probability while B gets 0 or close to it (because B is technically "B given not-A").
[2] For some reason I was writing ".000000001" when people said "impossible". For the purposes of this model '0' is fine, and that's what I put on the googledoc.
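To make footnote [1] concrete, here's a tiny hedged example (the events and numbers are invented) of why two failure modes you consider basically identical shouldn't both get full weight:

```python
# Footnote [1] in code: each question asks for P(step fails | all earlier
# steps succeeded).  If you think listed failure modes A and B are really
# the same event, then B is effectively "B given not-A" and gets ~0.
p_A = 0.20             # P(A blocks revival)
p_B_given_not_A = 0.0  # B adds nothing new once A hasn't happened

p_success = (1 - p_A) * (1 - p_B_given_not_A)
print(p_success)  # 0.8 -- same as if only A were on the list
```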