There are also some less-traditional paths-to-lose:
Your cryopreservation subscription fees prevent you from buying something else that ends up saving your life (or someone else's).
You would never die anyway, so your cryopreservation fees only cost you pre-singularity utilons (or cost the others you would have given the money to).
Simulation is possible, but for some reason it is much "thinner" than reality; that is, a given simulation, even running on a computer in a quantum MWI, follows only a very limited number of quantum branches, and so has only a tiny impact on the measure of the set of future happy versions of you (smaller even than that of the plain old non-technological-quantum-immortality versions who simply didn't happen to die).
You are resurrected by a future UFAI in a hell-world. For instance, in order to get one working version of you, the UFAI must create many almost-versions which are painfully insane; and its ethics say that's OK. And it does this to all corpsicles it finds but not to any other dead people.
I have strong opinions about the likelihood of these (I'd put one at p>99% and another at p<1%), but in any case they're worth mentioning.
Hmm, regarding quantum immortality, I did think about it. Taken to its extreme, I could perform quantum suicide while tying the result of the quantum draw to the lottery. Then it occurred to me that the vast majority of worlds, the ones in which I did not win the lottery, would each contain one more sad mother. Such a situation scores far lower in my utility function than the status quo does.
I feel I should treat quantum suicide by cryostination the same way. The only problem is that the status quo bias works against me this time.
There are a lot of steps that all need to go right for cryonics to work. People who have gone through the potential problems and assigned probabilities have come up with odds of success between 1:4 and 1:435. About a year ago I went through and collected estimates, finding other people's and making my own. I've been maintaining these in a googledoc.
Yesterday, on the bus back from the NYC mega-meetup with a group of people from the Cambridge LessWrong meetup, I got more people to give estimates for these probabilities. We started with my list of potential problems, and I explained the model and how independence works in it [1]. For each question everyone decided on their own answer, and then we went around and shared our answers (to reduce anchoring). Because some people will still adjust toward others' answers, I tried to randomize the order in which I asked people for their estimates. My notes are here. [2]
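To make the model concrete, here's a minimal sketch of how the per-step estimates combine; the numbers are placeholders for illustration, not anyone's actual estimates from the googledoc:

    # Each entry is a placeholder for
    # P(this step fails and blocks revival | all earlier steps succeeded).
    conditional_failure_probs = [
        0.10,  # e.g. dying in a way that destroys the brain
        0.25,  # e.g. the cryonics process not preserving everything
        0.20,  # e.g. all cryonics companies going out of business
        0.30,  # e.g. extraction technology never being developed
    ]

    # Reviving successfully requires every step to succeed.
    p_success = 1.0
    for p_fail in conditional_failure_probs:
        p_success *= 1 - p_fail

    print(f"P(success) = {p_success:.3f}")  # 0.378 with these placeholders
    print(f"odds of success = 1:{(1 - p_success) / p_success:.1f}")  # 1:1.6

With more steps, or more pessimistic numbers at any step, the product falls quickly, which is how estimates as far apart as 1:4 and 1:435 can come out of the same structure.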
The questions were:
To see people's detailed responses, have a look at the googledoc, but the bottom-line numbers were:
(These are all rounded, but one of the two formats should have enough resolution for each person.)
The most significant way my estimates differed from the others' turned out to be on "the current cryonics process is insufficient to preserve everything". On that question alone we have:
My estimate for this used to be more optimistic, but it was brought down significantly by reading this LessWrong comment:
In the responses to their comment they go into more detail.
Should I be giving this information this much weight? "many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted" seems critical.
Other questions on which I was substantially more pessimistic than others were "all cryonics companies go out of business", "the technology is never developed to extract the information", "no one is interested in your brain's information", and "it is too expensive to extract your brain's information".
I also posted this on my blog.
[1] Specifically, each question asks for "the chance that X happens and this keeps you from being revived, assuming that all of the previous steps succeeded". So if both A and B would keep you from being successfully revived, and I ask them in that order, but you think they're basically the same question, then basically only A gets a probability while B gets 0 or close to it (because B is technically "B given not-A").
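A toy illustration of that conditioning, with a hypothetical 0.3: if you think A and B are really the same failure, answering ~0 for "B given not-A" keeps that failure from being counted twice:

    # Hypothetical numbers illustrating the conditioning in [1].
    p_A = 0.3              # P(A happens and blocks revival)
    p_B_given_not_A = 0.0  # once A is ruled out, B adds nothing new

    # Chaining the conditional probabilities counts the shared failure once:
    p_fail = 1 - (1 - p_A) * (1 - p_B_given_not_A)
    print(p_fail)  # 0.3, not the double-counted 1 - (1 - 0.3)**2 = 0.51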
[2] For some reason I was writing ".000000001" when people said "impossible". For the purposes of this model '0' is fine, and that's what I put on the googledoc.