...it would be really nice if someone had bothered to actually check statistics on how many car failures were actually due to each of the possible causes.
Is subadditivity a one-way ratchet such that we can reliably infer that people are wrong to be more optimistic about cryonics after seeing fewer failure steps?
This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don't know what level of granularity would've led mechanics to be accurate, and furthermore, the main way to produce accuracy would've been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.
If I were trying to use this effect for a Grey Arts explanation (conveying a better image of what I honestly believe to be reality, without any false statements or omissions, but using explanatory techniques that a Dark Arts practitioner could manipulate to make people believe something else instead, e.g., writing a story as a way of conveying an idea), I would try to diagram the cryonics possibilities as a tree where I believed the branches at a given level, and the leaf nodes, all had roughly equal probability. Just showing the tree would then recruit the equal-leaf-size effect to get the audience to concretely represent this probability estimate.
There are a lot of steps that all need to go right for cryonics to work. People who have gone through the potential problems, assigning probabilities to each, have come up with odds of success between 1:4 and 1:435. About a year ago I went through and collected estimates, finding other people's and making my own, and I've been maintaining these in a googledoc.
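For concreteness, here is a small sketch of how odds like the ones quoted above translate into probabilities, reading "1:4" as one success for every four failures (my reading of the convention, not something spelled out in the post):

```python
# Convert odds of success (m successes : n failures) to a probability.
def odds_to_probability(m, n):
    return m / (m + n)

print(odds_to_probability(1, 4))    # 0.2
print(odds_to_probability(1, 435))  # ~0.0023
```

So the collected estimates span roughly a 0.2% to 20% chance of success.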
Yesterday, on the bus back from the NYC mega-meetup with a group of people from the Cambridge LessWrong meetup, I got more people to give estimates for these probabilities. We started with my potential problems, and I explained the model and how independence works in it [1]. For each question everyone decided on their own answer, and then we went around and shared our answers (to reduce anchoring). Because some people are still going to adjust their answers based on what others say, I tried to randomize the order in which I asked people for their estimates. My notes are here. [2]
The questions were:
To see people's detailed responses, have a look at the googledoc; the bottom-line numbers were:
(These are all rounded, but one of the two should have enough resolution for each person.)
The most significant way my estimates differed from others' turned out to be on "the current cryonics process is insufficient to preserve everything". On that question alone we have:
My estimate for this used to be more positive, but it was significantly brought down by reading this lesswrong comment:
In the responses to their comment they go into more detail.
Should I be giving this information this much weight? "many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted" seems critical.
Other questions on which I was substantially more pessimistic than others were "all cryonics companies go out of business", "the technology is never developed to extract the information", "no one is interested in your brain's information", and "it is too expensive to extract your brain's information".
I also posted this on my blog.
[1] Specifically, each question is asking you "the chance that X happens and this keeps you from being revived, assuming that all of the previous steps succeeded". So if both A and B would keep you from being successfully revived, and I ask them in that order, but you think they're basically the same question, then only A gets a probability while B gets 0 or close to it (because B is technically "B given not-A").
[2] For some reason I was writing ".000000001" when people said "impossible". For the purposes of this model '0' is fine, and that's what I put on the googledoc.
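The model in footnote [1] can be sketched in a few lines of code: each step's number is a conditional failure probability, given that every earlier step succeeded, so the overall chance of success is just the product of the per-step success chances. The step names and numbers below are made up for illustration, not anyone's actual estimates:

```python
# Each entry: the chance this step fails *given that all earlier steps
# succeeded* (the conditional-probability convention from footnote [1]).
# These particular numbers are invented for the example.
conditional_failure = {
    "preservation process insufficient": 0.30,
    "cryonics company goes out of business": 0.10,
    "extraction technology never developed": 0.20,
}

p_success = 1.0
for step, p_fail in conditional_failure.items():
    p_success *= 1 - p_fail

print(round(p_success, 3))  # 0.504
```

This is also why double-counting matters: if A and B are "basically the same question" and you give both a large failure probability, the product penalizes the same failure mode twice.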