...A fault tree showing all the reasons why a car might not start was shown to several groups of experienced mechanics. The tree had seven major branches--insufficient battery charge, defective starting system, defective ignition system, defective fuel system, other engine problems, mischievous acts or vandalism, and all other problems--and a number of subcategories under each branch. One group was shown the full tree and asked to imagine 100 cases in which a car won't start. Members of this group were then asked to estimate how many of the 100 cases were attributable to each of the seven major branches of the tree. A second group of mechanics was shown only an incomplete version of the tree: three major branches were omitted in order to test how sensitive the test subjects were to what was left out. If the mechanics' judgment had been fully sensitive to the missing information, then the number of cases of failure that would normally be attributed to the omitted branches should have been added to the "Other Problems" category. In practice, however, the "Other Problems" category was increased only half as much as it should have been. This indicated that the mechanics failed to fully recognize, and adjust for, what was missing from the tree.
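To make the "half as much as it should have been" finding concrete, here is a minimal sketch with hypothetical branch weights (the study's actual numbers are not given here):

```python
# A sketch, with made-up branch weights (the study's real numbers differ),
# of what fully sensitive judgment would look like when branches are pruned.
full_tree = {
    "battery": 30, "starting system": 20, "ignition system": 15,
    "fuel system": 15, "other engine problems": 10,
    "vandalism": 5, "other problems": 5,
}  # imagined cases out of 100 attributed to each branch

omitted = ["ignition system", "fuel system", "vandalism"]  # hypothetical choice

# Fully sensitive judges would move the omitted branches' mass into "other":
omitted_mass = sum(full_tree[b] for b in omitted)
expected_other = full_tree["other problems"] + omitted_mass
print(expected_other)   # 40

# The study found roughly half that increase actually showed up:
observed_other = full_tree["other problems"] + omitted_mass / 2
print(observed_other)   # 22.5
```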
It would have been interesting if they had done a third group and added spurious categories (probably wouldn't work with experienced mechanics) and/or broken legitimate categories down into many more subcategories than necessary. What would that have done to the "other problems" category?
This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don't know what level of granularity would've led mechanics to be accurate, and furthermore, the main way to produce accuracy would've been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.
My suspicion is that conjunctive and disjunctive breakdowns exhibit different behavior which can be manipulated to increase or decrease a naive probability estimate:
in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can manipulate a naive estimate.
To some extent this is appropriate, since people are usually overconfident, but I suspect at some granularity the conjunctions start getting unfairly negative: imagine if people were unwilling to give any step >99% odds; then you can break a process down into a hundred fine steps and their elicited estimates will multiply out to at most 0.99^100, about 37%, no matter how reliable the process actually is.
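A one-line illustration of that cap, assuming elicited per-step odds are never allowed above 99%:

```python
# If no single step is granted more than 99% odds, a hundred-step
# conjunctive breakdown caps the naive overall estimate at 0.99**100,
# no matter how reliable the process actually is.
print(0.99 ** 100)  # ~0.366
```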
Science has moved away from considering memories to be simply long-term structural changes in the brain to seeing memories as the products of "continuous enzymatic activity" (Sacktor, 2007). Enzyme activity ceases after death, which could lead to memory destruction.
For instance, in a slightly unnerving study, Sacktor and colleagues taught mice to avoid the taste of saccharin, then injected a PKMzeta-blocking drug called ZIP into their insular cortex. PKMzeta, an enzyme, has been associated with increasing the number of receptors at synapses that fire together during memory recollection. Within hours, the mice forgot that saccharin made them nauseous and began guzzling it again. It seems blocking the activity of PKMzeta destroys memories. Since PKMzeta activity (like all enzyme activity) is also blocked following death, a possible extension of this research is that the brain automatically "forgets" everything after death, so a simulation of your brain after death would not be very similar to you.
Accessing long-term memory appears to be a reconstructive process, which additionally results in accessed memories becoming fragile again; I believe that is what is occurring here. The learned aversion is reconstructed and is then susceptible to damage, much more so than other, non-recently-accessed long-term memories. Consider that the drug didn't destroy ALL of the mice's (fear?) memories, only the one that was most recently accessed.
So, no worries for cryonics!
I think Robin's reply to that comment (which he left there last week) got to the heart of the issue:
No doubt you can identify particular local info that is causally effective in changing local states, and that is lost or destroyed in cryonics. The key question is the redundancy of this info with other info elsewhere. If there is lots of redundancy, then we only need one place where it is held to be preserved. Your comments here have not spoken to this key issue.
It may be that what the brain uses to store some vital information is utterly destroyed by cryonics, but there is some other feature of the arrangement of atoms in the brain, possibly some side effect that has no function in the living brain, that is sufficiently correlated with the information we care about that we can reverse-engineer what we need from it. This is the "hard drive" argument for cryonics (I got it from the Sequences, but I suspect it didn't originate there): it's not that hard (I think, though I do not know much about this topic) to erase data from a hard drive so that the normal functionality of the hard drive can't bring it back, but it's rather difficult to erase it so thoroughly that someone with forensic tools and direct physical access to the drive couldn't recover it.
There's a possibly-important probability missing from your analysis.
For it to be worth paying for cryonics, it has to (1) work and (2) not be redundant. That is: revival and repair have to become feasible and not too expensive before your cryonics company goes bust, disappears in a collapse of civilization, etc. -- but if that happens within your lifetime then you needn't have bothered with cryonics in the first place.
So the success condition is: huge technical advances, quite soon, but not too soon.
Whether this matters depends on (a) whether it's likely that if revival and repair become viable at all they'll do so in the next few decades, and (b) whether, in that scenario, the outcome is so glorious that you simply won't care that you poured a pile of money into cryonics that you could have spent on books, or sex&drugs&rock&roll, or whatever.
The cost of life insurance scales with your risk of death in the covered period: if cryonics is rendered redundant then you can stop paying for the life insurance (and any cryonics membership dues) thereafter.
Redundancy would be a significant worry if, counterfactually, you had to pay a non-refundable lump sum in advance.
To me this just looks like a bias-manipulating "unpacking" trick - as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up. I could equally make cryonics success sound almost certain by lumping all the failure categories together into one or two big things to be probability-assigned, and unpacking all the disjunctive paths to success into finer and finer subcategories. Which I don't do, because I don't lie.
Also, yon neuroscientist does not understand the information-theoretic criterion of death.
There's another effect of "unpacking", which is that it gets us around the conjunction/planning fallacy. Minimally, I would think that unpacking both the paths to failure and the paths to success is better than unpacking neither.
Also, yon neuroscientist does not understand the information-theoretic criterion of death.
They appear to; they are questioning whether current cryonics practice preserves said information at all - they are saying it will destroy it.
No they're not, they're describing functional damage and saying why it would be hard to repair in situ, not talking about what you can and can't information-theoretically infer about the original brain from the post-vitrification position of molecules. In other words, the argument does not have the form of, "These two cognitively distinct states will map to molecularly indistinguishable end states". I'm not saying you have to use that exact phrasing but it's what the correct version of the argument is necessarily about, since (modus tollens) anything which defeats that conclusion in real life causes cryonics to work in real life.
Are you referring to the neuroscientist's discussion linked in the OP? This comment seems quite clear regarding the information-theoretic consequences:
Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. (...) (information simply isn't there to be read, regardless of how advanced the reader may be).
In our lingo: the state transformation is a non-injective function (=loss of information).
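A toy illustration of what a non-injective state transformation means for a would-be reader (generic state names, not a model of real membrane chemistry):

```python
# Two distinct starting states map to the same end state, so the
# transformation is non-injective: given only the end state, no reader,
# however advanced, can tell which starting state produced it.
transform = {"state_A": "denatured", "state_B": "denatured"}

end_state = transform["state_A"]
preimage = [s for s, e in transform.items() if e == end_state]
print(preimage)  # ['state_A', 'state_B'] -- ambiguous, information lost
```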
However, the import of the distance between a "best guess" facsimile and the original is hard to evaluate. Would it be on the order of the difference between before and after a night's sleep? Before and after a TBI injury (yay pleonasm)?
Indistinguishable from your current self in a hypothetical Turing test variant, with you squaring off against such a carbon copy?
Speculatively, I'd guess all that damage doesn't play that big a role. Disrupted membranes should still yield the locations of the synapses with high spatial fidelity, and given how constantly we interfere with neurotransmitters, the exact concentration in each synapse does not seem identity-constituting.
Otherwise, we'd incur information-theoretic death of our previous selves each time we take, e.g., a neurotransmitter-manipulating drug such as an SSRI. Which we do, in a way, just not in a relevant way.
These appear to be saying just what I thought they were saying - current cryonics practice destroys the information - and, given the above, I don't see sufficient evidence to assume your reading.
At best you can get the impression that kalla is in principle aware of the information-theoretic criterion of death but in practice just conflating it with functional damage and knowledge of how hard it would be to repair in situ. What I observe is a domain expert (predictably, and typically) overestimating the relevance of their expertise to a situation outside what they are actually trained and proficient in. Most salient points:
Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.
Irretrievably? I'd be surprised if that word means what he thinks it means. In particular, for him to have a correct understanding of the term would require abandoning notions of what his field currently considers possible and doing advanced study in probability theory and physics. (To be credible in this claim he'd essentially have to demonstrate that he isn't thinking of "irretrievable" as merely "irretrievable by any method his field currently knows about".)
What wedrifid said. Everything the guy says is about functional damage. Talking about the impossibility of repairing proteins in-place even more says that this is somebody thinking about functional damage. Throwing in talk about "information destruction" but not saying anything about many-to-one mappings just tells me that this is somebody who confuses retrievable function with distinguishable states. The person very clearly did not get what the point was, and this being the case, I see no reason to try and read his judgments as being judgments about the point.
Wildly off base. The key question is whether, on a molecular level, more than one original person has been mapped to the same frozen brain; if not, we can expect sufficiently advanced technology generally, and the systems described in Drexler's highly specific Nanosystems book particularly, to be sufficient albeit not necessary (brain scanning might work too). There are also a lot of clueless objections along the lines of "But they won't just spring back to life when you warm them up" which don't bear on the key question one way or another. Real debate on this subject comes from people who understand the concept of information loss, offering neurological scenarios in which information loss might occur; and real cryonicists try to develop still-better suspension technology in order to avert the remaining probability mass of such scenarios. However, for information loss to actually occur, given current vitrification technology, which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative. It's not a question of "failing to disprove" cryonics; the failure scenarios themselves require positing new, presently unknown neuroscience.
This is, of course, not anywhere in anything that kalla724 or I said.
If you complain about how it would be hard to in-situ repair denatured proteins - instead of talking about how two dissimilar starting synapses would be mapped to the same post-vitrification synapse because after denaturing it's physically impossible to tell if the starting protein was in conformation X or conformation Y - then you're complaining about the difficulty of repairing functional damage, i.e., the brain won't work after you switch it back on, which is completely missing the point.
If neuroscience says conformation X vs. conformation Y makes a large difference to long-term spiking input/output, which current neuroscience holds to be the primary bearer of long-term brain information, and you can show that denaturing maps X and Y to identical end proteins, then the ball has legitimately been hit back into the court of cryonics, because although it's entirely possible that the same information redundantly appears elsewhere and the brain as a whole still identifies a single person and their personality and memories, telling us that cryonics worked would now tell us a new fact of neuroscience we didn't previously know.
You're missing something. Any one person gets mapped to a very wide spread of possible piles of ash. These spreads overlap a lot between different people. Any one pile of ash could potentially have been generated by an exponentially vast space of persons.
"Warm 'em up and see if they spring back to life" was a possible revival method that cryonicists already didn't believe in, so pointing out its impossibility should not affect probability estimates relative to what cryonicists have already taken into account.
as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up
The idea that when people disagree over complex topics they should break their disagreement down is one I've learned in part from Robin Hanson, and in fact he applies it to cryonics.
While Robin has fewer categories, if you look at the detailed probabilities that people gave we could throw out most of their answers without changing their final numbers; people were good about saying "that seems very unlikely" and giving near-zero probabilities. Most of the effect on the total comes from a few questions where people were saying "oh, that seems potentially serious". If I do this more I'll fold many of the less likely questions into more likely ones (mostly so I get a shorter survey) but I don't think that will change the outcome much.
I would expect unpacking to work for two reasons: to help avoid the planning fallacy and to let us see (and focus on) the individual steps people most disagree on.
unpacking all the disjunctive paths to success into finer and finer subcategories
As far as I can tell there's really only one...
Upvoted the post. Worthy thing to discuss.
A reply to kalla724 that you did not mention is here: http://lesswrong.com/lw/d4a/brief_response_to_kalla724_on_preserving_personal/
Kalla724 claims that it is not possible to upload a C. elegans with particular memories and/or behaviors. I think that this is a testable claim and should shed light on kalla724's views on preserving personal identity with vitrification. I also think it is likely wrong.
It would be very interesting to see cryonics for very simple brains of other species. This could determine or narrow down the range of probability for several factors.
There is a helpful web page on the probability that cryonics will work.
There are also some useful facts at the Alcor Scientists' Cryonics FAQ.
The neuroscientist might wish to pay attention to the answer to "Q: Can a brain stop working without losing information?" The referenced article by Mayford, Siegelbaum, and Kandel should be particularly helpful.
What is the chance that some other means are found of simulating your personality without physical access to your brain (preserved or otherwise)?
Would you like to consider the possibility of cryonic preservation / plastination becoming redundant in your estimates?
Not all of what makes you you is encoded in the physical state of the brain (or whatever you would have preserved).
This is probably true, isn't it? Most of what makes you, you, is in your brain, but another large part of it is mediated by hormones going to and from the rest of your body... I think. Yet most LWers who are into cryo go the 'neuro' route. Is there some reason why this consideration is not nearly as big a deal as I think? Is the idea that making a 'spare' human body is cheap?
I just ran the numbers assuming I pay US $3000/year (I forget Hoffman's actual figure) for 33 years (mind you, I think deathtimer.com is too pessimistic there), discounted at 3%/year (the average annual inflation rate since 1913 equals 3.24%). The EPA set the value of a human life at $9.1 million two years ago. Perhaps I'm rigging the numbers by updating this for (actual) inflation and only discounting it by the 1/1500 probability. But I first estimated the value of my own life at $20 million, and I don't think I'd actually kill myself in return for (say) an SI donation that size.
The 'official' numbers would appear to make cryonics under-priced by $1403 in present value. (Edited to use official figures.)
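A sketch of the present-value arithmetic (inputs taken from the comment above; the commenter's full calculation isn't fully specified, so this shows the structure of the computation rather than reproducing their $1403 figure):

```python
# Present value of paying $3000/year for 33 years at a 3%/year discount
# rate, and the success probability at which that cost breaks even
# against the EPA value-of-life figure. All inputs are assumptions
# stated in the comment above.
payment, years, discount = 3000, 33, 0.03
pv_cost = sum(payment / (1 + discount) ** t for t in range(1, years + 1))
print(round(pv_cost))   # ~62297: present value of the payment stream

value_of_life = 9.1e6   # EPA figure cited above (not inflation-adjusted here)
breakeven_p = pv_cost / value_of_life
print(f"break-even success probability: ~1 in {round(1 / breakeven_p)}")  # ~1 in 146
```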
"Brain degradation after death" is the key point in this list that I'd be interested in learning about. I'm not sure if it's proper to ask this in a comment now or should I be studying diligently around the issue, but I think it's also an interesting subject so excuse me.
The cryonics process is often compared, by analogy, to the event of a hard drive being broken and the data still being retrievable, but brains and hard drives store information in very different ways, and this problem always strikes me as very unnerving. Without going into too much detail...
Question: Why do people here seem to only focus on the technical aspects of cryonics, and assume "future society will revive you-who-are-frozen" as a given? I can't see much reason to do this, other than as a historical curiosity.
There are a lot of steps that all need to go correctly for cryonics to work. People who had gone through the potential problems, assigning probabilities, had come up with odds of success between 1:4 and 1:435. About a year ago I went through and collected estimates, finding other people's and making my own. I've been maintaining these in a googledoc.
Yesterday, on the bus back from the NYC mega-meetup with a group of people from the Cambridge LessWrong meetup, I got more people to give estimates for these probabilities. We started with my potential problems, and I explained the model and how independence works in it [1]. For each question everyone decided on their own answer, and then we went around and shared our answers (to reduce anchoring). Because some people will still adjust based on others' answers, I tried to randomize the order in which I asked people for their estimates. My notes are here. [2]
The questions were:
To see people's detailed responses have a look at the googledoc, but bottom line numbers were:
(These are all rounded, but one of the two should have enough resolution for each person.)
The most significant way my estimate differs from others turned out to be for "the current cryonics process is insufficient to preserve everything". On that question alone we have:
My estimate for this used to be more positive, but it was significantly brought down by reading this lesswrong comment:
In the responses to their comment they go into more detail.
Should I be giving this information this much weight? "many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted" seems critical.
Other questions on which I was substantially more pessimistic than others were "all cryonics companies go out of business", "the technology is never developed to extract the information", "no one is interested in your brain's information", and "it is too expensive to extract your brain's information".
I also posted this on my blog.
[1] Specifically, each question is asking you "the chance that X happens and this keeps you from being revived, assuming that all of the previous steps succeeded". So if both A and B would keep you from being successfully revived, and I ask them in that order, but you think they're basically the same question, then basically only A gets a probability while B gets 0 or close to it (because B is technically "B given not-A").
[2] For some reason I was writing ".000000001" when people said "impossible". For the purposes of this model '0' is fine, and that's what I put on the googledoc.
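For anyone who wants to replicate the model in footnote [1], a minimal sketch: each elicited number is the conditional probability that a step fails given that every earlier step succeeded, so overall success is the product of the per-step survival probabilities. The step names below echo questions mentioned in the post, but the failure probabilities are made up for illustration, not the survey's results:

```python
# Each entry is P(step fails | all earlier steps succeeded), as in
# footnote [1]. Overall success is the product of the (1 - p) terms.
# These values are invented for illustration only.
conditional_failure = {
    "current process loses crucial information": 0.50,
    "all cryonics companies go out of business": 0.30,
    "the extraction technology is never developed": 0.40,
    "no one is interested in your brain's information": 0.20,
    "extraction is too expensive": 0.10,
}

p_success = 1.0
for step, p_fail in conditional_failure.items():
    p_success *= 1 - p_fail

print(f"overall odds of success: 1 in {round(1 / p_success)}")  # ~1 in 7 here
```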