Cryonics: Can I Take Door No. 3?
If you don't believe in an afterlife, then it seems you currently have two choices: cryonics or permanent death. Now, I don't believe that cryonics is pseudoscience, but the odds it offers are still pretty poor (Robin Hanson uses an estimate of 5% here). Unfortunately, the alternative offers a chance of exactly zero. I see five main concerns with current cryonics technology:
- There is no proven revival technology, thus no estimate of costs
- Potential damage done during vitrification which must be overcome
- Because it cannot be legally done before death, potential decay between legal death and vitrification
- Requires active maintenance at very low temperature
- No guarantee that future societies will be willing to revive
So I wonder if we can do better.
I recall reading of juvenile forms of amphibians in desert environments that could survive decades of drought in a dormant form, reviving when water returned. One specimen had sat on a shelf in a research office for over a century (in Arizona, if I recall correctly) and was successfully revived. Note: no particular efforts were made to maintain this specimen; the dry local climate was sufficient. It was suggested at the time that this could be an alternative method of preserving organs. Now the advantages of this approach (which I refer to flippantly as "dryonics") are:
- Proven, inexpensive revival technology
- Apparently the process does not cause damage itself
- A proven revival technique might overcome the legal obstacles to applying the procedure before legal death
- Requires passive maintenance at low humidity (deserts would be ideal)
- Presumably lower cost makes future revival more likely (still no guarantee, but that is a post in itself)
There is one big disadvantage of this approach, of course: no one knows how to do it (it's not entirely clear how the juvenile amphibians do it) or even if it would be possible in larger, more complex organisms. And, so far as I know, no one is working on it. But it would seem to offer a much better prospect than our current options, so I would suggest it worth investigating.
I am not a biologist, and I'm not sure where one would start developing such a technology. I frankly admit that I am sharing this in the hope that someone who does have an idea will run with it. If anyone knows of any work on these lines, or has an idea how to proceed, please send a comment or email. Or even if you have another alternative. Because right now, I don't consider our prospects good.
[Note: I am going on memory in this post; I really wish I could provide references, but I cannot find much activity along these lines. I'm not even sure what to call it: mummification? Probably too scary. Dehydration? Anyway, feel free to add suggestions or link references.]
Comments (111)
Unfortunately, inserting complex novel gene sequences into every cell of an organism in a way that doesn't just cause massive, global cancer is a very hard problem. Making those sequences do what you want them to do, and not, say, kill the target organism, is even harder. Especially since human anatomy isn't well suited to the task, and would need to be modified. By the time we have the technology to do something like that, death is probably already a solved problem.
That said, I've used the premise in a science fiction book before. The main characters were members of Homo Sapiens Durabilis, and had genomes modified with tardigrade genetics. They could be pumped full of hydrogen sulfide, and reversibly dehydrated to death for long-term space travel, or during a medical emergency.
By the way, what was the name of the book?
"Morse Code". But it wasn't working thematically, and I abandoned the project. I've written a few other stories in the same universe.
Why is global cancer the primary risk rather than not being viable at all?
The traditional way of inserting a gene into the genome is to use a retrovirus with its DNA replaced. Most such viruses (at least, those that have been used) integrate randomly, meaning that every time a new cell is modified there is a small but nonzero chance of knocking out a gene that is important for controlling cancer. On a cellular level, the most likely outcome of this is cell death, as the rest of the cell's anticancer mechanisms shut the cell down. But of course, this doesn't work every time.
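To make concrete why a "small but nonzero chance" per modified cell is still alarming at whole-organism scale, here is a minimal back-of-the-envelope sketch; the per-cell risk figure is a hypothetical placeholder, not a measured value.

```python
# Toy illustration (not a measured estimate): even a tiny per-cell chance of a
# harmful random integration compounds badly when trillions of cells are modified.
per_cell_risk = 1e-9       # hypothetical chance one integration both disrupts a
                           # cancer-control gene and evades the cell's shutdown mechanisms
cells_modified = 3.7e13    # rough order of magnitude for the number of cells in a human body

# Probability that at least one modified cell ends up dangerously transformed:
p_at_least_one = 1 - (1 - per_cell_risk) ** cells_modified
print(f"P(at least one dangerous integration) ~ {p_at_least_one:.3f}")   # ~ 1.0 with these numbers
```

With these placeholder numbers the whole-body risk is effectively certain, which is why per-cell safety has to be extraordinarily good before whole-organism gene therapy of this kind becomes thinkable.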
There are site-specific viruses (i.e. ones that always integrate at the same, safe genomic location) currently being developed, and it's hoped that these will solve the problem.
However, there's actually another related problem. If you want to make major changes to the cell (like reprogramming it into a stem cell), the cell's anticancer mechanisms will detect that as well, so in order to make those changes you have to at least temporarily shut off some of those mechanisms. So there is a risk for cancer in that as well.
On the topic of this thread: generally, the ability to survive specific extreme environments (especially ones that affect everything in the cell, such as changes in water content or temperature) is a specialized adaptation. I would not be surprised if there are global differences in the genomes of these species, e.g. most proteins are much more hydrophilic, or there is a system of specialized chaperones (= proteins that refold other proteins or help prevent them from misfolding) plus the adaptations in proteins that allow the chaperones to act on them, and further systems to repair damage the chaperones don't prevent. It is unlikely that only a few genes would be involved, and unless a case can be made for evolutionary conservation of the adapted genes in humans, we wouldn't have most of them (in fact, any genome-wide changes would mean that we would have to adapt our own proteins in new ways, precisely because we don't share all of them with the species in question).

Cold temperature is actually a special case here, because it slows down everything and thus reduces the amount of "equivalent normal-temperature time" that has passed. It's still difficult (though of course none of this is impossible), but I don't think it's likely that small-scale gene therapy would be sufficient.
I'm most familiar with gene therapy issues causing cancer, but that might be an availability bias - in those studies where gene therapy simply kills the relevant cells, I'm sure very little is published.
Cancer sounds like not-viable to me :P
I meant not even being viable enough to get cancer.
Would it require gene therapy? Could there not be a more direct method of intervention to achieve the result?
The physical structure of the cells has to change. You also don't see this sort of behavior in large organisms, so there may be serious engineering challenges with dehydration mechanisms in large animals. You're essentially going to need powerful, global, highly specific gene therapy at the bare minimum. It might not be possible without engineering a new organism from scratch.
That's a fair question. I was assuming that creatures which can survive full dehydration are so different at the cellular level that nothing less than genetic redesign would do the job, but I'm guessing.
People die as the result of very moderate dehydration, so considerable change of some sort would be required.
It's plausible that if dehydration and revival are possible for people, then the methods wouldn't be much like what's evolved-- people don't fly the same way birds do.
I think part of the problem, too, is that animals who can survive full dehydration, or being thoroughly frozen within and without, are small. We don't often realize just how big humans are, even for land-dwelling tetrapods. We're very large, very active, and very resource-intensive -- certainly there are bigger land animals about today and in our recent past, and very much bigger ones in the fossil record (brachiosaurs, anyone?), but even then we still qualify as megafauna.
The consequences of that size, especially in light of our activity level, are significant. Human physiology is highly adapted to dissipating heat (and our water intake is a big part of that), yet we still routinely have trouble doing it fast enough to avoid ill effects, forcing us to adapt culturally and individually to the problem. We have to keep certain quantities (core temperature, water content) within fairly narrow ranges; our physiology is critically dependent on them.
So, yeah -- if people can be put in suspended animation of some sort (regardless of mechanism), it's gonna have to take our particular case into account. You can flash-freeze a mouse, thaw it, and get biological activity afterwards (they don't exactly go on to live long and prosperous mousey lives, but they do come out the other side for a bit). A mouse is tiny; you can't extend that to a human without different physics becoming relevant. You can dehydrate a tardigrade quickly (just leave it in a low-moisture environment until it loses enough water) and then leave it sitting until it gets doused again; you can't do that to a human, because we have a lot of water to lose, our bodies fight to hang on to it, our health declines rapidly as we lose even modest amounts, and we proceed straight to death once quantities are insufficient.
I didn't find any amphibians which survived complete dehydration, but I found an insect.
The useful word is anhydrobiosis-- but no amphibians are mentioned.
Also, http://en.wikipedia.org/wiki/Tardigrade - who survive vacuum of space.
I bow to your superior Google Fu. It may have been invertebrates rather than amphibians; as I said, I was going from memory. (I can already improve the post!)
I had really hoped to promote discussion on the concept for human preservation. I had looked through the cryonics links and hadn't noticed any discussion around this concept. In fact, I have never seen it suggested as an alternative, but thought this community would be a great place to kick it around. Thanks for your response.
Dehydration seems like a cool idea in the abstract, but I don't know nearly enough biology to say whether there's any way to get from here to there.
The jargon term is "cryptobiosis", and revival is "anabiosis".
The main problem with dehydration as I understand it is similar to that of cryopreservation, but worse: dehydration causes cells to shrink which damages organs. It also concentrates cellular components (salts, proteins, etc.) to the point where they start interacting with each other harmfully.
That said, it's an interesting starting point. Mike Darwin has proposed replacing cellular water with some kind of solvent carrying monomers that form a hard polymer under controlled conditions, possibly similar to amber. Once it polymerizes and forms a glass, the cell's components would be unable to interact with each other. (The organism would be cooled to -20°C using M22 for cryoprotection beforehand to minimize metabolic damage.)
The hard thing is getting a high concentration of anything into cells without rupturing them. Organisms like tardigrades that achieve cryptobiosis (= anhydrobiosis) manufacture their own polymers, such as trehalose. Plants do the same with sucrose. Cells have a special transport protein that yanks glucose (the most common sugar monomer) in through the lipid membrane very quickly relative to its natural diffusion rate. Note that this is useful for cryoprotection, not just against dehydration.
Lately I've been wondering if Foldit could be used to design proteins that pull other things into cells faster. Could such an enzyme be programmed to embed itself in the cell membrane? Perhaps something more like a virus could do this. Or perhaps a custom protein could turn glucose into a more suitable polymer under the right conditions.
Believing in an afterlife doesn't grant you one more option. This is a statement about ways of mitigating or avoiding death, and beliefs are not part of that subject matter. An improved version of the statement would say, "If there is no afterlife, then...". In this form, it's easier to notice that since it's known with great certainty that there is no afterlife, the hypothetical isn't worth mentioning.
I'm convinced that the probability of experiencing any kind of afterlife in this particular universe is extremely small. However, some versions of us are probably now living in simulations, and it is not inconceivable that some portion of them will be allowed to live "outside" their simulations after their "deaths". Since one cannot feel one's own nonexistence, I totally expect to experience "afterlife" some day.
The word "expectation" refers to probability. When probability is low, as in tossing a coin 1000 times and getting "heads" each time, we say that the event is "not expected", even though it's possible. Similarly, afterlife is strictly speaking possible, but it's not expected in the sense that it only holds insignificant probability. With its low probability, it doesn't significantly contribute to expected utility, so for decision making purposes it's an irrelevant hypothetical.
Well, this sounds right, but seems to indicate some problem with decision theory. If a cat has to endure 10 rounds of Schrödinger's experiments with 1/2 probability of death in each round, there should be some sane way for the cat to express its honest expectation to observe itself alive in the end.
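A minimal sketch of the tension, assuming ten independent fifty-fifty rounds: the unconditional survival probability is tiny, while the probability conditional on there being a surviving cat to do any observing is trivially 1.

```python
# Ten independent rounds, each with a 1/2 chance of death.
rounds = 10
p_survive_all = 0.5 ** rounds
print(p_survive_all)   # 0.0009765625, i.e. roughly 0.1%

# For decisions made before the experiment, ~0.1% is the relevant number.
# Conditional on the cat still being around afterwards to observe anything,
# the probability that it observes itself alive is 1 -- the "honest
# expectation" the comment above asks about, and why the two notions pull apart.
```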
This kind of expectation is useful for planning actions that the surviving agent would perform, and indeed if the survival takes place, the updated probability (given the additional information that the agent did survive) of that hypothetical would no longer be low. But it's not useful for planning actions in the context where the probability of survival is still too low to matter. Furthermore, if the probability of survival is extremely low, even planning actions for that eventuality or considering most related questions is an incorrect use of one's time. So if we are discussing a decision that takes place before a significant risk, the sense of expectation that refers to the hypothetical of survival is misleading.
See also this post: Preference For (Many) Future Worlds.
I just want to throw this in here because it seems a good place: to me it seems that you would want yourself to reason as if only worlds where you survive count, but others would want you to reason as if every world where they survive counts, so the game-theoretic expected outcome is the one where you care about worlds in proportion to people in them with whom you might end up wanting to interact. I think this matches our intuitions reasonably well.
Except for the doomsday device part, but I think evolution can be excused for not adequately preparing us for that one.
PS: there is a wonderfully pithy way of stating quantum immortality in LW terms: "You don't believe in Quantum Immortality? But once your survival becomes increasingly unlikely, all valid future versions of you will come to believe in it. And as we all know, if you know you will be convinced of something, you might as well believe it now..."
I think you may be treating your continuation as a binary affair (you either exist or don't exist, you either experience or don't experience) as if "you" (your mind) were an ontologically simple entity.
Let's say that in the vast majority of universes you "die" from an external perspective. This means that from an internal perspective, in the vast majority of universe you'll experience the degradation of your mental circuitry -- whether said degradation lasts ten years or one millisecond, you will experience said degradation up to the point you will no longer be able to experience anything.
So let's say that at some point your mind is at a state where you're still sensing experiences, but don't form new memories, nor hold any old memories; and because you don't even have much of a short-term memory, your thinking doesn't get more complicated than "Fuzzy warmth. Nice" or perhaps "Pain. Hurts!".
At this point, this experience is all you effectively are -- it's not as if this circuitry will be metaphysically connected to a single specific set of memories, or a single specific personality.
Perhaps at this point you can argue that you totally expect this mental pattern to be reattached to some set of memories or some personality outside the Matrix, and therefore that it will experience an afterlife -- in a sense. But not necessarily an afterlife with memories or personality that have anything to do with your present memories or personality, right?
Quantum Immortality doesn't exist. At best one can hope for Quantum Reincarnation -- and even that requires certain unverified assumptions...
There should be some universes in which the simulators will perform a controlled procedure specifically designed for saving me. This includes going to all the trouble of reattaching what's left of me to all my best parts and memories retrieved from an adequate backup.
Of course, it is possible that the simulators will attach some completely arbitrary memories to my poor degraded personality. This nonsensical act will surely happen in some universes, but I do not expect to perceive myself as existing in these cases.
It seems you are right that gradual degradation is a serious problem with QI-based survival in non-simulated universes (unless we move to a more reliable substrate, with backups and all).
True. Believing doesn't grant more options, but if you truly believe in an afterlife, then this is not a question that would concern you: you believe you have a better option. :)
If you believe in an afterlife, the question that concerns you is still whether there is an afterlife, not whether you believe in an afterlife. So you still should worry about the hypothetical of there being an afterlife, which you'd assign more probability, not about the hypothetical of you believing in an afterlife.
I think we are assigning different meanings to "believe". In my sense, a true believer has no doubt, so "whether" is no longer a question. I think we may be getting sidetracked on semantics, though.
Chemical fixation (sometimes called "plastination", although this conflates the practice with an unrelated procedure) is an in-progress technology to preserve brains at room temperature, and is being evaluated alongside cryonics by the Brain Preservation Foundation: http://www.brainpreservation.org/
It would probably be cheaper than cryonics, and would require much less long-term support - you can throw the brain in a shoebox instead of constantly maintaining it in liquid nitrogen. It still lacks a revival mechanism though - the current hope seems to be preserving enough information to get it back via slicing and scanning later.
Upvoted for clearly setting out a line of reasoning.
Points for downvoting: a vague logical jump based on surface phenomena; organising work rather than executing it.
Points for upvoting: acknowledging ignorance; clearly spelling out and owning your stance.
Thanks for the clear feedback. I can see that posting to this forum is going to be a humbling, if valuable, experience :). Any thoughts for improvement?
As many of the comments have pointed out, the viewpoint raised is not the only one. Running with the new idea from several different angles could have produced fruitful thoughts to apply within the post.
Cryonics has its details worked out, while hydronics doesn't. Thus it's somewhat likely that you are comparing the weak points of cryonics against the good points of dryonics. Hunting for a better method is all well and good, but it can make the comparison accidentally look better than it would after closer investigation. The cryonics side of the comparison is fixed, while the new-method side works with just what is apparent.
Say that I think of methods of moving through space besides rockets. I might think of dropping nuclear bombs behind the craft to improve the energy extracted per unit of mass used. This might seem fine while only thinking about pushing a craft forward. However, if I stop to think about the other implications, the situation doesn't look so rosy: there might be radioactive products left behind, there can be significant forces on nearby vessels or habitats, and it would be trivial to weaponize. These disadvantages might be overcome with some design work, but it's far from a "go faster" kind of magic button. And I don't need high technical ability to realise that those sorts of drawbacks are possible.
Dryonics would likely need some support from cell chemistry, and changing the cell chemistry of an already-living human could be somewhat messy. Even if it were adjustable, it's somewhat likely that human cells do interesting things that would conflict with such "design constraints". How much immune-system efficiency, alcohol tolerance or metabolic speed would be an acceptable price to pay for the advantage? And even if successfully dried people required less energy for upkeep, protecting them from erosion might bring the cost closer to high-tech upkeep: at room temperature the surrounding bacteria can be active, and would dried people be vulnerable to winds, sounds or earthquakes?
If we only want methods that work in principle, regardless of details, you can always plan a round trip to the stars, using the twin paradox to put yourself under the care of future doctors. The question is only whether the details of time dilation, cryonics or dryonics are doable, so skipping or being ignorant of the details doesn't help that much. Finding a new preservation mechanism mainly extends the frontier where concrete progress can be made, so before long you have to dig deeper anyway. And doing today what you could put off until tomorrow ensures you don't get stuck in the past.
I certainly didn't intend to imply that this was the only viewpoint, or even that it was necessarily better, only that it addressed some of the issues with what seemed to be the only current possibility. I agree that it would require considerable research into how to achieve it: my point is that these would be upfront costs, whereas cryonics has backloaded costs (technological as well as financial). I also did not mean that a "hydronically" preserved organism (I like your term) could be stored anywhere, simply that it is easier to establish passive storage. Egyptian mummies lasted thousands of years in their dry, desert tombs, but can decay rapidly when exposed to moister climes. Bacteria need warmth and water to be active: removing one or the other is sufficient. We already preserve food at room temperature using the same principle (salt or sugar both preserve food by dehydrating bacteria).
The fact is, we do not currently have a reliable means of arresting a human's metabolic processes (including post-mortem decay) and restoring them. We don't have the details for restoring cryonically preserved persons. "Advanced nanotech" is just a mysterious answer until we know how to do it. The intention of the post was to stimulate thought (which I think it has done). I do not believe I have to have all the answers before I can ask the questions. New ideas arise from making new connections between existing concepts, and sometimes this means concepts existing in two different minds.
Personally, I'd rather just go on existing here and now. Preservation is just a backup option, much like backing up your computer files: you'd rather not have a system crash, but if you do, you can recover. On the other hand, cryonics is our only current "backup" option, so the choice is a "no-brainer": even a slim chance is preferable to no chance.
Agreed, but I don't know where to begin digging. Which is why I threw this open to the forum.
I'm not sure I understand what you mean by that: don't put off what you can do today?
Making small, firm steps one at a time is easy to support. Taking only a single step because you don't know how to take more is very probably underapplying your knowledge. If the reasoning could be continued with basically an empty reply from the other party, thought was likely cut off very early. If one strives to take things to their logical conclusion, that's a bad thing.
If it's not clear, do understand that the post was supportable; I was just pointing out ways it could have been better. I could have communicated more clearly what kinds of sharper thinking could have gone into writing this post, or at least not detracted attention from (and needlessly lengthened) the on-topic content and the thinking options available. Instead of settling for the first step, one could say to oneself: "I need to go deeper" (cue Inception music). And you probably want to do that in the first place, instead of waiting around for a demanding reason to do it.
I have only recently started voting on what I read and explicitly stating my reason for each decision. Not everyone wants every detail rubbed in their face; when asked, I can elaborate. I might not be adept enough at rationality forums to offer a detailed analysis of what went wrong, or to help ensure such shortcomings don't happen in the future. Because of the known tendency of people not to cast themselves as the villains of their own story, as a precaution I will also mention that this is likely a newbie-newbie interaction, as discussed in the "eternal September" threads.
But I do vote and say why I vote, and I hope that this is more valuable than my explanations being misleading or confusing is detrimental. I don't know; I am experimenting to see whether it works. It could easily be that the long explanation is just noise, with the signal lying in those one-word or one-phrase descriptions.
I appreciate the feedback, and the more detailed the better. I am always looking to improve my own effectiveness, especially in communication. One of my most frustrating, and unfortunately all too common, experiences is thinking something through, coming up with what turns out to be the correct answer, and being unable to convince others. (I am not suggesting that I have the right answer in this case; in fact, the odds are that I don't.) To me, the more specific the feedback, the better. So, for example, dissecting the post, saying "this is good", "this could use more support", "this does not follow", etc., is extremely helpful (to me, anyway).
As a measure of the value of your feedback, I have upvoted your responses, because I do find them useful. So I hope that provides some good feedback for your own experimenting :)
False dichotomy: cryonics may fail (actually, it will probably fail) to revive you. Or it may succeed, and then you die anyway.
That seems like quite an optimistic estimate. Successful revival depends conjunctively on a large number of events, many of which are highly speculative (no damage from preservation, super duper nanotech) or outright implausible (cryo orgs not succumbing to organizational failure).
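To illustrate what "depends conjunctively on a large number of events" does to the numbers, here is a minimal sketch; the individual probabilities are placeholders chosen for the arithmetic, not defended estimates.

```python
# Placeholder probabilities (illustrative only) for events revival depends on.
steps = {
    "preservation captures the relevant information": 0.5,
    "the cryonics organisation survives long enough": 0.3,
    "revival technology is ever developed": 0.3,
    "someone is willing to revive you": 0.5,
}

p_revival = 1.0
for event, p in steps.items():
    p_revival *= p      # conjunctive: every event must happen

print(f"Joint probability ~ {p_revival:.4f}")   # 0.0225 with these placeholder numbers
```

Even with individually generous-looking numbers, the product drops quickly, which is the point of the conjunctive objection.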
MNT isn't strictly necessary: anabolocytes and other speculative genetically engineered cells could do the job. They are a little more plausible than Freitas's nanomedicine because, well, cells exist, which is not an argument that works for MNT.
There's also whole-brain emulation, which doesn't require nanotech to function - just slightly better scanners, substantially better neuroscience, and exponentially better computers.
We have plenty of models of neurons and some of them imitate neurons very well.
Eugene Izhikevich simulated an entire human brain equivalent with his model and he saw some pretty interesting emergent behaviour (Granted, the anatomy had to be generated randomly at every iteration, so we still need better computers).
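For readers who haven't seen it, the Izhikevich model referred to above is remarkably compact: two coupled equations plus a reset rule. A minimal sketch with the standard "regular spiking" parameters and simple Euler integration (the constant input current is an arbitrary choice for illustration):

```python
# Izhikevich (2003) spiking-neuron model, "regular spiking" parameter set.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * -65.0        # membrane potential (mV) and recovery variable
dt, I = 0.5, 10.0              # time step (ms) and constant input current (arbitrary)

spike_times = []
for step in range(2000):        # 1 second of simulated time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:               # spike threshold: record the spike and apply the reset rule
        spike_times.append(step * dt)
        v, u = c, u + d

print(f"{len(spike_times)} spikes in 1 s of simulated input")
```

The appeal of the model is that four parameters per neuron reproduce a wide range of firing patterns at very low computational cost, which is what made brain-scale simulations feasible at all.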
That's true, but we need to get it really, really close. Even relatively small statistical deviations from the behavior of the real neurons are probably intolerable. Besides, real neurons are not interchangeable: they have unique statistical biases and are influenced by a variety of factors not modeled by modern simulations, like neurotransmitter diffusion, glial activity, and subtle quirks of specific dendrites and axons.
Right now, even if you gave us a high-speed brain scanner, a high-speed computer, and an unlimited budget, we wouldn't have the capability to interpret the image data the scanner produced, or even be quite sure which immunostains to use for the optical imaging to pin down the required details. I expect it to take at least five to ten years for us to get the theoretical details ironed out.
It requires substantially better scanners, and a fixation process that preserves all the relevant features.
Vitrification seems to work pretty well, in terms of preserving relevant details. Observing some of those features is going to require an as-yet-not-fully-understood immunostaining process, but that's under neuroscience. As far as the scanners go, the resolution is already adequate or near-adequate for most SEM technologies. It's just a question of adding more beams and developing more automated methods, so the scanning can be more parallel.
Do you have any reference?
According to PZ Myers you can only do that with exceptionally small samples of tissue.
PZ Myers has unreasonably high standards for 'relevant details.' Demanding one-millisecond total fixation time (with every atom in precisely the same position as it was during life) is totally ridiculous. If you want to study intra-neuron cell biology, sure, you need that, but for brain emulation, all you care about is the connectivity of the network and the long-term statistical biases of particular neurons' synaptic connections (plus glial traits, naturally), which is (probably) visible from features many orders of magnitude more durable than the kinds of data he's talking about. Also, his comments about accelerating the speed of the network are kind of bizarrely ignorant, given how smart a guy he clearly is.
The only way the issues he mentions are problematic is if high-detail inter-neuron computing turns out to be necessary AND long-term state dependent, which the evidence suggests against (the blue brain project has produced realistic synchronized firing activity in a simulated neocortical column using relatively simple neuron models).
As far as a reference goes, there's this study, in which they took a rat's brain, vitrified it, and examined it at fine detail, demonstrating "good to excellent" preservation of gross cellular anatomy.
Well, he's a developmental biologist specializing in the vertebrate nervous system.
One-millisecond fixation time might be an excessive requirement, but in order to perform an emulation accurate enough to preserve the self, you will probably need much more detail than the network topology and some statistics. Synapses have fine surface features that may well be relevant, and neurons may have relevant internal state stored as DNA methylation patterns, concentrations of various chemicals, maybe even protein folding states. Some of these features are probably difficult to preserve and possibly difficult to scan.
EDIT:
Actually they vitrified 475 micrometre slices of the hippocampus of rat brains. It's no mystery that small samples can be vitrified without using toxic concentrations of cryoprotectants.
Moreover, the paper says: "Finally, all slices were transferred to the two wells of an Oslo-type recording chamber [ ... ] and incubated with aCSF at 34–37 C for at least 1 h before being used in experiments."
"Following initial incubation for 60 min or more at 35 C in aCSF to allow recovery from the shock of slice preparation, [ ... ]"
I'm not a biologist so I might be missing something, but my understanding is that this means that somehow ischemia is not an issue here, while it certainly is when dealing with a whole brain.
The surface details we can read with SEM, and we can observe chemical/protein concentrations through immunostaining and sub-wavelength optical microscopy (SEM and SWOM hybrid is my bet for the technology we wind up using). I don't think there's strong evidence for DNA methylation or protein state being used for long-term data storage. If evidence arises, we'll re-evaluate then. But modern neuron models don't account for those, and, again, function realistically, so they're not critical for the computation. The details we're reading likely wouldn't have to be simulated outright - they would just alter the shape of the probability distribution your simulation is sampling from. A lot of the fine stuff is so noisy, it isn't practical to store data in it. The stuff we know is involved we can definitely preserve. As a general rule, if the data is lost within minutes of death, it's probably also lost during the average workday.
I honestly don't think cryoprotectant damage is anywhere near the big problem here. I'm sure it does cellular damage, but it seems to leave cell morphology essentially intact, and isn't reactive enough to really screw up most of the things we know we have to care about, in terms of cell state. Ischemia is a bigger problem, and one of my points of skepticism about non-standby cryonics. Four plus hours at room temperature simply seems too long. That said, as our understanding of cell death improves, we're starting to notice that most brain death seems to be failure of the cells' oxygen metabolisms, not failure of synaptic linkings. I'd like to see studies done on exactly how long it takes relevant neural details to begin to break down at room temperature. That said, flatlining cases suggest that there's some reason to hope for the time being. I'd like to see the science done, in any case.
What are these?
I never heard of them and Google doesn't yield meaningful results.
A special type of teacher's password.
Eudoxia is referencing Mike Darwin's idea of modifying white blood cells with arbitrarily-sophisticated biotechnology (we're talking "you can design new organelles to spec" as a lower-level requirement) to do active cell repair, sucking up cell contents and yoinking nuclear genetic information from even very-damaged cells before digesting the old contents and replacing them. It's an elaborate thought experiment with technical-looking diagrams that elides huge black boxes in its proposed mechanism. Basically it's the idea of nanomedicine before the term was coined.
The original dichotomy is correct if you think about the consequences of cryonic success.
If and only if cryonics succeeds, the world will have developed the technology to restore you from a cracked, solid mass of brain tissue (the liquid nitrogen will fracture your brain because it cools it below the glass transition point).
Also, as sort of a secondary thing, it will have figured out a way to give you a new body or a quality substitute (it's secondary because growing a new body is technically possible, if unethical, today).
Anyway, this technical capacity means that almost certainly the technology exists to make backup copies of you. If this is possible, it would also be possible to keep you alive for billions of years, or such a huge multiple of your original lifespan that it could be approximated as infinite.
You might consider these technical capabilities to be absurd, and lower that 5% chance to some vanishingly small number, as many cryonics skeptics do. However, one conclusion falls naturally out of the other.
We don't know how to reliably clone a human being, and we definitely don't know how to transplant your brain into it or attach your head to it.
We've done body transplants in primates in the past. Hooking up the nerves is still tricky, but we could probably figure it out. Also, cloning one mammal is basically like cloning another. There's really no doubt we could clone a human being if we really wanted to. The trick is that current cloning mechanisms have a very high failure rate, and nobody wants to deal with the pile of dead babies and fetuses that would come out of such a process.
Realistically, though, 3d tissue printing is probably the way to go. We can already do several organs that way, and resolution is essentially the only limit to being able to fabricate most of the rest.
One team did one head transplant with one monkey in the 1960s (it is said to have survived a day and a half). Reattaching a completely severed spinal cord is still impossible, not "tricky" -- all attempts at head transplants have produced quadriplegics.
Wouldn't this be tantamount to regrowing a transected spine? I'm not up-to-date on that area, but I don't think we can do that yet.
I meant in the future. I think severe spinal cord damage is still a little beyond us right now. Though with the progress we're making with stem cells, I'd guess we're likely to take some steps on that front in the near-ish future.
Perhaps, but I don't think it's so easy.
During embryonic development, the nervous system begins as a single strip of specialized ectoderm, the neural plate, which folds on itself to form the neural tube that later becomes the spinal cord and the brain, while nerves grow out of it towards the other parts of the body. It never happens that two separate pieces of neural tissue become attached.
AFAIK, if you inject stem cells into the severed spine of a rat and play with chemical growth signals, you may get the formation of new neural tissue that makes more or less random connections with the existing tissue, which may restore some function (if it doesn't cause cancer), but that doesn't seem to be a precise process.
I wonder whether lizard tail regeneration involves the extension of a functional spinal cord.
We can and we can't. Here's an 11-year-old article where rats successfully regained function: http://www.jneurosci.org/content/21/23/9334.abstract
That's just an example. I think that if society were far more tolerant of risks, and there was more funding, and the teams working on the problem were organized and led properly, then human patient successes would be seen in the near future.
Isn't that the funny thing? We'll take a certain loss over a risk of the same exact loss. Sigh.
This is not quite right. The justification is that an action leading to certain negative consequences is not equivalent to inaction leading to the same consequences. Inaction is almost always acceptable, morally and legally. There are many obvious and non-obvious pitfalls in changing this attitude.
True when comparing an action with a non-conjugate declining-to-act (e.g. throwing someone off a building vs. not saving someone from falling off a building).
In this case, we're looking at a fear of ineffectiveness - the case where acting could produce the same effect as not doing that exact same thing.
And yet, from a consequentialist standpoint, there shouldn't be a difference. Regardless of potential pitfalls, this is unlikely to change: I suspect it's "hardwired" into our psychology. But there is also a reverse tendency, especially in the public attitude towards leaders, where it is better to be seen to be doing something rather than nothing, even if it is not clear what action should be taken.
Only if your reasoning is extremely reliable in estimating the consequences of your action or inaction. Otherwise you may end up doing more harm by acting than you would by not acting (it happens all the time). I am guessing that this is part of what keeps people from acting.
Isn't it closer to "take a certain loss over a risk of the same exact loss, plus a whole lot of money"?
Yes, that is part of it. I don't think that the flat financial loss is the killer issue in many cases where an unproven method could work, or not. When doing nothing is acceptable, trying something becomes fraught with the risk of being blamed for the failure.
That's a Pascal's wager argument.
I agree, but I did not want to overstate the case, so I used an estimate already provided in the forums. I certainly did not want the discussion to become about how likely recovery from cryonics is, and I am fairly happy with the results.
Alcor's magazine Cryonics just published my article titled "Cryonics and the Singularity." It's on page 21 of this:
http://www.alcor.org/cryonics/Cryonics2012-4.pdf
The article argues that if you believe in the likelihood of a coming singularity you should sign up for cryonics.
Uh, you skipped a step. The bottleneck is trait-selection/gene therapy more than it is knowing where the gene loci are. We know the signatures of Huntington's and some other genetic diseases, but that hasn't led to the ability to cure them. Right now, we can only negatively select through abortion, so that wouldn't create the geniuses you're looking for.
See my article "A Thousand Chinese Einsteins Every Year" for a more detailed explanation. I've learned a lot since writing this article (in 2007) and my latest views on the potential of eugenics are fully spelled out in my book Singularity Rising that will be released in a few weeks.
Another possible hard part: if world-shaking genius (not just being unusually smart) is the result of having the sort of mind which fits a solvable hard problem, then how would anyone know what traits to amplify and what education is needed?
Don't underestimate the power of g:
http://www.udel.edu/educ/gottfredson/reprints/2002ghighlygeneral.pdf
Linda Gottfredson's papers in general reward study:
http://www.udel.edu/educ/gottfredson/reprints/
An afterlife doesn't really solve the problems people want it to solve. For one thing, ghost hunters with cable reality series might bother you with inane requests like pushing buttons on flashlights. ; )
But more to the point, why do people assume that an "afterlife," if it exists, has to last forever, or that you have to have one to give this life "meaning"? This shows uncritical, self-centered teleological thinking about human existence.
Ha! I love this. My wife is always watching those shows, and I find their assumptions rather inane: I can't immediately explain this, so it must be paranormal.
Why this proposal is a bad one:
Cryonics is based upon a working technology, cryogenic freezing of living tissues.
The latest cryonics techniques use M22, an ice-crystal growth inhibitor that has been used to preserve small organs and successfully thaw them. More than likely, if you were to rewarm some of the tissues from a cryonics patient frozen today, some of the original cells would still be alive and viable. I don't know if this particular experiment has been performed, however: there is a reason why cryonics has a reputation for pseudoscience.
If you dehydrate a mammalian cell and then add water again, it's still dead. If you freeze and rewarm it, heating and cooling at a rate rapid enough to prevent ice crystal growth, not only is the cell alive, but it can be more viable than fresh cells obtained later: cryogenically frozen sperm or ova from a young person can be more viable than the same material obtained from the same person later in life.
There are further improvements to cryonics that have not been made because it lacks the funding and resources it deserves.
Better cryoprotectants are more than likely possible. Better techniques are almost certainly achievable. The method that preserved a viable rabbit kidney relied on extremely rapid cooling; cooling the brain more rapidly might yield better results. There are potentially revolutionary improvements possible.
Allegedly, a Japanese company claims that oscillating magnetic fields prevent orderly crystal growth in water. They have experimental results and success in preserving human teeth this way. If this method is viable, cryonics could use very large magnets on the human brain and potentially get perfect preservation with demonstrable proof of viability. http://www.teethbank.jp/ http://singularityhub.com/2011/01/23/food-freezing-technology-preserves-human-teeth-organs-next/
The first source is, I think, the better one: as far as a Google search will tell me, this is the only existing human tooth bank in the world. If the teeth weren't viable, it seems unlikely that credible dentists would be attempting the transplants and succeeding. (I think the technology actually being used is a much better indication of legitimacy than papers or Singularity Hub articles.)
Depends on what you mean by "working". When we successfully freeze and revive a mammal, I will concede the point. And it's still our best backup option (to not dying). Cryonics has a head start on other possible techniques because it was the first conceived and there are people working on it. That doesn't mean it's the best or only possibility.
My proposal was for further research, not to start doing it. I admitted we don't know how to achieve a non-hydrated state capable of recovery, or even if it can be achieved. And this was certainly not intended to be an attack on the work being done on cryonics, just a suggestion that there may be other ways. Speaking of which: DARPA seems to be working on yet another approach. I think as a society we have sufficient resources to pursue various options. I have no horse in this race, I just want to see the finish! :)
Cite please?
Physical and biological aspects of renal vitrification.
Cryopreservation of organs by vitrification: perspectives and recent advances (PDF).
EDIT: I should clarify, the kidney was cooled with liquid nitrogen vapor, and the lowest temperature it was exposed to was still fifty degrees above that of liquid nitrogen. This is important because LN2 temperature is far below the vitrification point of M22, and cooling even a little below T_g causes fracturing.
Yes, but it doesn't fracture everywhere. Hence, if you rewarmed a tissue that was cryogenically frozen, some cells would probably still be viable. Hence, my hypothesis that if you took samples from a current patient where things were done right, some of the cells would still be alive.
A related article: http://www.nature.com/ncomms/journal/v3/n6/full/ncomms1890.html?WT.mc_id=FBK_NCOMMS
What about a fracture that severs the brain in several pieces?
There are fractures like that in existing patients. Note that my hypothesis is that some of the cells would still be viable. I did not say any neurons were viable. I'm merely saying that cryonics is provably better than dehydration or plastination because of this viability factor.
Despite this, IF patients frozen using current techniques can ever be revived, the techniques used will more than likely require a destructive scan of their brains, followed by loading into some kind of hardware or software emulator.
Trying to think about what this might subjectively be like is hard to do rationally. I don't know if a good emulation or replica is the same person or not: you can make solid arguments either way.
Extremely advanced, better versions of cryonics might eventually reach the point of actually preserving the brain in a manner where reheating brings it back to life and a transplant is possible. However, a destructive scan and upload might still remain the safer choice.
Regardless of how the revivals were actually done in practice, if reproducible and public demonstrations of viability were ever performed, I would expect cryonics to gain widespread prevalence and mainstream acceptance, and become a standard medical procedure.