I like this post. Upvoted.
On a tangential note, I had an experience today that made me take cryonics much more seriously. I had a (silly, in retrospect) near-miss with serious injury, and I realized that I was afraid. Ridiculously, helplessly, calling-on-imaginary-God-for-mercy afraid. I had vastly underestimated how much I cared about my own physical safety, and how helpless I become when it's threatened. I feel much less cavalier about my own body now.
So, you know, freezing myself looks more appealing now that I know that I'm scared. I can see why I'd want to have somewhere to wake up to, if I died.
Your comment suggests a convenient hack for aspiring rationalists to overcome their fear of cryonics.
Since you mentioned Benjamin Franklin: when he died, he apparently left two trust funds to demonstrate the power of compound interest over a couple of centuries. The example of these trusts shows that the idea of a reanimation trust staying intact for centuries isn't absurd:
http://en.wikipedia.org/wiki/Benjamin_Franklin#Death_and_legacy
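The effect Franklin was relying on is easy to see with a back-of-the-envelope calculation. This is a minimal sketch; the rate and horizon are illustrative assumptions, not the actual terms or returns of his trusts:

```python
def compound(principal, annual_rate, years):
    """Value of `principal` growing at `annual_rate`, compounded yearly."""
    return principal * (1 + annual_rate) ** years

# A modest 5% annual return sustained for 200 years multiplies the
# principal by roughly 17,000x -- which is why a patient trust can
# turn a small bequest into a large endowment.
growth_factor = compound(1.0, 0.05, 200)
```

The same arithmetic is the usual argument for why a reanimation trust only needs to survive, not to be large at the outset.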
You forgot the most optimistic of all:
Within the immortalist community, cryonics is the most pessimistic possible position.
Indeed; I think the cryonics organizations themselves have a saying, "Cryonics is the second worst thing that can happen to you."
Cryonics can work even if there is no singularity or reversal tech for thousands of years into the future.
This doesn't alter your overall point much, but this seems unlikely. Aside from the high probability of something going drastically wrong after more than a few centuries, low-level background radiation as well as intermittent chemical reactions will gradually create trouble. Unfortunately, estimating the timespan over which these become an issue seems difficult, but the general level seems to be somewhere between about 100 and 1,000 years...
Good post, upvoted.
I think that your remark
But the fact that we don't know what exact point is good enough is sufficient to make this a worthwhile endeavor at as early a point as possible. It doesn't require optimism -- it simply requires deliberate, rational action.
assumes a utility function which may not be universal. In particular, at present I feel that the value of my personal survival into transhuman times is dwarfed by other considerations. But certainly your points are good ones for people who place high value on personally living into transhuman times to bear in mind.
Although it's not marked as the inspiration, this post comes straight after an article by many-decades cryonicist Charles Platt, which he wrote for Cryonics magazine but which was rejected by the Alcor board:
Platt discusses what he sees as the dangerously excessive optimism of cryonics, particularly with regard to financial arrangements: because money shouldn't be a problem, people behave as though it therefore isn't a problem, when it seems clear that it is. To quote:
...In fact their determination to achieve and defend their
After reading Eliezer on it, I decided to sign up for cryonics, but I figured I'd wait until I had a more stable lifestyle. I'm currently traveling through Asia - Saigon, Vietnam right now; Kuala Lumpur, Malaysia next. I figure that if the lights go off while I'm here, it's not particularly likely I'd make it to a cryonics facility in reasonable time.
Also, it's the kind of thing I'd like to research a bit, but I know that's a common procrastination technique so I'm not putting too much weight on that.
Nice post, though it avoids the reason I don't intend to get cryopreserved: it's way too expensive.
I think cryonics is a waste of money unless you want to make living copies of a dead person or otherwise have a reason to preserve information about the dead. Cryonics does not prevent your death; it just prevents the destruction of the leftovers as well.
What about this: the SAI can resurrect me no matter how long I've been dead and how poor my remains are by then?
It seems awfully convenient that your posited process of personal identity survives exactly those events (blinking one's eyes, epileptic fits, sleep, coma) which are not assumed to disrupt personal identity in everyday thought.
The philosophical habit of skeptically deconstructing basic appearances seems to prepare people badly for the task of scientifically understanding consciousness. When considering the relationship between mind and matter, it's a little peculiar to immediately jump to complicated possibilities ("whatever process creates my conscious feeling of self ... could be dissolved and created anew several times a second") or to the possibility that appearances are radically misleading (consciousness might be constantly "going away or coming into being" without any impact on the apparent continuity of experience or of personal existence). Just because there might be an elephant around the next corner doesn't mean we should attach much significance to the possibility.
I'm not entirely sure how the process of discovery you describe would happen... The situation is of course likely to be different once we have a better understanding of exactly how the brain works, but lacking that understanding, I'm having some trouble envisioning exactly how the destruction of personal identity could be determined to be intractably tied to the observed entity X.
It is unlikely that society would develop the capacity for mind uploading and cryonic resurrection without also coming to understand, very thoroughly, how the brain works. We may think we can imagine these procedures being performed locally in the brain, with the global result being achieved by brute force, without a systemic understanding. But to upload or reanimate you do have to know how to put the pieces back together, and the ability to perform local reassembly of parts correctly, in a physical or computational sense, also implies some ability to perform local reassembly conceptually.
In fact it would be reasonable to argue that without a systemic understanding, attempts at uploading and cryonic restoration would be a game of trial and error, producing biased copies which deviate from their originals in unpredictable ways. Suppose you use high-resolution fMRI time series to develop state-machine simulations of microscopic volumes in the brain of your subject (each such "voxel" consisting of a few hundred neighboring neurons). You will be developing a causal model of the parts of the subject's brain by analysing the time series. It's easy to imagine the analysis assuming that interactions only occur between neighboring voxels, or even next-nearest neighbors, and thereby overlooking long-range interactions due to long axonal fibers. The resulting upload will have lost some of the causal structure of its prototype.
The possibility of elementary errors like this, to say nothing of whatever more subtle mistakes may occur, implies that we can't really trust procedures like this without simultaneously developing that "better understanding of exactly how the brain works".
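The nearest-neighbor failure mode described above can be made concrete with a toy model. This is purely illustrative (a tiny linear system, not a real brain-modeling procedure, with made-up coupling values): the true dynamics of a chain of "voxels" include one long-range link, and the best model restricted to local couplings simply cannot represent it.

```python
import numpy as np

n = 10
rng = np.random.default_rng(0)

# "True" one-step dynamics: nearest-neighbor couplings along a chain,
# plus a single long-range connection from voxel 0 to voxel 9
# (standing in for a long axonal fiber).
A_true = np.zeros((n, n))
for i in range(n - 1):
    A_true[i, i + 1] = 0.3
    A_true[i + 1, i] = 0.3
A_true[9, 0] = 0.5  # the long-range interaction

# The best model a locality-assuming analysis can produce: identical,
# except the long-range term is invisible and fitted as zero.
A_local = A_true.copy()
A_local[9, 0] = 0.0

# One-step prediction error on a random brain state: every voxel is
# predicted perfectly except the one fed by the long fiber.
x = rng.standard_normal(n)
err = np.abs(A_true @ x - A_local @ x)
```

Here `err` is zero everywhere except at voxel 9, which is exactly the "lost causal structure" the comment describes: the upload behaves like the original locally while deviating wherever long-range wiring mattered.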
I can't think of any phenomenon in classical mechanics where I could point to any property of the system that would be disrupted if the system got disassembled and reassembled mid-evolution.
How about the property of being an asymptotically bound system, in the absence of active disassembly by external forces? To me that still seems way too weak to be the ontological basis of physical identity, but that is (more or less) the philosopher Mario Bunge's definition of systemhood. (Btw, Bunge was a physicist before he was a philosopher.)
The philosophical habit of skeptically deconstructing basic appearances seems to prepare people badly for the task of scientifically understanding consciousness. When considering the relationship between mind and matter, it's a little peculiar to immediately jump to complicated possibilities
It wasn't philosophers who came up with general relativity and quantum mechanics when everyday intuition about nature didn't quite add up in some obscure corner cases. Coming up with a simple model that seems to resolve contradictions even if it doesn't quite fit eve...
Within the immortalist community, cryonics is the most pessimistic possible position. Consider the following superoptimistic alternative scenarios:
Cryonics -- perfusion and vitrification at LN2 temperatures under the best conditions possible -- is far less optimistic than any of these. Of all the possible scenarios where you end up immortal, cryonics is the least optimistic. Cryonics can work even if there is no singularity or reversal tech for thousands of years into the future. It can work under the conditions of the slowest technological growth imaginable. All it assumes is that the organization (or its descendants) can survive long enough, that technology doesn't go backwards (on average), and that cryopreservation of a technically sufficient nature can predate reanimation tech.
It doesn't even require the assumption that today's best possible vitrifications are good enough. It's entirely plausible that vitrifications won't start being good enough until 100 years from now, and that it will take another 500 years to figure out how to reverse them. Perhaps today's population is doomed because of this. We don't know. But the fact that we don't know what exact point is good enough is sufficient to make this a worthwhile endeavor at as early a point as possible. It doesn't require optimism -- it simply requires deliberate, rational action. The fact is that we are late to the party. In retrospect, we should have started preserving brains hundreds of years ago. Benjamin Franklin should have gone ahead and had himself immersed in alcohol.
There's a difference between having a fear and being immobilized by it. If you have a fear that cryonics won't work -- good for you! That's a perfectly rational fear. But if that fear immobilizes you and discourages you from taking action, you've lost the game. Worse than lost, you never played.
This is something of a response to Charles Platt's recent article on Cryoptimism: Part 1 Part 2