Cryonics fills many people with disgust, a cognitively dangerous emotion. To test whether a few of your possible cryonics objections are based on reason or on disgust, I list six non-cryonics questions. Answering yes to any one question indicates that, rationally, you shouldn’t hold the corresponding cryonics objection.
1. You have a disease and will soon die unless you get an operation. With the operation you have a non-trivial but far from certain chance of living a long, healthy life. By some crazy coincidence, the operation costs exactly as much as cryonics does, and the only hospitals capable of performing it are next to cryonics facilities. Do you get the operation?
Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.
2. You have the same disease as in (1), but now the operation costs far more than you could ever obtain. Fortunately, you have exactly the right qualifications NASA is looking for in a spaceship commander. NASA will pay for the operation if, in return, you captain the ship should you survive. The ship will travel close to the speed of light. The trip will subjectively take you a year, but when you return one hundred years will have passed on Earth. Do you get the operation?
Answering yes to (2) means you shouldn't object to cryonics because of the possibility of waking up in the far future.
3. Were you alive 20 years ago?
Answering yes to (3) means you have a relatively loose definition of what constitutes “you” and so you shouldn’t object to cryonics because you fear that the thing that would be revived wouldn’t be you.
4. Do you believe that there is a reasonable chance that a friendly singularity will occur this century?
Answering yes to (4) means you should think it possible that someone cryogenically preserved would be revived this century. A friendly singularity would likely produce an AI that could think, in one second, all the thoughts that would take a billion scientists a billion years to contemplate. Given that bacteria seem to have mastered nanotechnology, it’s hard to imagine that a billion scientists working for a billion years wouldn’t have a reasonable chance of mastering it too. Also, a friendly post-singularity AI would likely have enough respect for human life to be willing to revive cryonics patients.
5. You somehow know that a singularity-causing intelligence explosion will occur tomorrow. You also know that the building you are currently in is on fire. You pull an alarm and observe everyone else safely leaving the building. You realize that if you don’t leave you will fall unconscious, painlessly die, and have your brain incinerated. Do you leave the building?
Answering yes to (5) means you probably shouldn’t abstain from cryonics because you fear being revived and then tortured.
6. One minute from now a man pushes you to the ground, pulls out a long sword, presses the sword’s tip to your throat, and vows to kill you. You have one small chance at survival: grab the sword’s sharp blade, thrust it away, and then run. But even with your best efforts you will still probably die. Do you fight against death?
Answering yes to (6) means you can’t pretend that you don’t value your life enough to sign up for cryonics.
If you answered yes to all six questions but have not signed up for cryonics and do not intend to, please give your reasons in the comments. What other questions can you think of that provide a non-cryonics way of getting at cryonics objections?
Let's say you're about to walk into a room that contains an unknown number of hostile people who possibly have guns. You don't have much of a choice about which way you're going, given that the "room" you're currently in is really more of an active garbage compactor, but you do have a lot of military-grade garbage to pick through. Do you don some armor, grab a knife, or try to assemble a working gun of your own?
Trick question. Given adequate time and resources, you do all three. In this metaphor, the room outside is the future, enemy soldiers are the prospect of a dystopia or other bad end, AGI is the gun (least likely to succeed, given how many moving parts there are and the fact that you're putting it together from garbage without real tools, but if you get it right it might solve a whole room full of problems very quickly), general sanity-improving stuff is the knife (a simple and reliable way to deal with whatever problem is right in front of you), and cryonics is the armor (so if one of those problems becomes lethally personal before you can solve it, you might be able to get back up and try again).
No. AI isn't a gun; it's a bomb. If you don't know what you're doing, or even just make a mistake, you blow yourself up. But if it works, you lob it out the door and completely solve your problem.