Cryonics has a more serious problem which I seldom see addressed. I've noticed a weird cognitive dissonance among cryonicists: they talk a good game about how much they believe in scientific progress, technological acceleration and so forth, yet they seem totally unconcerned that we just don't see this alleged trend happening in cryonics technology, despite its numerous inadequacies. In fact, Mike Darwin argues that the quality of cryopreservations has probably regressed since the 1980s.
In other words, attempting the cryogenic preservation of the human brain in a way that makes sense to neuroscientists (which should become the real focus of the cryonics movement) presents a set of solvable, or at least describable, problems which current techniques could go a long way towards solving without having to invoke speculative future technologies or friendly AIs. Yet these problems have gone unsolved for decades, and not for lack of financial resources. Just look at some wealthy cryonicists' plans to waste $100 million or more building that ridiculous Timeship (a.k.a. the Saulsoleum) in Comfort, Texas.
What brought about this situation? I've made myself unpopular by ...
James D. Miller does this in his book Singularity Rising. (I leave articulating the logical problem with this claim as an exercise to the reader.)
I would be grateful if you would tell me what the logical problem is.
As a counterpoint, let me offer my own experience rediscovering cryonics through Eliezer.
Originally, I hadn't seen the point. Like most people, I assumed cryonauts dreamed that one day someone would simply thaw them out, cure whatever killed them, and restart their heart with shock paddles or something. Even the most rudimentary understanding of or experience with biology and freezing temperatures made this idea patently absurd.
It wasn't until I discovered Eliezer's writings circa 2001 or so that I was able to see connections between high shock-level concepts like uploading, nanotech, and superintelligence. I reasoned that a successful outcome of cryonics is not likely to come through direct biological revival, but rather through atomically precise scanning, super-powerful computational reconstruction, and reinstantiation as an upload or in a replacement body.
The upshot of this reasoning is that for cryonics to have any chance of success, a future must be assured in which these technologies would be safely brought to bear on such problems. I continue to have trouble imagining such a future existing if the friendly AI problem is not solved before it is too late. As friendly A...
It feels to me like the general pro-cryo advocacy here would be a bit of a double standard, at least when compared to general memes of effective altruism, shutting up and multiplying, and saving the world. If I value my life equally to the lives of others, it seems pretty obvious that there's no way the money spent on cryonics would be a better investment than spending it on general do-gooding.
Of course, this is not a new argument, and there are a few standard responses to it. The first one is that I don't actually value my life equally to everyone else's, and that it's inconsistent to appeal to that principle when I don't appeal to it in my life in general. And it's certainly true that I do value my own life more than the life of a random stranger, but I do that because I'm human and can't avoid it, not because my values would endorse that as a maximally broad rule. If I get a chance to actually act in accordance with my preferred values and behave more altruistically than normal, I'll take it.
The other standard argument is that cryonics doesn't need to come out of my world-saving budget, it can come out of my leisure budget. Which is also true, but it r...
I've had thoughts along similar lines. But it seems like there's a "be consistent about your selfishness" principle at work here. In particular, if...
It seems kind of inconsistent to not be signed up for cryonics.
(Caveat: not sure I can make consistent sense of my preferences involving far-future versions of "me".)
Consistency is a good thing, but it can be outweighed by other considerations. If my choices are between consistently giving the answer '2 + 2 = 5' on a test or sometimes giving '2 + 2 = 5' and other times '2 + 2 = 4', the latter is probably preferable. Kaj's argument is that if your core goal is EA, then spending hundreds of thousands of dollars on cryonics or heart surgery is the normatively wrong answer. Getting the wrong answer more often is worse than getting it less often, even when the price is a bit of inconsistency or doing-the-right-thing-for-the-wrong-reasons. When large numbers of lives are at stake, feeling satisfied with how cohesive your personal narrative or code of conduct is matters mostly only to the extent that it serves the EA goal.
If you think saving non-human animals is the most important thing you could be doing, then it may be that you should become a vegan. But it's certainly not the case that if you find it too difficult to become a vegan, you should therefore stop trying to promote animal rights. Your original goal should still matter (if it ever mattered in the first place) regardless of how awkward it is for you to explain and justify your behavioral inconsistency to your peers.
(Disclaimer: I absolutely promise that I am not evil.)
The first one is that I don't actually value my life equally to everyone else's, and that it's inconsistent to appeal to that principle when I don't appeal to it in my life in general.
Question: why the hell not? My brain processed this kind of question for the first time around fourth grade, when I wanted special privileges to go on a field trip with the other kids despite having gotten in trouble. The answer I came up with then is the one I still use now: "why me? Because of Kant's Categorical Imperative" (that is, I didn't want to live in a world where nobody went on the field trip, therefore I should get to go on it -- though this wasn't exactly clear thinking regarding the problem I really had at the time!). I would not want to live in a world where everyone kept their own and everyone else's lifestyle to an absolute minimum in order to act with maximal altruism. Quite the contrary: I want everyone to have as awesome a life as it is physically feasible for them to have!
I also do give to charity, do pay my taxes, and do support state-run social-welfare programs. So I'm not advocating total selfishness...
In context, it seems uncharitable to read "risk my life" to include any risk small enough that taking it would still be consistent with valuing one's own life far above $1700.
Should a monk who has taken vows have a sin budget, because the flesh is weak?
If that helps them achieve their vows overall.
I did try valuing the lives of others equally before. It only succeeded in making me feel miserable and preventing me from getting any good done. Tried that approach, doesn't work. Better to compromise with the egoist faction and achieve some good, rather than try killing it with fire and achieve nothing.
Of course it is. Has it ever been presented as anything else?
Once people start saying things like "It really is hard to find a clearer example of an avoidable Holocaust that you can personally do something substantial about now" or "If you don't sign up your kids for cryonics then you are a lousy parent", it's hard to avoid reading a moral tone into them.
I don't think that's a sufficient or effective compromise. If I'm given a choice between saving the life of my child or the lives of 1,000 other children, I will always save my child. And I will only feel guilt to the extent that I was unable to come up with a third option that saves everybody.
I don't do it for some indirect reason such as that I understand my children's needs better or such. I do it because I value my own child's life more, plain and simple.
As I mentioned in a private message to Hallquist, I favor a wait-and-see approach to cryonics.
This is based on a couple of observations:
So it seems there isn't a huge downside to simply...
Initially I wanted to mention that there is one more factor: the odds of being effectively cryopreserved upon dying, i.e. being in a hospital amenable to cryonics, with a cryo team standing by and enough of your brain intact to preserve your identity. This excludes most accidental deaths, massive strokes, etc. However, the CDC data for the US (http://www.cdc.gov/nchs/fastats/deaths.htm) show that currently over 85% of all deaths appear to be cryo-compatible:
...
- Number of deaths: 2,468,435
- Death rate: 799.5 deaths per 100,000 population
- Life expectancy: 7
I have a view on this that I didn't find by quickly skimming the replies here. Apologies if it's been hashed to death elsewhere.
I simply can't get the numbers to add up when it comes to cryonics.
Let's assume a probability of 1 of cryonics working and the resulting expected lifespan to be until the sun goes out. That would equal a net gain of around 4 billion years or so. Now, investing the same amount of money in life extension research and getting, say, a 25% chance of gaining a modest increase in lifespan of 10 years for everyone would equal 70bn/4 = 17.5 billion expected years...
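For concreteness, here's a minimal sketch of that comparison; all inputs are the comment's made-up numbers, except the 7-billion world population, which I'm inferring from the "70bn":

```python
# Rough sketch of the comparison above; all numbers are the comment's
# made-up inputs, except the population, inferred from "70bn" / 10 years.
cryo_years = 4e9          # P = 1, lifespan until the sun goes out: ~4 billion years

population = 7e9          # assumed world population
p_research = 0.25         # chance the research investment pays off
years_gained = 10         # extra years for everyone if it does

research_years = p_research * population * years_gained
print(f"{research_years / 1e9:.1f}bn expected years")  # 17.5bn, vs 4bn for cryonics
```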
How do you invest $50,000 to get a 25% chance of increasing everyone's lifespan by 10 years? John Schloendorn himself couldn't do that on $50K.
Reviewing the numbers you made up for sanity is an important part of making decisions after making up numbers.
The obvious assumption to question is this:
Given that cryonics succeeds, is what you purchase really equal to what you purchase by saving yourself from a life-threatening disease? You say that you don't place an extremely high value on your own life, but the extra life you purchase with cryonics (it takes place in the far future*, and is likely significantly longer) differs from the extra life you are purchasing in your visualization (likely near-future, maybe shorter [presumably 62 years?]). Relevant considerations:
The length difference depends on h...
I'd probably sign up if I were a US citizen. This makes me wonder if it's rational to stay in Finland. Has there been any fruitful discussion on this factor here before? Promoting cryonics in my home country doesn't seem like a great career move.
For me, there's another factor: I have children.
I do value my own life. But I also value the lives of my children (and, by extension, their descendants).
So the calculation I look at is that I have $X, which I can spend either to obtain a particular chance of extending/improving my own life, OR to obtain improvements in the lives of my children (by spending it on their education, passing it to them in my will, etc.).
I suppose I belong to that group of people who would like to see more people signing up for cryonics but have not done so themselves. For myself, I am young and expect to live quite a while longer. I expect the chance of dying without warning, in circumstances that would still allow cryopreservation, to be rather low, whereas if I had much warning I could decide then to be cryopreserved (so the loss is my chance of losing consciousness and then dying in a hospital without regaining consciousness). I currently am not signed up for life insurance, which would also mean the costs of cryopreser...
(The following assumes that you don't actually want to die. My honest assessment is that I think you might want to die. I don't believe there's anything actually wrong with just living out your life as it comes and then dying, even if living longer might be nicer, and particularly when living longer might totally suck. So don't assume I'm passing judgement on a particular decision to sign up or not; in fact, the thought that life might suck forever drives me damn near to favoring suicide myself.)
Let's tackle the question from another angle.
I do not believe...
I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.
While I'm definitely desensitized to the suffering of others, seeing dead and dying people has made my own mortality all the more palpable. Constantly seeing sick people has also made scenarios of personal disability more available, which generally makes me avoid bad health choices out of fear. End-of-life care where I'm from is in abysmal condition, and I don't ever want to experience it myself. I fear it far more than death itself.
I'm also on the fence, wondering if cryonics is worth it (especially since I'm in France, where there is no real option for it, so in addition to the costs it would likely mean changing country), but I think you made two flaws in your (otherwise interesting) reasoning:
It's neutral from a point of pleasure vs suffering for the dead person
It forgets opportunity costs. Dying deprives the person of all the future experiences (s)he could have, and thus of a huge amount of pleasure (and potentially suffering too).
...So: my death feels bad, but not infinitely bad. Ob
Just like future cryonics research might be able to revive someone who was frozen now, perhaps future time travellers could revive people simply by rescuing them from before their death. Of course, time travellers can't revive people who died under all circumstances. Someone who dies in a hospital and has had an autopsy couldn't be rescued without changing the past.
Therefore, we should start a movement where dying people should make sure that they die inside hermetically sealed capsules that are placed in a vault which is rarely opened. If time travel i...
Some context googled together from earlier LW posts about this topic:
From ChrisHallquist's recent $500 thread, a comment that takes the outside view and comes to a devastating conclusion: http://lesswrong.com/r/discussion/lw/jgu/i_will_pay_500_to_anyone_who_can_convince_me_to/acd5
In the discussion of a relevant blog post we have this critical comment: http://lesswrong.com/user/V_V/overview/
In the Neil deGrasse Tyson on Cryonics post, a real neuroscientist gave his very negative input: http://lesswrong.com/r/discussion/lw/8f4/neil_degrasse_tyson_on_cryonics/...
The possibility of a friendly ultra-AI greatly raises the expected value of cryonics. Such an AI would likely create a utopia that you would very much want to live in. Also, this possibility reduces the time interval before you would be brought back, and so makes it less likely that your brain would be destroyed before cryonics revival becomes possible. If you believe in the likelihood of a singularity by, say, 2100 then you can't trust calculations of the success of cryonics that don't factor in the singularity.
Even though LW is far more open to the idea of cryonics than other places, the general opinion on this site still seems to be that cryonics is unlikely to succeed (e.g. has a 10% chance of success).
How do LW'ers reconcile this with the belief that mind uploading is possible?
My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. It's neutral from a point of pleasure vs suffering for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on.
I'm with you on this. And I hope to see more di...
How I'm now on the fence about whether to sign up for cryonics
I'm not currently signed up for cryonics. In my social circle, that makes me a bit of an oddity. I disagree with Eliezer Yudkowsky; heaven forbid.
My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. It's neutral from a point of pleasure vs suffering for the dead person, and negative for the family, but they cope with it and find a bit of meaning and move on.
(I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.)
I think the deeper cause behind my rejection may be that I don't have enough to protect. Individuals may be unique, but as an individual, I'm fairly replaceable. All the things I'm currently doing can be and are being done by other people. I'm not the sole support person in anyone's life, and if I were, I would be trying really, really hard to fix the situation. Part of me is convinced that wanting to personally survive and thinking that I deserve to is selfish and un-virtuous or something. (EDIT: or that it's non-altruistic to value my life above the amount GiveWell thinks is reasonable to save a life, about $5,000. My revealed preference is that I obviously value my life more than this.)
However, I don't think cryonics is wrong, or bad. It has obvious upsides, like being the only chance an average citizen has right now to do something that might lead to them not permanently dying. I say "average citizen" because people working on biological life extension and immortality research are arguably doing something about not dying.
When queried, my brain tells me that it's doing an expected-value calculation and that the expected value of cryonics to me is too low to justify the costs; it's unlikely to succeed, and the only reason some people get a positive expected value for it is that they're multiplying that tiny probability by the huge, huge number they place on the value of their lives. And my number doesn't feel big enough to outweigh those odds at that price.
Putting some numbers in that
If my brain thinks this is a matter of expected-value calculations, I ought to do one. With actual numbers, even if they're made-up, and actual multiplication.
So: my death feels bad, but not infinitely bad. Obvious thing to do: assign a monetary value. Through a variety of helpful thought experiments (how much would I pay to cure a fatal illness if I were the only person in the world with it, research wouldn't help anyone but me, and I could otherwise donate the money to EA charities; does the awesomeness of 3 million dewormings outweigh the suckiness of my death; is my death more or less sucky than the destruction of a high-end MRI machine), I've converged on a subjective value for my life of about $1 million. Like, give or take a lot.
Cryonics feels unlikely to work for me. I think the basic principle is sound, but if someone were to tell me that cryonics had been shown to work for a human, I would be surprised. That's not a number, though, so I took the final result of Steve Harris' calculations here (inspired by the Sagan-Drake equation). His optimistic number is a 0.15 chance of success, or roughly 1 in 7; his pessimistic number is 0.0023, or less than 1 in 400. My brain thinks 15% is too high and 0.23% sounds reasonable, but I'll use his numbers for upper and lower bounds.
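For readers who want to play with their own estimates, here's a rough sketch of how a Sagan-Drake-style calculation works: you multiply the probabilities of every step that has to go right. The step names and numbers below are made-up placeholders, not Steve Harris's actual factors:

```python
# Illustrative only: a Sagan-Drake-style estimate multiplies the probabilities
# of every step that must go right. These steps and values are placeholders,
# NOT Steve Harris's actual factors.
steps = {
    "preserved soon enough and well enough": 0.5,
    "organization keeps you frozen long enough": 0.4,
    "revival technology is ever developed": 0.5,
    "you in particular get revived": 0.5,
}

p_success = 1.0
for step, p in steps.items():
    p_success *= p

print(f"P(success) = {p_success:.3f}")  # 0.050 with these placeholder numbers
```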
I started out trying to calculate the expected cost by some convoluted method where I was going to estimate my expected chance of dying each year, repeatedly subtract it from one, and multiply by the amount I'd pay each year to calculate how much I could expect to pay in total. Benquo pointed out to me that calculations like this are usually done using perpetuities, or PV (present value) calculations, so I made one in Excel and plugged in some numbers, approximating the Alcor annual membership fee as $600. Assuming my own discount rate is somewhere between 2% and 5%, I ran two calculations with those numbers. For 2%, the total expected, time-discounted cost would be $30,000; for a 5% discount rate, $12,000.
Excel also lets you do calculations on perpetuities that aren't perpetual (i.e. annuities), so I plugged in 62 years, the time by which I'll have a 50% chance of dying according to this actuarial table. It didn't change the final results much: $11,417 for the 5% discount rate and $21,000 for the 2% discount rate.
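For anyone without Excel handy, the same arithmetic in a minimal Python sketch; the formulas are the standard perpetuity and fixed-term annuity present-value formulas, and the $600 fee and discount rates are the ones above:

```python
# Standard present-value formulas; fee and rates are from the paragraphs above.
def pv_perpetuity(payment, rate):
    # Present value of paying `payment` per year forever, discounted at `rate`.
    return payment / rate

def pv_annuity(payment, rate, years):
    # Present value of paying `payment` per year for a fixed number of years.
    return payment * (1 - (1 + rate) ** -years) / rate

fee = 600  # approximate Alcor annual membership fee
for rate in (0.02, 0.05):
    print(f"{rate:.0%}: perpetuity ${pv_perpetuity(fee, rate):,.0f}, "
          f"62-year annuity ${pv_annuity(fee, rate, 62):,.0f}")
# 2%: perpetuity $30,000, 62-year annuity ~$21,212
# 5%: perpetuity $12,000, 62-year annuity ~$11,417
```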
That's not including the life insurance payout you need to pay for the actual freezing. So, life insurance premiums. Benquo's plan is five years of $2,200 a year and then nothing from then on, which apparently isn't uncommon among plans for young healthy people. I could probably get something as good or better; I'm younger. So, $11,000 in total life insurance premiums. If I went with permanent annual payments, I could do a perpetuity calculation instead.
In short: around $40,000 total, rounding up.
What's my final number?
There are two numbers I can output. When I started this article, one of them seemed like the obvious end product, so I calculated that. When I went back to finish this article days later, I walked through all the calculations again while writing the actual paragraphs, did what seemed obvious, ended up with a different number, and realized I'd calculated a different thing. So I'm not sure which one is right, although I suspect they're symmetrical.
If I multiply the value of my life by the success chance of cryonics, I get a number that represents (I think) the monetary value of cryonics to me, given my factual beliefs and values. It would go up if the value of my life to me went up, or if the chances of cryonics succeeding went up. I can compare it directly to the actual cost of cryonics.
I take $1 million and plug in either 0.15 or 0.0023, and I get $150,000 as an upper bound and $2,300 as a lower bound, to compare to a total cost somewhere in the ballpark of $40,000.
If I take the price of cryonics and divide it by the chance of success (because if I sign up, I'm optimistically paying for 100 worlds of which I survive in 15, or pessimistically paying for 10,000 worlds of which I survive in 23), I get the total expected cost per life saved (mine), which I can compare to the figure I place on the value of my life. It goes down if the cost of cryonics goes down or the chances of success go up.
I plug in my numbers and get a lower bound of $267,000 and an upper bound of $17 million.
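Putting both directions of the calculation in one place, a small sketch using the numbers above:

```python
# Both directions of the expected-value comparison, with the numbers above.
value_of_life = 1_000_000                 # subjective value of my life
total_cost = 40_000                       # time-discounted cost, rounded up
bounds = {"optimistic": 0.15, "pessimistic": 0.0023}

for label, p in bounds.items():
    ev = value_of_life * p                # what cryonics is worth to me
    cost_per_life = total_cost / p        # expected cost per life saved
    print(f"{label}: EV ${ev:,.0f} vs cost ${total_cost:,}; "
          f"cost/life ${cost_per_life:,.0f} vs value ${value_of_life:,}")
# optimistic:  EV $150,000 vs cost $40,000; cost/life ~$266,667
# pessimistic: EV $2,300   vs cost $40,000; cost/life ~$17,391,304
```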
In both those cases, the optimistic success estimates make it seem worthwhile and the pessimistic success estimates don't, and my personal estimate of cryonics succeeding falls closer to pessimism. But it's close. It's a lot closer than I thought it would be.
Updating somewhat in favour of the proposition that I'll end up signed up for cryonics.
Fine-tuning and next steps
I could get better numbers for the value of my life to me. It's kind of squicky to think about, but that's a bad reason. I could ask other people about their numbers and compare what they're accomplishing in their lives to my own life. I could do more thought experiments to better acquaint my brain with how much value $1 million actually is, because scope insensitivity. I could do upper and lower bounds.
I could include the cost of organizations cheaper than Alcor as a lower bound; the info is all here and the calculation wouldn't be too nasty but I have work in 7 hours and need to get to bed.
I could do my own version of the cryonics success equation, plugging in my own estimates. (Although I suspect this data is less informed and less valuable than what's already there).
I could ask what other people think. Thus, write this post.