How I'm now on the fence about whether to sign up for cryonics
I'm not currently signed up for cryonics. In my social circle, that makes me a bit of an oddity. I disagree with Eliezer Yudkowsky; heaven forbid.
My true rejection is that I don't feel a visceral urge to sign up. When I query my brain on why, what I get is that I don't feel that upset about me personally dying. It would suck, sure. It would suck a lot. But it wouldn't suck infinitely. I've seen a lot of people die. It's sad and wasteful and upsetting, but not like a civilization collapsing. From the standpoint of pleasure versus suffering, it's neutral for the dead person and negative for the family, but they cope with it, find a bit of meaning, and move on.
(I'm desensitized. I have to be, to stay sane in a job where I watch people die on a day to day basis. This is a bias; I'm just not convinced that it's a bias in a negative direction.)
I think the deeper cause behind my rejection may be that I don't have enough to protect. Individuals may be unique, but as an individual, I'm fairly replaceable. All the things I'm currently doing can be done, and are being done, by other people. I'm not the sole support person in anyone's life, and if I were, I would be trying really, really hard to fix the situation. Part of me is convinced that wanting to personally survive, and thinking that I deserve to, is selfish and un-virtuous or something. (EDIT: or that it's non-altruistic to value my life above the amount GiveWell thinks is reasonable to spend to save a life, about $5,000. My revealed preference is that I obviously value my life more than this.)
However, I don't think cryonics is wrong, or bad. It has obvious upsides, like being the only chance an average citizen has right now to do something that might lead to them not permanently dying. I say "average citizen" because people working on biological life extension and immortality research are arguably doing something about not dying.
When queried, my brain tells me that it's doing an expected-value calculation, and that the expected value of cryonics to me is too low to justify the costs; it's unlikely to succeed, and the only reason some people get a positive expected value for it is that they're multiplying that tiny probability by the huge, huge number that they place on the value of their lives. And my number doesn't feel big enough to outweigh those odds at that price.
Putting some numbers on that
If my brain thinks this is a matter of expected-value calculations, I ought to do one. With actual numbers, even if they're made-up, and actual multiplication.
So: my death feels bad, but not infinitely bad. Obvious thing to do: assign a monetary value. Through a variety of helpful thought experiments (how much would I pay to cure a fatal illness if I were the only person in the world with it, research wouldn't help anyone but me, and I could otherwise donate the money to EA charities; does the awesomeness of 3 million dewormings outweigh the suckiness of my death; is my death more or less sucky than the destruction of a high-end MRI machine), I've converged on a subjective value for my life of about $1 million. Like, give or take a lot.
Cryonics feels unlikely to work for me. I think the basic principle is sound, but if someone were to tell me that cryonics had been shown to work for a human, I would be surprised. That's not a number, though, so I took the final result of Steve Harris' calculations here (inspired by the Sagan-Drake equation). His optimistic number is a 0.15 chance of success, or about 1 in 7; his pessimistic number is 0.0023, or less than 1 in 400. My brain thinks 15% is too high and 0.23% sounds reasonable, but I'll use his numbers as upper and lower bounds.
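(To see how one method can spit out numbers that far apart: Sagan-Drake-style calculations multiply a chain of conditional probabilities, so modest differences in each factor compound. The factors below are invented placeholders purely for illustration, not Harris's actual inputs.)

```python
# Illustrative only: a Sagan-Drake-style chain of conditional probabilities.
# These factor values are made-up placeholders, not the ones Harris used; the
# point is just that several moderately different factors multiply out to
# wildly different totals.
from math import prod

optimistic = prod([0.8, 0.7, 0.6, 0.45])    # ~0.15
pessimistic = prod([0.3, 0.2, 0.15, 0.25])  # ~0.0023

print(f"optimistic ≈ {optimistic:.2f}, pessimistic ≈ {pessimistic:.4f}")
```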
I started out trying to calculate the expected cost by some convoluted method where I was going to estimate my chance of dying each year, repeatedly subtract it from one, and multiply by the amount I'd pay each year, to work out how much I could expect to pay in total. Benquo pointed out to me that calculations like this are usually done using perpetuities, or present-value (PV) calculations, so I made one in Excel and plugged in some numbers, approximating the Alcor annual membership fee as $600. Assuming my own discount rate is somewhere between 2% and 5%, I ran two calculations with those numbers. For a 2% discount rate, the total expected, time-discounted cost would be $30,000; for a 5% discount rate, $12,000.
Excel also lets you do the same calculation over a fixed term (an annuity rather than a perpetuity), so I plugged in 62 years, the time by which I'll have a 50% chance of dying according to this actuarial table. It didn't change the final results much: $11,417 for a 5% discount rate and $21,000 for the 2% discount rate.
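For concreteness, here's the same calculation outside Excel: a minimal Python sketch using the standard perpetuity and fixed-term annuity formulas, with the $600 fee and the discount rates above.

```python
# Present value of an ongoing $600/year membership fee, as a perpetuity and as
# a 62-year annuity, at 2% and 5% discount rates (the figures from the post).

fee = 600  # approximate Alcor annual membership fee

def pv_perpetuity(payment, rate):
    """Present value of a payment made every year, forever."""
    return payment / rate

def pv_annuity(payment, rate, years):
    """Present value of a payment made every year for a fixed number of years."""
    return payment * (1 - (1 + rate) ** -years) / rate

for rate in (0.02, 0.05):
    print(f"{rate:.0%}: perpetuity ≈ ${pv_perpetuity(fee, rate):,.0f}, "
          f"62-year annuity ≈ ${pv_annuity(fee, rate, 62):,.0f}")
# 2%: perpetuity ≈ $30,000, 62-year annuity ≈ $21,212
# 5%: perpetuity ≈ $12,000, 62-year annuity ≈ $11,417
```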
That's not including the life insurance payout you need to pay for the actual freezing. So, life insurance premiums. Benquo's plan is five years of $2,200 a year and then nothing from then on, which apparently isn't uncommon among plans for young, healthy people. I could probably get something as good or better; I'm younger. So, $11,000 in total life insurance premiums. If I went with a plan that required permanent annual payments, I could do a perpetuity calculation for that instead.
In short: around $40,000 total, rounding up.
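Making that addition explicit (a rough sketch; the total depends on which membership figure from above you use):

```python
# Rough total cost: life-insurance premiums plus the present value of membership fees.
premiums = 5 * 2200                                 # $11,000: five years at $2,200/year
membership_low, membership_high = 11_417, 30_000    # 5% annuity vs. 2% perpetuity

print(premiums + membership_low)    # 22417
print(premiums + membership_high)   # 41000 -> in the ballpark of ~$40,000
```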
What's my final number?
There are two numbers I can output. When I started this article, one of them seemed like the obvious end product, so I calculated that. When I went back to finish this article days later, I walked through all the calculations again while writing the actual paragraphs, did what seemed obvious, ended up with a different number, and realized I'd calculated a different thing. So I'm not sure which one is right, although I suspect they're symmetrical.
If I multiply the value of my life by the success chance of cryonics, I get a number that represents (I think) the monetary value of cryonics to me, given my factual beliefs and values. It would go up if the value of my life to me went up, or if the chances of cryonics succeeding went up. I can compare it directly to the actual cost of cryonics.
I take $1 million and plug in either 0.15 or 0.0023, and I get $150,000 as an upper bound and $2,300 as a lower bound, to compare to a total cost somewhere in the ballpark of $40,000.
If I take the price of cryonics and divide it by the chance of success (because if I sign up, I'm optimistically paying for 100 worlds, in 15 of which I survive, or pessimistically for 10,000 worlds, in 23 of which I survive), I get the expected cost per life saved (mine), which I can compare to the figure I place on the value of my life. It goes down if the cost of cryonics goes down or the chances of success go up.
I plug in my numbers and get a lower bound of $267,000 and an upper bound of about $17 million.
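Both framings are a single multiplication or division; here's a minimal sketch with the numbers above, just to show they're the same comparison rearranged:

```python
# The two framings, using the post's numbers.
value_of_life = 1_000_000                     # subjective value, give or take a lot
total_cost = 40_000                           # rough lifetime cost of signing up
p_optimistic, p_pessimistic = 0.15, 0.0023    # Harris's bounds on success

# Framing 1: expected value of cryonics to me, to compare against its cost.
print(value_of_life * p_optimistic, value_of_life * p_pessimistic)   # ≈ 150,000 and ≈ 2,300

# Framing 2: expected cost per life actually saved, to compare against the value of my life.
print(total_cost / p_optimistic, total_cost / p_pessimistic)         # ≈ 267,000 and ≈ 17.4 million
```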
In both those cases, the optimistic success estimates make it seem worthwhile and the pessimistic success estimates don't, and my personal estimate of cryonics succeeding falls closer to pessimism. But it's close. It's a lot closer than I thought it would be.
Updating somewhat in favour of the hypothesis that I'll end up signed up for cryonics.
Fine-tuning and next steps
I could get better numbers for the value of my life to me. It's kind of squicky to think about, but that's a bad reason not to do it. I could ask other people about their numbers and compare what they're accomplishing in their lives to my own life. I could do more thought experiments to better acquaint my brain with how much value $1 million actually is, because scope insensitivity. I could put upper and lower bounds on it.
I could use the cost of organizations cheaper than Alcor as a lower bound; the info is all here and the calculation wouldn't be too nasty, but I have work in 7 hours and need to get to bed.
I could do my own version of the cryonics success equation, plugging in my own estimates. (Although I suspect my numbers would be less informed and less valuable than the ones already there.)
I could ask what other people think. Thus, this post.
Ah, ok. I'm going to have to double-reply here, and my answer should be taken as a personal perspective. This is actually an issue I've been thinking about and discussing with an FHI guy, and I'd like to hear any thoughts someone might have.
Basically, we want to extract a coherent set of terminal goals from human beings. So far, the problem has been approached from two angles:
1) Neuroscience/neuroethics/neuroeconomics: look at how the human brain actually makes choices, and attempt to describe where and how in the brain terminal values are rooted. See: Paul Christiano's "indirect normativity" write-up.
2) Pure ethics: there are lots of impulses in the brain that feed into choice, so instead of just picking one of those, let's sit down and do the moral philosophy on how to "think out" our terminal values. See: CEV, "reflective equilibrium", "what we want to want", concepts like that.
My personal opinion is that we also need to add:
3) Population ethics: given the ability to extract values from one human, we now need to sample lots of humans and come up with an ethically sound way of combining the resulting goal functions ("where our wishes cohere rather than interfere", blah blah blah) to make an optimization metric that works for everyone, even if it's not quite maximally perfect for every single individual. That is, Shlomo might prefer that everyone be Jewish, Abed might prefer that everyone be Muslim, and John likes being secular just fine, but the combined and extrapolated goal function doesn't perform mandatory religious conversions on anyone.
Now! Here's where we get to the part where we avoid fucking things up! At least in my opinion, and as a proposal I've put forth myself: if we really have an accurate model of human morality, then we should be able to implement the value-extraction process on some experimental subjects, predictively generate a course of action through our model behind closed doors, run an experiment on serious moral decision-making, and then find afterwards that (without having seen the generated proposals beforehand) our subjects' real decisions either match the predicted ones, or our subjects endorse the predicted ones.
That is, ideally, we should be able to test our notion of how to epistemically describe morality before we ever make that epistemic procedure or its outputs the goal metric for a Really Powerful Optimization Process. Short of things like bugs in the code or cosmic rays, we would thus (assuming we have time to carry out all the research before $YOUR_GEOPOLITICAL_ENEMY unleashes a paper-clipper For the Evulz) have a good idea what's going to happen before we take a serious risk.
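As a toy sketch of that test loop (every name here is a hypothetical placeholder; nothing like these functions exists):

```python
# Hypothetical sketch of the proposed validation loop: extract values from some
# subjects, predict their moral decisions from the extracted model behind closed
# doors, then check the predictions against what the subjects actually decide or
# endorse. All of these functions are stand-ins supplied by the experimenter.

def validate_value_model(subjects, dilemmas, extract_values, predict_decision,
                         observe_decision, endorses):
    extracted = {s: extract_values(s) for s in subjects}  # done before the experiment
    hits, total = 0, 0
    for s in subjects:
        for d in dilemmas:
            predicted = predict_decision(extracted[s], d)
            actual = observe_decision(s, d)                # the real decision
            # Success if the prediction matches, or the subject endorses the
            # predicted decision on reflection.
            if predicted == actual or endorses(s, predicted, d):
                hits += 1
            total += 1
    return hits / total  # how well the model anticipates real moral choices
```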
So, if I've understood your proposal, we could summarize it as:
Step 1: we run the value-extractor (seed AI, whatever) on group G and get V.
Step 2: we run a simulation of using V as the target for our optimizer.
Step 3: we show the detailed log of that simulation to G, and/or we ask G various questions about their preferences and see whether their answers match the simulation.
Step 4: based on the results of step 3, we decide whether to actually run our optimizer on V.
Have I basically understood you?
If so, I have two points, one simple and boring, one more...