ciphergoth comments on Normal Cryonics - Less Wrong
Here's a simple metric to demonstrate why alternatives to cryonics could be preferred:
Suppose we calculate the overall value of living as the quantity of life multiplied by the quality of life. For lack of a better metric, we can rate our quality of life on a scale from 1 to 100. Thus one really good year (quality = 100) is worth as much as 100 really bad years (quality = 1). If you think quality of life matters more, you can use a larger scale, like 1 to 1000. But for our purposes, let's use a scale of 1 to 100.
Some transhumanists have calculated that your life expectancy without aging is about 1,300 years (because there's still an annual probability that you will die from an accident, homicide, etc.). Conservatively, let's assume that if cryonics and revivification are successful, you can expect to live for another 1,000 years, and that, knowing nothing else about the future, your quality of life will be ~50. Thus your total life-index points gained is 1,000 * 50 = 50,000. But suppose that the probability that cryonics/revivification will be successful is 1 in 10,000, or .0001. Thus the expected utility gained is .0001 * 50,000 = 5 points.
It will cost you $300/year for the rest of your life to gain those expected 5 points. But suppose you could instead spend that $300 a year on something that is 80% likely to increase your quality of life by 5 points a year (a mere 5% of the scale) for the rest of your life (let's say another 50 years). There are all kinds of things that could do that: vacations, games, lovers, whatever. That's .80 * 5 * 50 = 200 expected utility points.
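The two expected-utility calculations above can be sketched in a few lines of Python. All figures (revival probability, years gained, quality points) are the comment's own illustrative assumptions, not established estimates; note that .0001 * 1,000 * 50 works out to 5 expected points, not 50:

```python
def expected_utility(probability, years, quality_per_year):
    """Expected life-index points: P(success) * years * quality gained per year."""
    return probability * years * quality_per_year

# Cryonics: 1-in-10,000 chance of 1,000 extra years at quality ~50.
cryonics = expected_utility(0.0001, 1000, 50)   # 0.0001 * 50,000 = 5 points

# Alternative: 80% chance of +5 quality/year for the next 50 years.
alternative = expected_utility(0.80, 50, 5)     # 0.80 * 250 = 200 points

print(f"cryonics: {cryonics:.0f}, alternative: {alternative:.0f}")
```

On these (contestable) inputs the here-and-now spending dominates by a factor of 40, which is the comment's whole argument; the later replies dispute the inputs, not the arithmetic.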
You're better off spending your money on things that are highly likely to increase your quality of life here and now than on things that are highly unlikely or unknown to increase your quantity and quality of life in the future.
I think this hugely underestimates both the probability and utility of reanimation. If I am revived, I expect to live for billions of years, and to eventually know a quality of life that would be off the end of any scale we can imagine.
I can't deny that cryonics would strike me as an excellent deal if I believed that, but that belief seems wildly optimistic.
This seems an odd response. I'd understand a response that said "why on Earth do you anticipate that?" or one that said "I think I know why you anticipate that, here are some arguments against...". But "wildly optimistic" seems to me to make the mistake of offering "a literary criticism, not a scientific one" - as if we knew more about how optimistic a future to expect than what sort of future to expect. These must come in the other order - we must first think about what we anticipate, and our level of optimism must flow from that.
Not always - minds with the right preference produce surprising outcomes that couldn't be anticipated, of more or less anticipated good quality. (Expected Creative Surprises)
But that property is not limited to outcomes of good quality, correct?
Agreed - but that caveat doesn't apply in this instance, does it?
It does apply; the argument you attacked is wrong for a different reason. Amusingly, I see your original comment and the follow-up arguments about the incorrectness of the previous arguments as all wrong (under assumptions that are not widely accepted, though). Let's break it down:
(1) "If I am revived, I expect to live for billions of years"
(2) "That seems wildly optimistic"
(3) "We must first think about what we anticipate, and our level of optimism must flow from that"
(3) is wrong because the general pattern of reasoning from how good the postulated outcome is to its plausibility is valid. (2) is wrong because it's not in fact too optimistic, quite the opposite. And (1) is wrong because it's not optimistic enough. If your concepts haven't broken down when the world is optimized for a magical concept of preference, it's not optimized strongly enough. "Revival" and "quality of life" are status quo natural categories which are unlikely to survive strong optimization according to the whole of human preference in a recognizable form.
Do you think that if someone frozen in the near future is revived, that's likely to happen after a friendly-AI singularity has occurred? If so, what's your reasoning for that assumption?
Sure, I'm talking about heuristics. I don't think that's a mistake, though, in an instance with so many unknowns. I agree that my comment above is not a counter-argument, per se; I'm just explaining why your statement goes over my head.
Since you prefer specificity: Why on Earth do you anticipate that?