Vladimir_Nesov comments on The Threat of Cryonics - Less Wrong

Post author: lsparrish, 03 August 2010 07:57PM, 36 points

Comment author: TobyBartels 03 August 2010 10:37:58PM 22 points

Until very recently, I tended to think of cryonics as something nutty and tacitly assumed that cryonics organisations were a little shady. These weren't strong beliefs, and I knew that I had no real basis for them, so I would never have tried to argue them to others, but they were my impressions. I blame the anti-cult and anti-scam heuristics identified in the comments by Jonathan Graehl and Pavitra.

Now that I've come here and seen all of you rational people into cryonics, I've looked at the references here and realised that my impressions were wrong. So cryonics is not terribly expensive and might well work; how interesting! And yet, I have no desire to sign up myself.

Why not? I believe that the reason is that, to spout a cliché, I've come to terms with death. There was a time when I found it very attractive to believe religious ideas promising immortality, but once I abandoned those as irrational, I faced the realisation that I was going to die permanently some day. That worried me for a while, but then I got used to it; I no longer desired to live forever. I didn't even desire to live longer than about a century.

And since I no longer desire to live so long, I have no desire to sign up for cryonics. If I hadn't been so ignorant about cryonics when I abandoned my religious hopes for immortality, then I might well have held onto that desire. So arguably, the only reason that I don't want to live into the 4th Millennium is that I was wrong about something in the past. Nevertheless, it's still true that I don't particularly want to live into the 4th Millennium. So I'm glad that cryonics is reasonable, and I'm glad that the people on this site who want it are signing up for it, but it's not something that interests me.

This must be an example of a much broader theme. One wants X but comes to the belief that X is impossible. Then one stops wanting X, which is probably a healthy response when X really is impossible. When it turns out that X is possible after all, one still does not want X.

Anyway, somebody who has gone through this process might see cryonics as threatening because it seems to attack their own rationality. It doesn't bother me, because I know that ultimate values don't have to be justified; I don't want to live forever, and you do, and that's fine on both ends. But for someone who wants to believe that their ultimate values are objectively correct, and perhaps also for someone who still wants deep down to live forever but has been suppressing this, learning that something is possible after all can be threatening.

Comment author: Vladimir_Nesov 04 August 2010 07:53:38AM 5 points

See http://wiki.lesswrong.com/wiki/Shut_up_and_multiply

Remember that you can be horribly wrong about what you want.

Comment author: TobyBartels 04 August 2010 09:43:06PM 1 point

There's no way to calculate ultimate values. It's not that I don't want to live forever because I think that this would have a negative effect. It's just not something that I want.

You could argue that living forever would further other values of mine. In some ways, this is probably true. In the case of being restored from cryonics, however, I doubt it. I would have even less influence on that future world than I have on this one, and none of the people that I care about are likely to be there.

I am rather ambivalent about living forever. Maybe someday cryonics will be cheap and easy, something that everybody does (or at least as cheap and easy as things that everybody does, even if most people still don't do it, perhaps for irrational reasons). My best guess is that I would sign up for it in that case. So if somebody else wants to sign me up for cryonics, handle the paperwork, and make the payments, then I don't mind, but I wouldn't bother for my own sake.

Comment author: Vladimir_Nesov 04 August 2010 11:32:55PM 7 points

There's no way to calculate ultimate values.

If there is a fact of the matter about what you should do, if there are moral arguments that could change your mind about what you believe you should do, then there is a good chance that your current beliefs about what you should do are wrong in some way. Given that chance, you must decide under uncertainty: do an expected utility calculation, taking into account what value a given action would have under each set of values you might possibly hold. You are not allowed to ignore the value uncertainty. Maybe the leading hypothesis is that you don't want to live much longer than normal, but the other hypotheses cry out for attention; they are not absent, and given the moral strength behind their claims you can't pretend they are not there. Maybe they lose the decision, but they still contribute to its expected moral value, which is therefore not indifference.

Comment author: TobyBartels 05 August 2010 04:09:12AM 1 point

If there is a fact of the matter about what you should do

But of course there isn't.

What you've said is fine theoretically, but it's meaningless unless I have some ultimate values that drive everything else. (You saw some of them on the introductions thread.) I do not value long life for its own sake, and I do not see how being cryogenically revived will further any of the values that I actually hold. In particular, I don't see how it will enhance the freedom of other people, and I don't see how it's at all relevant to any of my personal goals, which will all be obsolete.

If you have a suggestion as to why it would be a good thing for me to sign up for cryonics, that maybe the future will be in dire need of people with my old-fashioned ideas and historical knowledge, or that you have fallen in love with me over the Internet and will miss me when I'm gone, or anything more reasonable that I can't think of myself, then please say it. Otherwise I have no reason to do it.

Comment author: Vladimir_Nesov 05 August 2010 09:14:14AM 3 points

If there is a fact of the matter about what you should do

But of course there isn't.

That you shouldn't care about living for a very long time is exactly a claim about this fact. If there is no fact of the matter about what you should do, then you can't claim that the thing you should do is not to care.

If you have a suggestion as to why it would be a good thing for me to sign up for cryonics

As is often the case, I am arguing against a faulty argument, not against its conclusion.

Comment author: TobyBartels 05 August 2010 01:26:42PM 3 points

That you shouldn't care about living for a very long time is exactly a claim about this fact.

Where have I made that claim? It's enough that I don't care about living for a very long time.

Comment author: Vladimir_Nesov 05 August 2010 01:31:36PM 0 points

It's enough that I don't care about living for a very long time.

And if I say that you do, what is the criterion for telling which statement is the correct one? That criterion is what I referred to as the fact of the matter about what you (should) care about. And if there is a fact, there is a possibility of being wrong about it.

Unless by "not caring about X" you mean, by definition, that statements like "I don't care about X" are being pronounced, or that certain chemicals are being released in your brain, you'll have to settle for not having absolutely privileged knowledge of what you actually care about.

Comment author: randallsquared 09 August 2010 03:16:32AM 1 point

If we can agree that whether someone cares about X is an empirically discoverable fact, then there seem to be two currently available methods of discovering it: introspection and observing their actions ("revealed preference").

There is no amount of evidence about facts external to a person that could possibly bear on whether they care about X. You might change whether they care about X by presenting external input, but that's a rather different thing.

Comment author: TobyBartels 05 August 2010 06:32:41PM 0 points

what I referred to as the fact of the matter about what you (should) care about

You mentioned the fact of the matter of what I should do. I would hold the fact of the matter of what I should care about in the same contempt. As for the fact of the matter of what I care about, you don't know what you're talking about.

The only reason why I'm replying at all is that this is a site dedicated to cognitive biases, and maybe you will cite an interesting post here about how I might be horribly wrong about what I care about. Of course I could be horribly wrong about my intermediate values, but the calculation is not coming out that way.

Comment author: NihilCredo 06 August 2010 12:26:08AM 1 point

I think an issue here may be that your statement of caring or not caring about something doesn't carry much weight when not only are you personally unfamiliar with X, but so is everyone else who has ever lived.

You can truthfully state that you aren't terribly attracted by what you imagine living a second life in the far future would be like; but your mental picture of it is likely to bear very, very little resemblance to what a potential second life will actually feel like. Since cryonics offers a small chance of giving you an actual future life, you should evaluate it on that basis, and therefore pay little attention to what your hopelessly flawed imagination suggests.

It's like stating that you care or don't care for a particular hallucinogen without ever having tried it, or any substance similar to it, or having read reports by people who actually have tried it. You don't have a sufficient basis to make your model of it, the model about which you state your claim of caring or not caring, at all meaningful.

Comment author: thomblake 05 August 2010 03:33:59PM 0 points

It's enough that I don't care about living for a very long time.

And if I say that you do, what is the criterion for telling which statement is the correct one? That criterion is what I referred to as the fact of the matter about what you (should) care about.

This seems wrong to me. That Toby should X does not imply that Toby does X, so determining what Toby should want does not settle whether Toby in fact wants it.

you can't claim that the thing you should do is to not care.

Toby does not seem to be making that claim, though perhaps implicitly so. (Much like it could be argued that "X" implies "I believe that X", it could be argued that "I did X" implies "I should have done X". But that fails on common usage, where "I did X but I should not have done X" is ordinary.)