PhilGoetz comments on Abnormal Cryonics - Less Wrong

Post author: Will_Newsome 26 May 2010 07:43AM

Comment author: PhilGoetz 26 May 2010 08:06:40PM 1 point

I don't think so - the points in the post stand regardless of the probability Will assigns. Bringing up other beliefs of Will is an ad hominem argument. Ad hominem is a pretty good argument in the absence of other evidence, but we don't need to go there today.

Comment deleted 26 May 2010 11:54:02PM
Comment author: Will_Newsome 27 May 2010 12:12:41AM 3 points

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Once again, my probability estimate was for myself. There are important subjective considerations, such as age and definition of identity, and important sub-disagreements to be navigated, such as AI takeoff speed or likelihood of Friendliness. If I were 65 years old, and not 18 like I am, and cared a lot about a very specific me living far into the future, which I don't, and believed that a singularity was in the distant future, instead of the near-mid future as I actually believe, then signing up for cryonics would look a lot more appealing, and might be the obviously rational decision to make.

Comment deleted 27 May 2010 10:53:27AM
Comment author: Will_Newsome 27 May 2010 09:54:22PM 1 point

What?! Roko, did you seriously not see the two points I had directly after the one about age? Especially the second one?! How is my lack of a strong preference to stay alive into the distant future a false preference? Because it's not a false belief.

Comment deleted 27 May 2010 10:04:30PM
Comment author: Will_Newsome 27 May 2010 10:11:31PM 0 points

Okay. Like I said, the one in a million thing is for myself. I think that most people, upon reflection (but not so much reflection as something like CEV requires), really would like to live far into the future, and thus should have probabilities much higher than 1 in a million.

Comment deleted 27 May 2010 10:24:15PM
Comment author: Will_Newsome 27 May 2010 10:33:27PM 0 points

We were talking about the probability of getting 'saved', and 'saved' to me requires that the future is suited such that I will upon reflection be thankful that I was revived instead of those resources being used for something else I would have liked to happen. In the vast majority of post-singularity worlds I do not think this will be the case. In fact, in the vast majority of post-singularity worlds, I think cryonics becomes plain irrelevant. And hence my sorta-extreme views on the subject.

I tried to make it clear in my post and when talking to both you and Vladimir Nesov that I prefer talking about 'probability that I will get enough utility to justify cryonics upon reflection' instead of 'probability that cryonics will result in revival, independent of whether or not that will be considered a good thing upon reflection'. That's why I put in the abnormally important footnote.

Comment deleted 27 May 2010 10:37:09PM
Comment author: Vladimir_Nesov 27 May 2010 09:40:50AM 0 points

There are important subjective considerations, such as age and definition of identity,

Nope, "definition of identity" doesn't influence what actually happens as a result of your decision, and thus doesn't influence how good what happens will be.

You are not really trying to figure out "How likely am I to survive as a result of signing up?" - that's just an instrumental question that is supposed to be helpful; you are trying to figure out which decision you should make.

Comment author: Will_Newsome 27 May 2010 10:02:16PM 0 points

Nope, "definition of identity" doesn't influence what actually happens as a result of your decision, and thus doesn't influence how good what happens will be.

Simply wrong. I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory. Did I misunderstand you?

EDIT: Ah, I think I know what happened: Roko and I were talking about the probability of me being 'saved' by cryonics in the thread he linked to, but perhaps you missed that. Let me copy/paste something I said from this thread: "I tried to make it clear in my post and when talking to both you and Vladimir Nesov that I prefer talking about 'probability that I will get enough utility to justify cryonics upon reflection' instead of 'probability that cryonics will result in revival, independent of whether or not that will be considered a good thing upon reflection'. That's why I put in the abnormally important footnote." I don't think I emphasized this enough. My apologies. (I feel silly, because without this distinction you've probably been thinking I've been committing the mind projection fallacy this whole time, and I didn't notice.)

You are not really trying to figure out "How likely am I to survive as a result of signing up?" - that's just an instrumental question that is supposed to be helpful; you are trying to figure out which decision you should make.

Not sure I'm parsing this right. Yes, I am determining what decision I should make. The instrumental question is a part of that, but it is not the only consideration.

Comment author: Vladimir_Nesov 27 May 2010 11:17:28PM 0 points

I can assign positive utility to whatever interpretation of an event I please. If the map changes, the utility changes, even if the territory stays the same. Preferences are not in the territory. Did I misunderstand you?

You haven't misunderstood me, but you need to pay attention to this question, because it's more or less a consensus on Less Wrong that your position expressed in the above quote is wrong. You should maybe ask around for clarification of this point, if you don't get a change of mind from discussion with me.

You may try the metaethics sequence, and also/in particular these posts:

That a preference is computed in the mind doesn't make it any less a part of the territory than anything else. This is just a piece of territory that happens to be currently located in human minds. (Well, not quite, but to a first approximation.)

Your map may easily change even if the territory stays the same. This changes your belief, but this change doesn't influence what's true about the territory. Likewise, your estimate of how good situation X is may change, once you process new arguments or change your understanding of the situation, for example by observing new data, but that change of your belief doesn't influence how good X actually is. Morality is not a matter of interpretation.

Comment author: Will_Newsome 27 May 2010 11:41:14PM 0 points

Before I spend a lot of effort trying to figure out where I went wrong (which I'm completely willing to do, because I read all of those posts and the metaethics sequence and figured I understood them), can you confirm that you read my EDIT above, and that the misunderstanding addressed there does not encompass the problem?

Comment author: Vladimir_Nesov 27 May 2010 11:52:56PM 0 points

Now I have read the edit, but it doesn't seem to address the problem. Also, I don't see what you can use the concepts you bring up for, like "probability that I will get enough utility to justify cryonics upon reflection". If you expect to believe something, you should just believe it right away. See Conservation of expected evidence. But then, "probability this decision is right" is not something you can use for making the decision, not directly.

Comment author: Nick_Tarleton 28 May 2010 04:36:28AM 0 points

Also, I don't see what you can use the concepts you bring up for, like "probability that I will get enough utility to justify cryonics upon reflection".

This might not be the most useful concept, true, but the issue at hand is the meta-level one of people's possible overconfidence about it.

Comment author: Vladimir_Nesov 28 May 2010 11:51:01AM 2 points

"Probability of signing up being good", especially obfuscated with "justified upon infinite reflection", being subtly similar to "probability of the decision to sign up being correct", is too much of a ruse to use without very careful elaboration. A decision can be absolutely, 99.999999% correct, while the probability of it being good remains at 1%, both known to the decider.
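The last sentence can be made concrete with a toy example (the numbers here are invented purely for illustration, not taken from the thread): a cheap gamble with a large asymmetric payoff is correct to take even though the probability of the good outcome is known in advance to be low.

```python
# A decision that is clearly correct (positive expected utility) while the
# probability of a good outcome stays at 1% - both facts known to the
# decider in advance. All numbers are illustrative assumptions.

p_good = 0.01         # probability the outcome turns out good
payoff_good = 1000.0  # utility gained if it does
cost = 1.0            # utility sacrificed either way

expected_utility = p_good * payoff_good - cost  # comes out clearly positive
print(p_good)            # probability of the good outcome: 1%
print(expected_utility)  # positive, so taking the gamble is correct
```

Conflating "probability the decision is correct" with "probability the outcome is good" would make this gamble look like a near-certain mistake, which is the conflation Vladimir_Nesov is pointing at.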

Comment author: Will_Newsome 28 May 2010 12:11:14AM 0 points

So you read footnote 2 of the post and do not think it is a relevant and necessary distinction? And you read Steven's comment in the other thread where it seems he dissolved our disagreement and determined we were talking about different things?

I know about the conservation of expected evidence. I understand and have demonstrated understanding of the content in the various links you've given me. I really doubt I've been making the obvious errors you accuse me of for the many months I've been conversing with people at SIAI (and at Less Wrong meetups and at the decision theory workshop) without anyone noticing.

Here's a basic summary of what you seem to think I'm confused about: There is a broad concept of identity in my head. Given this concept of identity I do not want to sign up for cryonics. If this concept of identity changed such that the set of computations I identified with became smaller, then cryonics would become more appealing. I am talking about the probability of expected utility, not the probability of an event. The first is in the map (even if the map is in the territory, which I realize, of course), the second is in the territory.

EDIT: I am treating considerations about identity as a preference: whether or not I should identify with any set of computations is my choice, but subject to change. I think that might be where we disagree: you think everybody will eventually agree what identity is, and that it will be considered a fact about which we can assign different probabilities, but not something subjectively determined.

Comment author: Vladimir_Nesov 28 May 2010 12:25:26AM 1 point

I am treating considerations about identity as a preference: whether or not I should identify with any set of computations is my choice, but subject to change. I think that might be where we disagree: you think everybody will eventually agree what identity is, and that it will be considered a fact about which we can assign different probabilities, but not something subjectively determined.

That the preference is yours and yours alone, without any community to share it, doesn't make its content any less of a fact than if you'd had a whole humanity of identical people to back it up. (This identity/probability discussion is tangential to the more focused question of the correctness of the choice.)

Comment author: Vladimir_Nesov 28 May 2010 12:20:19AM 0 points

The easiest step is for you to look over the last two paragraphs of this comment and see if you agree with that. (Agree/disagree in what sense, if you suspect essential interpretational ambiguity.)

I don't know why you brought up the concept of identity (or indeed cryonics) in the above, it wasn't part of this particular discussion.

Comment author: Will_Newsome 27 May 2010 01:04:38AM 2 points

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Similar to what I think JoshuaZ was getting at, signing up for cryonics is a decently cheap signal of your rationality and willingness to take weird ideas seriously, and it's especially cheap for young people like me who might never take advantage of the 'real' use of cryonics.

Comment author: JoshuaZ 27 May 2010 12:05:38AM 1 point

Really? Even if you buy into Will's estimate, there are at least three arguments that are not weak:

1) The expected utility argument (I presented arguments above for why this fails, but it isn't completely clear that those rebuttals are valid)

2) One might think that buying into cryonics helps force people (including oneself) to think about the future in a way that produces positive utility.

3) One gets a positive utility from the hope that one might survive using cryonics.

Note that all three of these are fairly standard pro-cryonics arguments that remain valid even with the low probability estimate made by Will.
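JoshuaZ's first argument can be sketched numerically. This is only an illustration with made-up figures (the probability is Will's stated estimate; the cost and utility values are hypothetical, not from the thread): at p = 1 in a million, whether the expected-utility argument goes through depends almost entirely on how large a utility one assigns to revival.

```python
# Expected utility of signing up for cryonics: p * U(revival) - cost.
# All numbers besides p are illustrative assumptions, not from the thread.

def expected_value(p_saved, utility_if_saved, cost):
    """Expected utility of signing up, in arbitrary utility units."""
    return p_saved * utility_if_saved - cost

p = 1e-6              # Will's personal estimate: 1 in a million
cost = 3e4            # hypothetical lifetime cost of membership and insurance
modest_payoff = 1e7   # revival valued like an ordinary long life
huge_payoff = 1e12    # revival valued astronomically

print(expected_value(p, modest_payoff, cost))  # negative: argument fails
print(expected_value(p, huge_payoff, cost))    # positive: argument goes through
```

This is why the rebuttals JoshuaZ alludes to matter: they typically contest whether assigning such enormous utilities is legitimate, not the value of p itself.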

Comment deleted 27 May 2010 10:55:55AM
Comment author: JoshuaZ 27 May 2010 02:17:47PM 0 points

none of those hold for p = 1 in a million.

That really depends a lot on the expected utility. Moreover, argument 2 above (getting people to think about long-term prospects) has little connection to the value of p.

Comment deleted 27 May 2010 03:03:17PM
Comment author: JoshuaZ 27 May 2010 03:09:04PM 0 points

Even a small chance that you will be there helps put people in the mind-set to think long-term.

Comment author: timtyler 30 May 2010 11:54:08AM 0 points

Re: "whether it is plausible to rationally reject it"

Of course people can plausibly rationally reject cryonics!

Surely nobody has been silly enough to argue that cryonics makes good financial sense - irrespective of your goals and circumstances.

Comment deleted 30 May 2010 02:16:27PM
Comment author: timtyler 30 May 2010 02:58:55PM -1 points

In biology, individual self-preservation is an emergent subsidiary goal - what is really important is genetic self-preservation.

Organisms face a constant trade-off - whether to use resources now to reproduce, or whether to invest them in self-perpetuation - in the hope of finding a better chance to reproduce in the future.

Calorie restriction and cryonics are examples of this second option - sacrificing current potential for the sake of possible future gains.

Comment author: Morendil 30 May 2010 03:11:09PM 3 points

Organisms face a constant trade-off - whether to use resources now to reproduce, or whether to invest them in self-perpetuation - in the hope of finding a better chance to reproduce in the future.

Evolution faces this trade-off. Individual organisms are just stuck with trade-offs already made, and (if they happen to be endowed with explicit motivations) may be motivated by something quite other than "a better chance to reproduce in the future".

Comment author: timtyler 30 May 2010 03:24:20PM -1 points

Organisms choose - e.g. they choose whether to do calorie restriction - which diverts resources from reproductive programs to maintenance ones. They choose whether to divert resources in the direction of cryonics companies as well.

Comment author: Morendil 30 May 2010 03:39:33PM 0 points

I'm not disputing that organisms choose. I'm disputing that organisms necessarily have reproductive programs. (You can only face a trade-off between two goals if you value both goals to start with.) Some organisms may value self-preservation, and value reproduction not at all (or only insofar as they view it as a form of self-preservation).

Comment author: timtyler 30 May 2010 03:42:46PM 0 points

Not all organisms choose - for example, some have strategies hard-wired into them - and others are broken.