Roko comments on Abnormal Cryonics - Less Wrong

56 Post author: Will_Newsome 26 May 2010 07:43AM

Comment author: Will_Newsome 27 May 2010 12:12:41AM *  3 points [-]

If Will's probability is correct, then I fail to see how his post makes sense: it wouldn't make sense for anyone to pay for cryo.

Once again, my probability estimate was for myself. There are important subjective considerations, such as age and definition of identity, and important sub-disagreements to be navigated, such as AI takeoff speed or the likelihood of Friendliness. If I were 65 years old, and not 18 like I am, and cared a lot about a very specific me living far into the future, which I don't, and believed that a singularity was in the distant future, instead of the near-to-mid-term future as I actually believe, then signing up for cryonics would look a lot more appealing, and might be the obviously rational decision to make.

Comment deleted 27 May 2010 10:53:27AM *  [-]
Comment author: Will_Newsome 27 May 2010 09:54:22PM 1 point [-]

What?! Roko, did you seriously not see the two points I had directly after the one about age? Especially the second one?! How is my lack of a strong preference to stay alive into the distant future a false preference? Because it's not a false belief.

Comment deleted 27 May 2010 10:04:30PM *  [-]
Comment author: Will_Newsome 27 May 2010 10:11:31PM 0 points [-]

Okay. Like I said, the one in a million thing is for myself. I think that most people, upon reflection (but not so much reflection as something like CEV requires), really would like to live far into the future, and thus should have probabilities much higher than 1 in a million.

Comment deleted 27 May 2010 10:24:15PM *  [-]
Comment author: Will_Newsome 27 May 2010 10:33:27PM 0 points [-]

We were talking about the probability of getting 'saved', and 'saved' to me requires that the future is suited such that I will upon reflection be thankful that I was revived instead of those resources being used for something else I would have liked to happen. In the vast majority of post-singularity worlds I do not think this will be the case. In fact, in the vast majority of post-singularity worlds, I think cryonics becomes plain irrelevant. And hence my sorta-extreme views on the subject.

I tried to make it clear in my post and when talking to both you and Vladimir Nesov that I prefer talking about 'probability that I will get enough utility to justify cryonics upon reflection' instead of 'probability that cryonics will result in revival, independent of whether or not that will be considered a good thing upon reflection'. That's why I put in the abnormally important footnote.
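
To make that distinction concrete, here is a minimal sketch (the numbers and variable names are invented purely to illustrate the decomposition, not actual estimates):

    # Invented numbers, purely to illustrate the decomposition; not actual estimates.
    p_revival = 0.05                    # P(cryonics results in revival at all)
    p_endorsed_given_revival = 0.00002  # P(I judge the revival worth it upon reflection | revival)

    # The footnote's quantity: revival that turns out to be worth it upon
    # reflection, not revival per se.
    p_justified = p_revival * p_endorsed_given_revival
    print(p_justified)  # roughly 1e-06 for these made-up inputs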

Comment deleted 27 May 2010 10:37:09PM *  [-]
Comment author: Will_Newsome 27 May 2010 11:27:40PM *  0 points [-]

Cool, I'm glad to be talking about the same thing now! (I guess any sort of misunderstanding/argument causes me a decent amount of cognitive burden that I don't realize is there until after it is removed. Maybe a fear of missing an important point that I will be embarrassed about having ignored upon reflection. I wonder if Steve Rayhawk experiences similar feelings on a normal basis?)

Well, here's a really simple, mostly qualitative analysis, with the hope that "Will" and "Roko" are totally interchangeable.

Option 1: Will signs up for cryonics.

  • uFAI is developed before Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

  • uFAI is developed after Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

  • FAI is developed before Will is cryopreserved. Signing up for cryonics never gets a chance to work for Will specifically.

  • FAI is developed after Will is cryopreserved. Cryonics might work, depending on the implementation and results of things like CEV. This is a huge question mark for me. Something close to 50% is probably appropriate, but at times I have been known to say something closer to 5%, based on considerations like 'An FAI is not going to waste resources reviving you: rather, it will spend resources on fulfilling what it expects your preferences probably were. If your preferences mandate your being alive, then it will do so, but I suspect that most humans, upon much reflection and moral evolution, won't care as much about their specific existence.' Anna Salamon and, I think, Eliezer suspect that personal identity is closer to the core of human-ness than, e.g., Steve Rayhawk and I do, for what it's worth.

  • An existential risk occurs before Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

  • An existential risk occurs after Will is cryopreserved. Signing up for cryonics doesn't work, but this possibility has no significance in our decision theory anyway.

Option 2: Will does not sign up for cryonics.

  • uFAI is developed before Will dies. This situation is irrelevant to our decision theory.

  • uFAI is developed after Will dies. This situation is irrelevant to our decision theory.

  • FAI is developed before Will dies. This situation is irrelevant to our decision theory.

  • FAI is developed after Will dies. Because Will was not cryopreserved, the FAI does not revive him in the typical sense. However, perhaps it can faithfully restore Will's brain-state from recordings of Will in the minds of humanity anyway, if that's what humanity would want. Alternatively, Will is revived in ancestor simulations run by the FAI or any other FAI that is curious about humanity's history around the time right before its singularity. Measure is really important here, so I'm confused. I suspect the probability is lower than the 50% figure above, but not by orders of magnitude? This is an important point.

  • An existential risk occurs and Will dies. This possibility has no significance in our decision theory anyway.

Basically, the point is that the most important factor by far is what an FAI does after going FOOM, and we don't really know what's going to happen there. So cryonics becomes a matter of preference more than a matter of probability. But if you're thinking about worlds that our decision theory discounts, e.g. where uFAI or rogue MNT (molecular nanotechnology) is developed, then the probability of being revived drops a lot.
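
As a back-of-the-envelope illustration of the comparison above, here is a minimal Python sketch (all branch probabilities are invented for illustration; only the FAI branches contribute, since the uFAI and existential-risk branches are discounted by the decision theory as described):

    # Invented, purely illustrative branch probabilities.
    p_fai_after_preservation = 0.2     # FAI is developed only after Will is cryopreserved (or has died)
    p_good_revival_with_cryo = 0.5     # the ~50% question mark from Option 1
    p_restoration_without_cryo = 0.25  # "less, but not orders of magnitude less", from Option 2

    # uFAI and existential-risk branches are dropped: they carry no weight
    # in this decision theory.
    value_sign_up = p_fai_after_preservation * p_good_revival_with_cryo
    value_dont_sign_up = p_fai_after_preservation * p_restoration_without_cryo

    # Whether signing up is worth it then hinges on whether this difference
    # justifies the cost of cryonics.
    print(value_sign_up - value_dont_sign_up)  # 0.05 for these made-up numbers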

Comment deleted 29 May 2010 03:30:12PM [-]
Comment author: jimrandomh 27 May 2010 11:40:15PM *  1 point [-]

However, perhaps it can faithfully restore Will's brain-state from recordings of Will in the minds of humanity anyway, if that's what humanity would want. Alternatively Will is revived in ancestor simulations done by the FAI or any other FAI that is curious about humanity's history around the time right before its singularity.

I am reasonably confident that no such process can produce an entity that I would identify as myself. Being reconstructed from other people's memories means losing the memories of all inner thoughts, all times spent alone, and all times spent with people who have died or forgotten the occasion. That's too much lost for any sort of continuity of consciousness.