Roko comments on Abnormal Cryonics - Less Wrong

56 Post author: Will_Newsome 26 May 2010 07:43AM

Comment deleted 29 May 2010 03:30:12PM [-]
Comment author: Will_Newsome 30 May 2010 01:26:13AM *  0 points [-]

Well, that gets tricky, because I have weak subjective evidence that I can't share with anyone else, and really odd ideas about it, that make me think that an FAI is the likely outcome. (Basically, I suspect something sorta kinda a little along the lines of me living in a fun theory universe. Or more precisely, I am a sub-computation of a longer computation that is optimized for fun, so that even though my life is sub-optimal at the moment I expect it to get a lot better in the future, and that the average of the whole computation's fun will turn out to be argmaxed. And my life right now rocks pretty hard anyway. I suspect other people have weaker versions of this [with different evidence from mine] with correspondingly weaker probability estimates for this kind of thing happening.)

So if we assume, for the sake of ease, that a positive singularity will occur with p=1, that leaves about 2% that cryonics will work (5% that an FAI raises the cryonic dead minus 3% that an FAI raises all the dead) if you die, times the probability that you die before the singularity (about 15% for most people [but about 2% for me]), which leads to 0.3% as my figure for someone with a sense of identity far stronger than me, Kaj, and many others, who would adjust downward from there (an FAI can be expected to extrapolate our minds and discover it should use the resources on making 10 people with values similar to ourselves instead, or something). If you say something like 5% positive singularity instead, then it comes out to 0.015%, or very roughly 1 in 7000 (although of course your decision theory should discount worlds in which you die no matter what anyway, so the probability of actually living past the singularity shouldn't change your decision to sign up all that much). I suspect someone with different intuitions would give a very different answer, but it'll be hard to make headway in the debate because it really is so non-technical. The reason I give extremely low probabilities for myself is due to considerations that apply to me only and that I'd rather not go into.
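To make the arithmetic explicit, here is a quick back-of-the-envelope sketch using only the figures quoted above (the variable names are mine, and this is just a check of the numbers, not an endorsement of them):

```python
# Back-of-the-envelope check of the figures quoted above.

p_fai_raises_cryo = 0.05   # FAI raises the cryonically preserved dead
p_fai_raises_all  = 0.03   # FAI raises all the dead (so cryonics adds nothing)
p_die_pre_sing    = 0.15   # chance of dying before the singularity ("most people")

# Conditional on a positive singularity (assumed p = 1 "for the sake of ease"):
p_cryo_matters  = p_fai_raises_cryo - p_fai_raises_all   # 0.02
p_cryo_pays_off = p_cryo_matters * p_die_pre_sing        # 0.003, i.e. 0.3%

# With only a 5% chance of a positive singularity instead:
p_pos_singularity = 0.05
p_pessimistic = p_cryo_pays_off * p_pos_singularity       # 0.00015, i.e. 0.015%

print(f"{p_cryo_pays_off:.3%}")                 # 0.300%
print(f"{p_pessimistic:.3%}")                   # 0.015%
print(f"roughly 1 in {1 / p_pessimistic:.0f}")  # roughly 1 in 6667 (~1 in 7000)
```

The "very roughly 1 in 7000" figure matches 1/0.00015 ≈ 6700.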

Comment author: Vladimir_Nesov 30 May 2010 02:01:58PM 0 points [-]

Hmm... Seems like crazy talk to me. It's your mind, tread softly.

Comment author: Will_Newsome 30 May 2010 08:50:15PM 0 points [-]

The ideas about fun theory are crazy talk indeed, but they're sort of tangential to my main points. I have much crazier ideas peppered throughout the comments of this post (very silly implications of decision theory in a level 4 multiverse that are almost assuredly wrong but interesting intuition pumps) and even crazier ideas in the notes I write to myself. Are you worried that this will lead to some sort of mental health danger, or what? I don't know how often high shock levels damage one's sanity to an appreciable degree.

Comment author: Vladimir_Nesov 01 June 2010 11:40:51AM 1 point [-]

I have much crazier ideas peppered throughout the comments of this post (very silly implications of decision theory in a level 4 multiverse that are almost assuredly wrong but interesting intuition pumps) and even crazier ideas in the notes I write to myself. Are you worried that this will lead to some sort of mental health danger, or what? I don't know how often high shock levels damage one's sanity to an appreciable degree.

It's not "shock levels" that are the problem, it's working in the "almost assuredly wrong" mode. If you yourself believe the ideas you develop to be wrong, are they knowledge, are they progress? Do crackpots have "damaged sanity"?

It's usually better to develop ideas on as firm ground as possible, working towards the unknown from statements you can rely on. Even in this mode you will often fail, but you'd be able to make gradual progress that won't be illusory. Not all questions are ready to be answered (or even asked).

Comment deleted 30 May 2010 01:46:44PM *  [-]
Comment author: Will_Newsome 30 May 2010 08:41:11PM *  0 points [-]

For what it's worth, the Uncertain Future application gives me a 99% chance of a singularity before 2070, if I recall correctly. The mean of my distribution is 2028.

I really wish more SIAI members talked to each other about this! Estimates vary wildly, and I'm never sure if people are giving estimates that take their decision theory into account or not (that is, thinking 'We couldn't prevent a negative singularity if it were to occur in the next 10 years, so let's discount those worlds and exclude them from our probability estimates'). I'm also not sure if people are giving far-off estimates because they don't want to think about the implications otherwise, or because they tried to build an FAI and it didn't work, or because they want to signal sophistication and sophisticated people don't predict crazy things happening very soon, or because they are taking an outside view of the problem, or because they've read the recent publications at the AGI conferences and in various journals, thought about the advances that need to be made, estimated the rate of progress, and determined a date using the inside view (like Steve Rayhawk, who gives a shorter time estimate than anyone else; or Shane Legg, who I've heard also gives a short estimate, though I'm not sure about that; or Ben Goertzel, who I'm again not entirely sure about; or Juergen Schmidhuber, who seems to be predicting it soonish; or Eliezer, who used to have a soonish estimate with very wide tails, but I have no idea what his thoughts are now). I've heard the guys at FHI also have distant estimates, and a lot of narrow AI people predict far-off AGI as well. Where are the 'singularity is far' people getting their predictions?

Comment deleted 31 May 2010 01:55:41PM [-]
Comment author: Will_Newsome 31 May 2010 10:46:32PM 0 points [-]

True. But the mean of my distribution is still 2028 regardless of the inaccuracy of UF.

Comment deleted 31 May 2010 11:20:39PM *  [-]
Comment author: Will_Newsome 31 May 2010 11:37:22PM 0 points [-]

Giving probabilities of 99% is a classic symptom of not having any model uncertainty.

If Nick and I write some more posts I think this would be the theme. Structural uncertainty is hard to think around.

Anyway, I got my singularity estimates by listening to lots of people working at SIAI and seeing whose points I found compelling. When I arrived at Benton I was thinking something like 2055. It's a little unsettling that the more arguments I hear from both sides, the nearer in the future my predictions get. I think my estimates are probably too biased towards Steve Rayhawk's, but this is because everyone else's estimates seem to take the form of outside-view considerations that I find weak.

Comment deleted 30 May 2010 01:44:18PM [-]
Comment author: Will_Newsome 30 May 2010 08:45:02PM *  0 points [-]

if I reflected sufficiently hard, I would place zero terminal value on my own life.

Not even close to zero, but less terminal value than you would assign to other things that an FAI could optimize for. I'm not sure how much extrapolated unity of mankind there would be in this regard. I suspect Eliezer or Anna would counter my 5% with a 95%, and I would Aumann to some extent, but I was giving my impression, not my belief. (I think this is better practice at the start of a 'debate': otherwise you might update on the wrong expected evidence. EDIT: To be more clear, I wouldn't want to update on Eliezer's evidence if it was some sort of generalization from fictional evidence from Brennan's world or something, but I would want to update if he had a strong argument that identity has proven to be extremely important to all of human affairs since the dawn of civilization, which is entirely plausible.)

Comment deleted 31 May 2010 01:41:43PM [-]
Comment author: Will_Newsome 31 May 2010 10:53:30PM 1 point [-]

I guess I'm saying that using the atoms it takes to revive a cryo patient for that purpose is vastly more wasteful than using their weight in computronium. You're trading off one life against a huge number of potential lives. A few people, like Alicorn if I understand her correctly, think that people who are already alive are worth a huge number of potential lives, but I don't quite understand that intuition. Is this a point of disagreement for us?

Comment deleted 31 May 2010 11:11:41PM [-]
Comment author: Will_Newsome 31 May 2010 11:30:12PM 0 points [-]

Gah, sorry, I keep leaving things out. I'm thinking about the actual physical process of finding out where cryo patients are, scanning their brains, repairing the damage, and then running them. Mike Blume had a good argument against this point: proportionally, the startup cost of scanning a brain is not much at all compared to the infinity of years of actually running the computation. This is where I should be doing the math... so I'm going to think about it more and try to figure things out. Another point is that an AGI could gain access to infinite computing power in finite time, during which it could do everything, but I think I'm just confused about the nature of computations in a Tegmark multiverse here.
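Mike's proportionality point can be seen with a toy amortization sketch (every number below is invented purely for illustration; only the trend matters):

```python
# Toy illustration of the startup-cost-vs-running-cost argument above.
# The cost units and figures are made up; only the shape of the result matters.

scan_cost = 10**6          # one-time cost: locate the patient, repair the damage, scan the brain
run_cost_per_year = 1.0    # ongoing cost of actually running the revived mind

for years in (10, 10**3, 10**6, 10**9):
    total = scan_cost + run_cost_per_year * years
    # the startup share of the total cost tends to 0 as the runtime grows
    print(years, scan_cost / total)
```

However large the one-time revival cost is, it becomes a negligible fraction of the total once the computation runs for long enough.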

Comment deleted 31 May 2010 11:47:41PM *  [-]
Comment author: Will_Newsome 01 June 2010 12:06:48AM 0 points [-]

Yes, but this makes people flustered so I prefer not to bring it up as a possibility. I'm not sure if it was Bostrom or just generic SIAI thinking where I heard that an FAI might deconstruct us in order to go out into the universe, solve the problem of astronomical waste, and then run computations of us (or in this case generic transhumans) far in the future.