James_Miller comments on Why I haven't signed up for cryonics - Less Wrong

Post author: Swimmer963 12 January 2014 05:16AM


Comment author: James_Miller 12 January 2014 06:41:11AM 1 point

The possibility of a friendly ultra-AI greatly raises the expected value of cryonics. Such an AI would likely create a utopia that you would very much want to live in. This possibility also shortens the expected interval before you would be brought back, making it less likely that your brain is destroyed before revival becomes possible. If you believe a singularity is likely by, say, 2100, then you can't trust estimates of cryonics' chance of success that don't factor in the singularity.
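A back-of-the-envelope sketch of how a singularity term changes that arithmetic (every probability, date, and interval below is an illustrative placeholder, not a figure anyone in the thread endorses):

```python
# Back-of-the-envelope probability of cryonics revival, with and without
# a singularity branch. All numbers are hypothetical placeholders chosen
# only to illustrate the structure of the calculation.

P_PRESERVED = 0.5           # brain information survives vitrification
P_STORAGE_PER_YEAR = 0.995  # yearly survival of your cryoremains in storage

def p_revival(years_in_storage, p_revival_tech):
    """Conjunctive estimate: preservation AND storage survival AND revival tech."""
    return P_PRESERVED * (P_STORAGE_PER_YEAR ** years_in_storage) * p_revival_tech

# No-singularity branch: a long wait and uncertain revival technology.
baseline = p_revival(years_in_storage=200, p_revival_tech=0.2)

# Singularity branch: a friendly ultra-AI arrives sooner, shortening the
# storage interval and making revival technology near-certain.
singular = p_revival(years_in_storage=80, p_revival_tech=0.9)

P_SINGULARITY = 0.3  # hypothetical credence in a friendly singularity by ~2100
combined = P_SINGULARITY * singular + (1 - P_SINGULARITY) * baseline

print(f"no singularity: {baseline:.3f}")  # ~0.037
print(f"singularity:    {singular:.3f}")  # ~0.301
print(f"combined:       {combined:.3f}")  # ~0.116, roughly 3x the baseline
```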

Comment author: Benito 12 January 2014 08:44:04AM 2 points

Which causes me to think of another argument: if you attach a high probability to an ultra-AI which doesn't quite have a perfectly aligned utility function, do you want to be brought back into a world which has, or could have, a UFAI?

Comment author: James_Miller 12 January 2014 05:26:44PM 1 point

Because there is a limited amount of free energy in the universe, unless the AI's goals incorporated your utility function it wouldn't bring you back; indeed, it would use the atoms in your body to further whatever goal it had. With very high probability, the only way we get a UFAI that would both (1) bring you back and (2) make our lives worth less than they are today is if evil humans deliberately put a huge amount of effort into making their AI unfriendly, programming in torturing humans as a terminal value.

Comment author: ialdabaoth 12 January 2014 06:13:51PM * 0 points

Alternate scenario 1: an AI wants to find out something that only human beings from a particular era would know, brings them back as simulations as a side effect of the process it uses to extract their memories, and then doesn't particularly care about giving them a pleasant environment to exist in.

Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.

Comment author: [deleted] 14 January 2014 08:48:03PM 0 points

Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.

Good news: this one's remarkably unlikely, since almost all existing Friendly AI approaches are indirect ("look at some samples of real humans and optimize for the output of some formally-specified epistemic procedure for determining their values") rather than direct ("choirs of angels sing to the Throne of God").

Comment author: TheOtherDave 14 January 2014 08:53:10PM 0 points

Not sure how that helps. Would you prefer scenario 2b, with "[..] because its formally-specified epistemic procedure for determining the values of its samples of real humans results in a concept of value-maximization that has a hideously unfortunate implication"?

Comment author: [deleted] 14 January 2014 09:06:42PM * 1 point

You're saying that enacting the endorsed values of real people taken at reflective equilibrium has an unfortunate implication? To whom? Surely not to the people whose values you're enacting. Which does leave population-ethics a biiiiig open question for FAI development, but it at least means the people whose values you feed to the Seed AI get what they want.

Comment author: TheOtherDave 14 January 2014 09:15:08PM 1 point

No, I'm saying that (in scenario 2b) enacting the result of a formally-specified epistemic procedure has an unfortunate implication. Unfortunate to everyone, including the people who were used as the sample against which that procedure ran.

Comment author: [deleted] 14 January 2014 10:24:14PM 0 points

Why? The whole point of a formally-specified epistemic procedure is that, with respect to the people taken as samples, it is right by definition.

Comment author: TheOtherDave 14 January 2014 10:41:25PM 2 points

Wonderful. Then the unfortunate implication will be right, by definition.

So what?

Comment author: James_Miller 12 January 2014 07:06:52PM 0 points

For scenario 1, it would almost certainly require less free energy just to get the information directly from the brain without ever bringing the person to consciousness.

For scenario 2, you should seriously consider suicide if you fear that a failed friendly AI might soon be developed. Indeed, since there is a chance you will become incapacitated (say, by falling into a coma), you might want to destroy your brain long before such an AI could arise.

Comment author: Decius 12 January 2014 06:01:47PM 0 points

It's also possible that the AI finds instrumental utility in having humans around, and that reviving cryonics patients is cheaper than using its von Neumann factories.

Comment author: James_Miller 12 January 2014 06:09:49PM 3 points

I disagree. Humans almost certainly do not use free energy efficiently compared to the kinds of production units an ultra-AI could make.

Comment author: Decius 13 January 2014 12:47:06AM 0 points

How expensive is it to make a production unit with the versatility and efficiency of a human? How much of that energy would simply be wasted anyway? Likely? No, but possible.

Rolling all of that into 'cryonics fails' has little effect on the expected value in any case.

Comment author: [deleted] 14 January 2014 08:46:15PM 0 points

There's really not that much margin for error in Super Tengen Toppa AI design. The more powerful the AI, the less margin for error.

It's not like you'd be brought back by a near-FAI that otherwise cares about human values but inexplicably thinks chocolate is horrible and eliminates every sign of it.

Comment author: V_V 13 January 2014 05:51:51PM * 0 points

I don't think it would make much difference.

Consider my comment in Hallquist's thread:

An AI singularity won't affect points 1 and 2: if information about your personality has not been preserved, there is nothing an AI can do to revive you.

It might affect points 3 and 4, but to a limited extent: an AI might be better than vanilla humans at doing research, but it would not be able to develop technologies which are impossible or intrinsically impractical for physical reasons. A truly benevolent AI might be more motivated to revive cryopatients than regular people with selfish desires would be, but it would still have to allocate its resources economically, and cryopatient revival might not be the best use of them.

Points 5 and 6: clearly, the sooner the super-duper AI appears and develops revival tech, the higher the probability that your cryoremains are still around. But a super AI appearing early and developing revival tech quickly is less probable than it appearing late and/or taking a long time to develop revival tech, so I would expect the two effects to roughly cancel out. Also, as other people have noted, a super AI appearing and giving you radical life extension within your lifetime would make cryonics a waste of money.

More generally, I think that an AI singularity is itself a conjunctive event, with the more extreme and earlier scenarios being less probable than the less extreme and later ones. Therefore I don't think that taking AI into account should significantly affect any estimate of cryonics' chance of success.
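As a toy illustration of the conjunctive point: a scenario that requires every step in a chain to come true has a probability equal to the product of the step probabilities, so even individually plausible steps compound into a long shot. The step list and numbers below are hypothetical placeholders, not anything stated in the thread.

```python
from math import prod

# A conjunctive scenario requires every step to come true, so its
# probability is the product of the step probabilities.
# All step probabilities below are hypothetical placeholders.
steps = {
    "superintelligent AI is physically feasible":  0.7,
    "it is built while your cryoremains survive":  0.5,
    "it is benevolent toward cryopatients":        0.5,
    "revival tech is possible and practical":      0.6,
    "reviving you is a good use of its resources": 0.5,
}
print(f"P(all steps) = {prod(steps.values()):.3f}")  # ~0.053
```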

Comment author: James_Miller 13 January 2014 06:01:28PM 0 points

I think that an AI singularity is itself a conjunctive event,

The core thesis of my book Singularity Rising is (basically) that this isn't true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others. For example, it seems highly likely that (conditional on our high-tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed, and these people will excel at computer programming compared to non-augmented humans.
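The disjunctive view can be sketched the same way: if several routes each suffice, the chance that at least one succeeds is one minus the product of the failure probabilities. Path names and numbers here are illustrative placeholders, not estimates from the book.

```python
from math import prod

# Disjunctive estimate: any one of several routes to a singularity suffices.
# The routes are treated as independent -- a simplification, since the
# argument above is that progress on one path helps the others, which would
# push the combined figure higher still.
paths = {
    "de novo machine intelligence":        0.20,
    "whole brain emulation":               0.15,
    "genetically enhanced AI researchers": 0.15,
    "brain-computer interfaces":           0.10,
}
p_any = 1 - prod(1 - p for p in paths.values())
print(f"P(at least one path) = {p_any:.2f}")  # ~0.48, above any single path
```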

Comment author: V_V 13 January 2014 06:34:06PM 2 points

The core thesis of my book Singularity Rising is (basically) that this isn't true, for the singularity at least, because there are many paths to a singularity and making progress along any one of them will help advance the others.

Well, I haven't read your book, hence I can't exclude that you might have made some good arguments I'm not aware of, but given the publicly available arguments I know, I don't think this is true.

For example, it seems highly likely that (conditional on our high-tech civilization continuing) within 40 years genetic engineering will have created much smarter humans than have ever existed, and these people will excel at computer programming compared to non-augmented humans.

Is it?

There are some neurological arguments that the human brain is near the maximum intelligence limit for a biological brain.
We are probably not going to breed people with IQ >200; perhaps we might breed people with IQs of 140-160, but will there be tradeoffs that make it problematic to do this at scale?
Will there be a demand for such humans?
Will they devote their efforts to AI research, or will their comparative advantage drive them to something else?
And how good will they be at developing super AI? As technology becomes more mature, making progress becomes more difficult because the low-hanging fruit has already been picked, and intelligence itself might have diminishing returns (at the very least, I would be surprised to observe an inverse linear correlation between average AI researcher IQ and time to AI).
And, of course, if singularity-inducing AI is impossible/impractical, the point is moot: these genetically enhanced Einsteins will not develop it.

In general, with enough imagination you can envision many highly conjunctive ad hoc scenarios and put them into a disjunction, but I find this type of thinking highly suspicious, because you could use it to justify pretty much anything you wanted to believe.
I think it's better to recognize that we don't have a crystal ball to predict the future, and that betting on extreme scenarios is probably not a good deal.