James_Miller comments on Why I haven't signed up for cryonics - Less Wrong

Post author: Swimmer963 12 January 2014 05:16AM




Comment author: James_Miller 12 January 2014 05:26:44PM 1 point [-]

Because there is a limited amount of free energy in the universe, unless the AI's goals incorporated your utility function, it wouldn't bring you back; indeed, it would use the atoms in your body to further whatever goal it had. With very high probability, the only way we get a UFAI that would (1) bring you back and (2) make your life worth less than it is today is if evil humans deliberately put a huge amount of effort into making their AI unfriendly, programming in torturing humans as a terminal value.

Comment author: ialdabaoth 12 January 2014 06:13:51PM *  0 points [-]

Alternate scenario 1: AI wants to find out something that only human beings from a particular era would know, brings them back as simulations as a side-effect of the process it uses to extract their memories, and then doesn't particularly care about giving them a pleasant environment to exist in.

Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.

Comment author: [deleted] 14 January 2014 08:48:03PM 0 points [-]

> Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.

Good news: this one's remarkably unlikely, since almost all existing Friendly AI approaches are indirect ("look at some samples of real humans and optimize for the output of some formally-specified epistemic procedure for determining their values") rather than direct ("choirs of angels sing to the Throne of God").

Comment author: TheOtherDave 14 January 2014 08:53:10PM 0 points [-]

Not sure how that helps. Would you prefer scenario 2b, with "[..] because its formally-specified epistemic procedure for determining the values of its samples of real humans results in a concept of value-maximization that has a hideously unfortunate implication."?

Comment author: [deleted] 14 January 2014 09:06:42PM *  1 point [-]

You're saying that enacting the endorsed values of real people taken at reflective equilibrium has an unfortunate implication? To whom? Surely not to the people whose values you're enacting. Which does leave population-ethics a biiiiig open question for FAI development, but it at least means the people whose values you feed to the Seed AI get what they want.

Comment author: TheOtherDave 14 January 2014 09:15:08PM 1 point [-]

No, I'm saying that (in scenario 2b) enacting the result of a formally-specified epistemic procedure has an unfortunate implication. Unfortunate to everyone, including the people who were used as the sample against which that procedure ran.

Comment author: [deleted] 14 January 2014 10:24:14PM 0 points [-]

Why? The whole point of a formally-specified epistemic procedure is that, with respect to the people taken as samples, it is right by definition.

Comment author: TheOtherDave 14 January 2014 10:41:25PM 2 points [-]

Wonderful. Then the unfortunate implication will be right, by definition.

So what?

Comment author: [deleted] 14 January 2014 11:00:53PM 3 points [-]

I'm not sure what the communication failure here is. The whole point is to construct algorithms that extrapolate the value-set of the input people. By doing so, you thus extrapolate a moral code that the input people can definitely endorse, hence the phrase "right by definition". So where is the unfortunate implication coming from?

Comment author: VAuroch 15 January 2014 01:14:45AM 3 points [-]

A third-party guess: It's coming from a flaw in the formal specification of the epistemic procedure. That it is formally specified is not a guarantee that it is the specification we would want. It could rest on a faulty assumption, or take a step that appears justified but in actuality is slightly wrong.

Basically, formal specification is a good idea, but not a get-out-of-trouble-free card.

Comment author: TheOtherDave 15 January 2014 01:14:08AM *  0 points [-]

I'm not sure either. Let me back up a little... from my perspective, the exchange looks something like this:

ialdabaoth: what if failed FAI is incorrectly implemented and fucks things up?
eli_sennesh: that won't happen, because the way we produce FAI will involve an algorithm that looks at human brains and reverse-engineers their values, which then get implemented.
theOtherDave: just because the target specification is being produced by an algorithm doesn't mean its results won't fuck things up
e_s: yes it does, because the algorithm is a formally-specified epistemic procedure, which means its results are right by definition.
tOD: wtf?

So perhaps the problem is that I simply don't understand why it is that a formally-specified epistemic procedure running on my brain to extract the target specification for a powerful optimization process should be guaranteed not to fuck things up.

Comment author: James_Miller 12 January 2014 07:06:52PM 0 points [-]

For scenario 1, it would almost certainly require less free energy just to get the information directly from the brain without ever bringing the person to consciousness.

For scenario 2, you should seriously consider suicide if you fear that a failed friendly AI might soon be developed. Indeed, since there is a chance you will become incapacitated (say, by falling into a coma), you might want to destroy your brain long before such an AI could arise.

Comment author: Decius 12 January 2014 06:01:47PM 0 points [-]

It's also possible that the AI finds instrumental utility in having humans around, and that reviving cryonics patients is cheaper than using its von Neumann factories.

Comment author: James_Miller 12 January 2014 06:09:49PM 3 points [-]

I disagree. Humans almost certainly do not efficiently use free energy compared to the types of production units an ultra-AI could make.

Comment author: Decius 13 January 2014 12:47:06AM 0 points [-]

How expensive is it to make a production unit with the versatility and efficiency of a human? How much of that energy would simply be wasted anyway? Likely not, but possible.

Rolling all of that into 'cryonics fails' has little effect on the expected value in any case.