Benito comments on Why I haven't signed up for cryonics - Less Wrong
Which causes me to think of another argument: if you attach a high probability to an Ultra-AI which doesn't quite have a perfectly aligned utility function, do you want to be brought back into a world which has, or could have, an UFAI?
Because there is a limited amount of free energy in the universe, unless the AI's goals incorporated your utility function it wouldn't bring you back; indeed, it would use the atoms in your body to further whatever goal it had. With very high probability, we only get an UFAI that would (1) bring you back and (2) make your life have less value than it does today if evil humans deliberately put a huge amount of effort into making their AI unfriendly, programming in torturing humans as a terminal value.
Alternate scenario 1: AI wants to find out something that only human beings from a particular era would know, brings them back as simulations as a side-effect of the process it uses to extract their memories, and then doesn't particularly care about giving them a pleasant environment to exist in.
Alternate scenario 2: failed Friendly AI brings people back and tortures them because some human programmed it with a concept of "heaven" that has a hideously unfortunate implication.
Good news: this one's remarkably unlikely, since almost all existing Friendly AI approaches are indirect ("look at some samples of real humans and optimize for the output of some formally-specified epistemic procedure for determining their values") rather than direct ("choirs of angels sing to the Throne of God").
Not sure how that helps. Would you prefer scenario 2b, with "[..] because its formally-specified epistemic procedure for determining the values of its samples of real humans results in a concept of value-maximization that has a hideously unfortunate implication."?
You're saying that enacting the endorsed values of real people taken at reflective equilibrium has an unfortunate implication? To whom? Surely not to the people whose values you're enacting. Which does leave population-ethics a biiiiig open question for FAI development, but it at least means the people whose values you feed to the Seed AI get what they want.
No, I'm saying that (in scenario 2b) enacting the result of a formally-specified epistemic procedure has an unfortunate implication. Unfortunate to everyone, including the people who were used as the sample against which that procedure ran.
Why? The whole point of a formally-specified epistemic procedure is that, with respect to the people taken as samples, it is right by definition.
Wonderful. Then the unfortunate implication will be right, by definition.
So what?
I'm not sure what the communication failure here is. The whole point is to construct algorithms that extrapolate the value-set of the input people. By doing so, you thus extrapolate a moral code that the input people can definitely endorse, hence the phrase "right by definition". So where is the unfortunate implication coming from?
For scenario 1, it would almost certainly require less free energy just to get the information directly from the brain without ever bringing the person to consciousness.
For scenario 2, you should seriously consider suicide if you fear that a failed Friendly AI might soon be developed. Indeed, since there is a chance you will become incapacitated (say by falling into a coma), you might want to destroy your brain long before such an AI could arise.
It's also possible that the AI finds instrumental utility in having humans around, and that reviving cryonics patients is cheaper than using its von Neumann factories.
I disagree. Humans almost certainly do not efficiently use free energy compared to the types of production units an ultra-AI could make.
How expensive is it to make a production unit with the versatility and efficiency of a human? How much of that energy would simply be wasted anyway? Not likely, but possible.
Rolling all of that into 'cryonics fails' has little effect on the expected value in any case.
There's really not much margin for error in Super Tengen Toppa AI design: the more powerful the AI, the less margin for error there is.
It's not like you'd be brought back by a near-FAI that otherwise cares about human values but inexplicably thinks chocolate is horrible and eliminates every sign of it.