ata comments on Open Thread: May 2010 - Less Wrong

Post author: Jack 01 May 2010 05:29AM




Comment author: eugman 03 May 2010 02:59:30AM 2 points

Has anyone read The Integral Trees by Larry Niven? Something I always wonder about people who support cryonics: why do they assume that the future will be a good place to live? Why do they assume they will have any rights? Or do they figure that if they are revived, FAI has most likely come to pass?

Comment author: ata 03 May 2010 03:17:42AM 1 point

> Or do they figure that if they are revived, FAI has most likely come to pass?

I can't speak for any other cryonics advocates, but I find that to be likely. I see AI either destroying or saving the world once it's invented, if we haven't destroyed ourselves some other way first, and one of those could easily happen before the world has a chance to turn dystopian. But in any case, if you wake up and find yourself in a world that you couldn't possibly bear to live in, you can just kill yourself and be no worse off than if you hadn't tried cryonics in the first place.

Comment author: humpolec 03 May 2010 12:35:30PM 0 points

Unless it's unFriendly AI that revives you and tortures you forever.

Comment author: ata 03 May 2010 07:18:46PM 3 points

"unFriendly" doesn't mean "evil", just "not explicitly Friendly". Assuming you already have an AI capable of recursive self-improvement, it's easy to give it a goal system that will result in the world being destroyed (not because it hates us, but because it can think of better things to do with all this matter). But creating one that's actually evil or that hates humans (or has some other reason that torturing us would make sense in its goal system) would probably be nearly as hard as the problem of Friendliness itself, as gregconen pointed out.

Comment author: gregconen 03 May 2010 02:10:05PM 6 points

Strongly unFriendly AI (the kind that tortures you eternally, rather than kills you and uses your matter to make paperclips) would be about as difficult to create as Friendly AI. And since few people would try to create one, I don't think it's a likely future.

Comment author: NancyLebovitz 03 May 2010 12:49:11PM 1 point

Actually, it's quite possible to deny physical means of suicide to prisoners, and sufficiently good longevity tech could make torture for a very long time possible.

I think something like that (say, for actions which are not currently considered to be crimes) is possible, considering the observable cruelty of some fraction of the human race, but not very likely -- on the other hand, I don't know how to begin to quantify how unlikely it is.