Mitchell_Porter comments on Normal Cryonics - Less Wrong

Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM

Comment author: Mitchell_Porter 20 January 2010 05:31:37AM 0 points

> chances are, if you are woken up, it'll be in a world with FAI. If things go really bad, you'd probably never find out...

The possibility of being woken up by a UFAI might be regarded as a good reason to avoid cryonics.

Comment author: MichaelGR 20 January 2010 05:53:14AM 6 points

From what I know, the danger of UFAI isn't that such an AI would be evil like in fiction (anthropomorphized AIs), but rather that it wouldn't care about us and would want to use resources to achieve goals other than what humans would want ("all that energy and those atoms, I need them to make more computronium, sorry").

I suppose it's possible to invent many scenarios in which such an evil AI could exist, but based on the information I have now it seems unlikely enough that I wouldn't gamble away a chance at life (versus a certain death) over this sci-fi plot.

But if you are scared of UFAI, you can do something now by supporting FAI research. It might actually be more likely that we will face a UFAI within our current lifetimes than after being woken up from cryonic preservation (since the very fact of being woken up is probably a positive sign of FAI).

Comment author: wedrifid 20 January 2010 06:14:37AM 0 points

> From what I know, the danger of UFAI isn't that such an AI would be evil like in fiction (anthropomorphized AIs), but rather that it wouldn't care about us and would want to use resources to achieve goals other than what humans would want ("all that energy and those atoms, I need them to make more computronium, sorry").

I presume he was referring to dystopias and wireheading scenarios that he could hypothetically consider worse than death.

Comment author: MichaelGR 20 January 2010 03:10:44PM 2 points

That was my understanding, but I think that any world in which there is an AGI that isn't Friendly probably won't be very stable. If that happens, I think it's much more likely that humanity will be destroyed quickly and you won't be woken up than that a stable but "worse than death" world will form and decide to wake you up.

But maybe I'm missing something that makes such "worse than death" worlds plausible.

Comment author: wedrifid 20 January 2010 03:34:27PM 2 points

> That was my understanding, but I think that any world in which there is an AGI that isn't Friendly probably won't be very stable.

I think you're right. The main risk would be a Friendly-to-Someone-Else AI.