wedrifid comments on Normal Cryonics - Less Wrong

58 Post author: Eliezer_Yudkowsky 19 January 2010 07:08PM




Comment author: wedrifid 20 January 2010 06:14:37AM 0 points

From what I know, the danger of UFAI isn't that such an AI would be evil like in fiction (anthropomorphized AIs), but rather that it wouldn't care about us and would want to use resources to achieve goals other than what humans would want ("all that energy and those atoms, I need them to make more computronium, sorry").

I presume he was referring to dystopias and wireheading scenarios that he could hypothetically consider worse than death.

Comment author: MichaelGR 20 January 2010 03:10:44PM 2 points

That was my understanding, but I think that any world in which there is an AGI that isn't Friendly probably won't be very stable. If that happens, I think it's far more likely that humanity will be destroyed quickly and you won't be woken up than that a stable but "worse than death" world will form and decide to wake you up.

But maybe I'm missing something that makes such "worse than death" worlds plausible.

Comment author: wedrifid 20 January 2010 03:34:27PM 2 points

That was my understanding, but I think that any world in which there is an AGI that isn't Friendly probably won't be very stable.

I think you're right. The main risk would be a Friendly-to-Someone-Else AI.