loqi comments on The mind-killer - Less Wrong

Post author: ciphergoth, 02 May 2009 04:49PM (23 points)




Comment author: Vladimir_Nesov, 03 May 2009 11:58:38AM, 8 points

That's a very strange perspective. Other threats are "good" in that they are stupid: they won't find you if you colonize space, live on an isolated island, have a lucky combination of genes, or figure out a way to actively outsmart them. Stupid existential risks won't methodically exterminate every human, so there is a chance of recovery. Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet. (Indifference works this way too: it's the application of power indifferent to humankind that is methodical, e.g. a paperclip AI.)

Comment author: loqi, 04 May 2009 01:30:23AM, -1 points

"Unfriendly AI, on the other hand, won't go away, and you can't hide from it on another planet."

Not on another planet, no. But I wonder how practical a constantly accelerating seed ship will turn out to be.
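
As a rough sketch of the physics behind the "constantly accelerating" idea (my gloss, not part of the original thread): in special relativity, a ship that holds constant proper acceleration a acquires a Rindler horizon a distance c^2/a behind it, so any pursuer, even a light-speed signal, that starts from farther back than that can never catch it. For a ship starting from rest at x = c^2/a:

\[
x(t) \;=\; \frac{c^2}{a}\sqrt{1 + \left(\frac{at}{c}\right)^2}
\qquad\text{(hyperbolic motion)},
\]
\[
x(t) - ct \;=\; \frac{c^2}{a}\left[\sqrt{1 + \left(\frac{at}{c}\right)^2} - \frac{at}{c}\right] \;>\; 0
\quad\text{for all } t,
\]

so a light ray launched from the origin at t = 0 approaches the ship but never reaches it. At a = 1g, the horizon distance is c^2/g ≈ 9.2 × 10^15 m, about 0.97 light-years. The open question, which is what the comment's "how practical" points at, is whether 1g can actually be sustained indefinitely.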