Document comments on Don't plan for the future - Less Wrong Discussion

Post author: PhilGoetz 23 January 2011 10:46PM

Comment author: Document 26 January 2011 04:07:42AM, 1 point

Also, even if there are a lot of unfriendly AIs out there, a friendly one would still vastly improve our fate, whether by fighting off the unfriendly ones, reaching a mutually beneficial agreement with them, or running rescue simulations.

One option might be to put us through quantum suicide: upload everyone into static storage, then let them out only in the branches where the UFAI is defeated. Depending on their values and rationality, both AIs could do the same thing so that each has a universe to itself without fighting. That could doom other sentients in the UFAI's branch, though, so it might not be the best option even if it would work.