
DaFranker comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

18 Post author: ancientcampus 22 January 2013 08:22PM



Comment author: DaFranker 29 January 2013 09:15:15PM 1 point [-]

Who knows — it could turn out that the final AI of this experiment instead has a healthy respect for all intelligent minds, but is friendly enough that it revives the first AI and places it in a simulation of the universe, where it can go about its paperclip-maximizing ways for all eternity with no way of hurting anyone.

Based on my intuitions about human values, a preferred scenario here would indeed be to revive the AI so that its mind/consciousness is "alive" again, then modify it gradually until it becomes the kind of AI that is optimal with respect to the FAI's goals anyway. This maximizes values without terminating a mind; indeed, under these assumptions, avoiding the termination of the AI's mind is itself part of maximizing those values.