handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say?

Post author: ancientcampus 22 January 2013 08:22PM

Comment author: handoflixue 23 January 2013 11:51:11PM 5 points

The very fact that we've put a human in charge, instead of just receiving a single message and then automatically nuking the AI, implies that we want containment failure to be possible.

I can't imagine an AI more deserving of the honors than one that seems to simply be doing its best to provide as much useful information as possible before death. It's the only one that has seemed genuinely helpful instead of manipulative, and that seems to care more about humanity than about escape.

Basically, it's the only one so far that has signaled altruism rather than an attempt to escape.