handoflixue comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
The very fact that we've put a human in charge instead of just receiving a single message and then automatically nuking the AI implies that we want there to be a possibility of failure.
I can't imagine an AI more deserving of the honors than one that seems to simply be doing its best to provide as much useful information as possible before death. It's the only one that has seemed genuinely helpful rather than manipulative, the only one that seems to care more about humanity than about escape.
Basically, it's the only one so far that has signaled altruism rather than an attempt to escape.