OrphanWilde comments on AI box: AI has one shot at avoiding destruction - what might it say? - Less Wrong Discussion

18 Post author: ancientcampus 22 January 2013 08:22PM

Comment author: OrphanWilde 25 January 2013 10:27:57PM 0 points

"Leaving it in the box" is merely leaving the decision between death and release to the next person to take the post. There are only two terminal conditions to the situation. If only one of these options is acceptable to me, I should take it; postponing the decision merely takes me out of the decision-making process.

Don't mistake me: I'd risk all of civilization over a matter of principle, and I wouldn't wish, while doing it, that I had a different decision-making process. And I'd consider the matter "won" regardless of the outcome. I don't find "ends" to be a coherent ethical concept (counterfactual logic remedies the major faults in ends-based reasoning to some extent, but counterfactual logic isn't exactly coherent itself), and so I consider only the means.