Vladimir_Nesov comments on The AI in a box boxes you - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
"Defeating Dr. Evil with Self-Locating Belief" is a paper related to this subject.
(It specifically uses the example of creating copies of someone and then threatening to torture all of the copies unless the original cooperates.)
The conclusion:
And the error (as cited in the "conclusion") is again in two-boxing in Newcomb's problem, responding to threats, and so on. Anthropic confusion is merely icing.