cousin_it comments on What if AI doesn't quite go FOOM? - Less Wrong

Post author: Mass_Driver 20 June 2010 12:03AM




Comment author: Alicorn 20 June 2010 06:09:11AM 3 points

Why? Predicting my actions doesn't make them actions I don't want to take. Predicting I'll eat a sandwich if I want one doesn't hurt me; and if others can predict that I'll cooperate on the prisoner's dilemma iff my opponent will cooperate iff I'll cooperate, so much the better for all concerned.

Can you give an example of a case where being predictable would hurt someone who goes about choosing actions well in the first place? Note that, as with the PD thing above, actions are dependent on context; if the prediction changes the context, then that will already be factored into an accurate prediction.
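The prisoner's-dilemma point above can be made concrete with a toy sketch. This is an illustration, not anything from the comment itself: the payoff numbers are the conventional textbook values, and the model assumes both players predict each other perfectly, so mutual conditional cooperation settles on the (C, C) fixed point.

```python
# Standard prisoner's dilemma payoffs (row player's payoff listed first).
# These numbers are the conventional illustrative values.
PAYOFFS = {
    ('C', 'C'): (3, 3),
    ('C', 'D'): (0, 5),
    ('D', 'C'): (5, 0),
    ('D', 'D'): (1, 1),
}

def outcome(a_conditional, b_conditional):
    """Both players predict each other perfectly.

    A conditional cooperator plays 'C' iff its predicted opponent
    plays 'C'; an unconditional player always defects.
    """
    if a_conditional and b_conditional:
        # Mutual conditional cooperation is a consistent fixed point: (C, C).
        return PAYOFFS[('C', 'C')]
    # Any defector is predicted, so the conditional player defects too: (D, D).
    return PAYOFFS[('D', 'D')]

print(outcome(True, True))    # (3, 3) -- predictability helps both players
print(outcome(True, False))   # (1, 1) -- the conditional player is not exploited
```

Being transparently a conditional cooperator never hurts here: the predictable player gets 3 against another conditional cooperator and 1 against a defector, and could not have done better by being opaque.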

Comment author: cousin_it 20 June 2010 08:02:08AM 4 points

> Can you give an example of a case where being predictable would hurt someone who goes about choosing actions well in the first place?

Good question. Your intuition is correct as long as your actions are chosen "optimally" in the game-theoretic sense. This is one of the ideas behind Nash equilibria: your opponent can't gain anything from knowing your strategy, and vice versa. A caveat is that the Nash equilibria of many games require "mixed strategies" that randomize unpredictably, so if the opponent can predict the output of your random device, you're in trouble.
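The mixed-strategy caveat can be sketched with matching pennies, the standard example of a game whose only Nash equilibrium is mixed. This is an assumption-laden illustration (the game and payoffs are not from the comment): the row player wins +1 on a match and loses 1 on a mismatch, and we compute how badly an informed opponent can exploit any given probability of playing heads.

```python
def best_response_payoff(p_heads):
    """Row player's expected payoff in matching pennies when the
    column player knows p_heads (row's probability of heads) and
    best-responds. Row wins +1 on a match, -1 on a mismatch.

    If column plays heads: row's expectation = p*1 + (1-p)*(-1) = 2p - 1.
    If column plays tails: row's expectation = p*(-1) + (1-p)*1 = 1 - 2p.
    Column picks whichever is worse for row.
    """
    return min(2 * p_heads - 1, 1 - 2 * p_heads)

# A predictable (pure) strategy is fully exploitable:
print(best_response_payoff(1.0))   # -1
# The mixed Nash strategy (50/50) guarantees the equilibrium value:
print(best_response_payoff(0.5))   # 0
```

Against a perfectly predictable pure strategy the row player loses every round; randomizing 50/50 makes the opponent indifferent and secures the game's value of 0. But this guarantee evaporates if the opponent can predict the random device itself, since a predicted "random" sequence is, to the predictor, just another pure strategy.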