Alicorn comments on What if AI doesn't quite go FOOM? - Less Wrong

11 Post author: Mass_Driver 20 June 2010 12:03AM


Comment author: Alicorn 20 June 2010 06:09:11AM 3 points

Why? Predicting my actions doesn't make them actions I don't want to take. Predicting I'll eat a sandwich if I want one doesn't hurt me; and if others can predict that I'll cooperate on the prisoner's dilemma iff my opponent will cooperate iff I'll cooperate, so much the better for all concerned.
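The "cooperate iff my opponent will cooperate iff I'll cooperate" point can be made concrete. Here is a minimal Python sketch, assuming the textbook prisoner's dilemma payoffs (3/0/5/1) and a toy `consistent_outcomes` helper; the numbers and names are illustrative, not from the thread:

```python
# Assumed payoff table for the row player, keyed by (my move, their move).
# 3/0/5/1 are the standard textbook PD values, used here for illustration.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def consistent_outcomes(policy_a, policy_b):
    """Joint moves where each player's move equals that player's policy
    applied to a correct prediction of the other player's move."""
    return [(x, y) for x in "CD" for y in "CD"
            if x == policy_a(y) and y == policy_b(x)]

# "Cooperate iff the (predicted) opponent cooperates":
cond = lambda other: "C" if other == "C" else "D"
# An unconditional defector:
defect = lambda other: "D"

# Two mutually predictable conditional cooperators admit (C, C), which
# pays 3 to each; better for both than mutual defection's 1.
print(consistent_outcomes(cond, cond))    # [('C', 'C'), ('D', 'D')]
# Against a defector, the prediction changes the context, so the
# conditional cooperator defects too; predictability costs nothing here.
print(consistent_outcomes(cond, defect))  # [('D', 'D')]
```

Note that being transparently conditional is what makes the mutually beneficial (C, C) outcome available at all: against an unconditional cooperator, a defector would simply take the 5.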

Can you give an example of a case where being predictable would hurt someone who goes about choosing actions well in the first place? Note that, as with the PD thing above, actions are dependent on context; if the prediction changes the context, then that will already be factored into an accurate prediction.

Comment author: cousin_it 20 June 2010 08:02:08AM *  4 points

Can you give an example of a case where being predictable would hurt someone who goes about choosing actions well in the first place?

Good question. Your intuition is correct as long as your actions are chosen "optimally" in the game-theoretic sense. This is one of the ideas behind Nash equilibria: in equilibrium, your opponent can't gain anything from knowing your strategy, and vice versa. A caveat is that the Nash equilibria of many games require "mixed strategies" with unpredictable randomization, so if the opponent can predict the output of your random device, you're in trouble.
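The caveat about predictable random devices can be illustrated with matching pennies, the standard example of a game whose only equilibrium is mixed. A rough Python sketch (the seed, round count, and helper names are my own assumptions for illustration):

```python
import random

ROUNDS = 10_000

def matcher_payoff(matcher_moves, mismatcher_moves):
    # Matching pennies: the matcher gets +1 when the coins agree, -1 otherwise.
    return sum(1 if m == n else -1
               for m, n in zip(matcher_moves, mismatcher_moves))

# Equilibrium play: a private, unpredictable 50/50 randomizer. Any fixed
# opposing strategy (here: always heads) fares equally well against it;
# knowing the matcher's *strategy* gains the opponent nothing, and the
# expected payoff stays 0.
private = [random.choice("HT") for _ in range(ROUNDS)]
balanced = matcher_payoff(private, ["H"] * ROUNDS)  # close to 0

# Broken equilibrium: the matcher's "random" device uses a known seed
# (42, assumed here), so the opponent replays the same PRNG and plays
# the opposite coin every round.
rng = random.Random(42)
predictable = [rng.choice("HT") for _ in range(ROUNDS)]
spy = random.Random(42)
exploit = ["T" if spy.choice("HT") == "H" else "H" for _ in range(ROUNDS)]
exploited = matcher_payoff(predictable, exploit)
print(exploited)  # -10000: the matcher loses every single round
```

The strategy itself is still the equilibrium mix; what changes hands is the randomizer's output, and that alone turns a break-even game into a sure loss.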

Comment author: timtyler 20 June 2010 07:03:41AM 2 points

If you can accurately predict the action of a chess player faster than they can make it, then you have more time to think about your response. There are cases where this can make a difference - even if they happen to play perfectly.

Comment author: Unknowns 20 June 2010 06:16:24AM -2 points

Alicorn, your note about the PD implies that it is universally the case that there is some single action that will benefit you even if others predict it. There is no reason to think this is so; and if there is even one instance where doing what others predict you will do is harmful, then being universally predictable is a weakness.