MinibearRex comments on Bayesians vs. Barbarians - Less Wrong
I know this post is long, long dead but:
Isn't this a logical impossibility? To have knowledge of an agent is to contain its source code in your own, so A's source is contained in B, and B's is contained in A...
Alternatively, I'm considering all the strategies I could use, based on looking at my opponent's strategy, and one of them is "Cooperate only if the opponent, when playing against himself, would defect."
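That strategy is self-referential, and the regress it creates can be made concrete with a toy sketch (the function names and the two-symbol move encoding are my own illustration, not anything from the original discussion):

```python
# Toy model of the strategy "Cooperate only if the opponent,
# when playing against himself, would defect."
# An agent is modeled as a function from an opponent to a move.

COOPERATE, DEFECT = "C", "D"

def always_cooperate(opponent):
    return COOPERATE

def spiteful(opponent):
    # Cooperate iff the opponent would defect against itself.
    return COOPERATE if opponent(opponent) == DEFECT else DEFECT

# Against a cooperator, this strategy defects:
print(spiteful(always_cooperate))  # prints "D"

# Against a copy of itself, evaluating "what does the opponent do
# against himself?" requires evaluating the very same question again:
try:
    spiteful(spiteful)
except RecursionError:
    print("infinite regress")  # the self-reference never bottoms out
```

The `RecursionError` is exactly the "A is contained in B, and B is contained in A" problem: a strategy defined by simulating an opponent that is defined by simulating it never terminates.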
"Common knowledge of each other's rationality" doesn't seem to help. Knowing I use TDT doesn't give someone the ability to perform the same computation I do, and so engage TDT. They have to actually look into my brain, which means they need a bigger brain, which means I can't look into theirs. If I meet one of your perfectly rational agents who cooperates in the true Prisoner's Dilemma, I'm going to defect. And win. Rationalists should win.
The problem with your plan is that TDT agents don't always cooperate. I will only cooperate if I have reason to believe that you and I are similar enough that we will decide to do the same thing for the same reasons. I hate to burst your bubble, but you are not the first person in all of recorded history to think of this. Other people are allowed to be smart too. If you come up with a clever reason to defect when playing against me, it is very possible (perhaps even likely, although I don't know you all that well) that I will think of it too.
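The crudest version of "cooperate only if we're similar enough to decide alike" is an agent that cooperates only with an exact copy of itself, which sidesteps the regress above because comparing source code is a terminating check, unlike simulating the opponent. A minimal sketch (the name `clique_bot` and the string-comparison shortcut are my own simplification, not a real TDT implementation):

```python
def clique_bot(my_source: str, opponent_source: str) -> str:
    # Cooperate only with an exact copy of myself: if our decision
    # procedures are identical, we provably make the same choice
    # for the same reasons. Defect against everything else.
    return "C" if opponent_source == my_source else "D"

src = "clique_bot"  # stand-in for the agent's actual source code

print(clique_bot(src, src))         # two identical agents cooperate
print(clique_bot(src, "defector"))  # anyone else gets defection
```

Real TDT-style reasoning is looser than exact textual equality, but the sketch shows the point of the reply: a clever defection plan only works against an opponent who can't run the same reasoning, and an identical (or sufficiently similar) agent will, by construction, arrive at it too.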