shinoteki comments on Robust Cooperation in the Prisoner's Dilemma - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (145)
I have two proposed alternatives to PrudentBot.
If you can prove a contradiction, defect. Otherwise, if you can prove that your choice will be the same as the opponent's, cooperate. Otherwise, defect.
If you can prove that, if you cooperate, the other agent will cooperate, and you can't prove that if you defect, the other agent will cooperate, then cooperate. Otherwise, defect.
Both of these are unexploitable, cooperate with themselves, and defect against CooperateBot, if my calculations are correct. The first one is a simple way of "sanitizing" NaiveBot.
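A toy sketch of the two proposals, to make the decision rules concrete. The provability "oracle" here is just a hard-coded lookup table, not a real theorem prover, and the function names, statement labels, and the facts assumed provable against each opponent are illustrative assumptions, not derivations:

```python
def bot1(provable):
    """Proposal 1: defect on a provable contradiction; otherwise cooperate
    iff it is provable that both players make the same move."""
    if provable("contradiction"):
        return "D"
    if provable("my move equals opponent's move"):
        return "C"
    return "D"

def bot2(provable):
    """Proposal 2 (cousin_it's bot): cooperate iff it is provable that the
    opponent cooperates if I cooperate, but NOT provable that the opponent
    cooperates if I defect."""
    if provable("opponent cooperates if I cooperate") and \
            not provable("opponent cooperates if I defect"):
        return "C"
    return "D"

def oracle(facts):
    """Build a stand-in provability oracle from a set of statements
    simply assumed to be provable."""
    return lambda s: s in facts

# Against CooperateBot, both conditionals are provable (it cooperates no
# matter what), and neither a contradiction nor move-equality is provable.
vs_cooperatebot = oracle({"opponent cooperates if I cooperate",
                          "opponent cooperates if I defect"})

# Against a copy of itself: these facts are assumed provable here for
# illustration (the real justification would be a Lobian argument).
vs_self = oracle({"my move equals opponent's move",
                  "opponent cooperates if I cooperate"})

print(bot1(vs_cooperatebot), bot2(vs_cooperatebot))  # D D
print(bot1(vs_self), bot2(vs_self))                  # C C
```

Under these assumed oracles, both bots defect against CooperateBot and cooperate with themselves, matching the claimed properties.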
The second one is exactly cousin_it's proposal here.
Should this be "If you can prove that you will cooperate, defect"? As it is, I don't see how this prevents cooperation with CooperateBot, unless the agent uses an inconsistent system for proofs.
It kills the Löbian argument, I believe, since the implication "if there's a proof that you cooperate, then cooperate" is no longer true. Instead, here's a Löbian argument for defection:
Suppose there is a proof that you defect. Then either there is a proof of contradiction, or there is no proof that your move is the same as your opponent's (if there were proofs both that you defect and that your moves match, the agent would cooperate, and that provable cooperation together with the proof of defection gives a proof of contradiction). Either way, you defect.
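That case analysis is exactly the hypothesis of Löb's theorem. A sketch of how it might be formalized (my own reconstruction, writing D for "this agent defects", □ for provability in the agent's proof system, and "same move" for "my move equals my opponent's"):

⊢ □D → (□⊥ ∨ ¬□(same move))    (the case analysis above)
⊢ □⊥ → D                        (first branch of the agent: defect)
⊢ ¬□(same move) → D             (third branch of the agent: defect)
⊢ □D → D                        (combining the three lines)
⊢ D                             (Löb's theorem)

So the agent provably defects whenever the surrounding argument goes through, which is what blocks cooperation with CooperateBot.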