We study a program game version of the Prisoner's Dilemma, i.e., a two-player game in which each player submits a computer program; the programs are given read access to each other's source code and then each chooses whether to cooperate or defect. Prior work has introduced various programs that form cooperative equilibria against themselves in this game. For example, the ϵ-grounded Fair Bot cooperates with probability ϵ and with the remaining probability runs its opponent's program and copies its action. If both players submit this program, the result is a Nash equilibrium in which both players cooperate. Others have proposed cooperative equilibria based on proof-based Fair Bots, which cooperate if they can prove that the opponent cooperates (and defect otherwise). Here we show that these different programs are compatible with one another. For example, if one player submits the ϵ-grounded Fair Bot and the other submits a proof-based Fair Bot, the result is again a cooperative equilibrium of the program game version of the Prisoner's Dilemma.
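To make the ϵ-grounded construction concrete, here is a minimal sketch in which programs are modeled as Python functions that receive the opponent's program, standing in for read access to its source code; the names and the harness are illustrative, not from the paper:

```python
import random

EPSILON = 0.1  # hypothetical grounding probability; any epsilon > 0 works for the argument

def epsilon_grounded_fairbot(opponent):
    """epsilon-grounded Fair Bot: cooperate outright with probability
    EPSILON; otherwise run the opponent's program (here: call it, handing
    it our own program as its opponent) and copy its action. The grounding
    step makes the mutual recursion terminate with probability 1."""
    if random.random() < EPSILON:
        return "C"
    return opponent(epsilon_grounded_fairbot)

def defect_bot(opponent):
    # Unconditional defector, for contrast.
    return "D"

if __name__ == "__main__":
    # Self-play: the recursion grounds out in a "C", which propagates up.
    print(epsilon_grounded_fairbot(epsilon_grounded_fairbot))  # "C"
    # Against a defector it copies "D" (except with probability EPSILON).
    print(epsilon_grounded_fairbot(defect_bot))  # "D" with probability 1 - EPSILON
```

A proof-based Fair Bot would instead run a bounded search for a proof that the opponent cooperates against it; faithfully sketching that proof search is beyond a toy example, so it is omitted here.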
It would be interesting to run this with asymmetric power (encoded as more CPU for recursive analysis before timeout, better access to source code, or a way to "fake" one's source code with some probability). I'd predict that this flips the equilibrium to "D" pretty easily. The mutual perfect knowledge of strategies here is a VERY high bar for applicability to any real situation.
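A minimal sketch of the kind of power-as-compute encoding I mean, assuming a hypothetical `fuel` parameter that meters simulation depth before a timeout (a single shared counter here; an asymmetric version would give each player its own):

```python
def budgeted_fairbot(opponent, fuel):
    # Toy timeout-based Fair Bot: copy the opponent's action, but every
    # level of nested simulation burns one unit of fuel. Running out of
    # fuel before the analysis finishes forces a fallback to defection.
    if fuel <= 0:
        return "D"  # timeout: couldn't finish analyzing the opponent
    return opponent(budgeted_fairbot, fuel - 1)

# Self-play: with no epsilon-grounding to bottom out at, the mutual
# simulation exhausts the budget and collapses to defection.
print(budgeted_fairbot(budgeted_fairbot, 10))  # prints "D"
```

Note that this already defects in self-play even with symmetric budgets; the ϵ-grounding above is exactly what prevents that collapse.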
Yeah, I suspect you might never be totally sure another part of a computer system is what it claims to be. You can be almost certain, but totally certain? Perfect?