cousin_it comments on Robust Cooperation in the Prisoner's Dilemma - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (145)
Patrick, congrats on writing this up! It's nice to see MIRI step up its game.
Will two PrudentBots cooperate if they're using theories of different strength?
Yes! We checked that both formally and using the fixed-point evaluator. I should add that to the draft.
EDIT: Whoops, it's a little more complicated than I remember. They find mutual cooperation iff the system PrudentBot1 uses to prove (its opponent defects against DefectBot) is strong enough to prove the consistency of the system PrudentBot2 uses to prove (its opponent cooperates with it), and vice versa.
OK, I see. So, unlike FairBots, a PrudentBot using PA and PA+1 won't cooperate with a PrudentBot using PA+1 and PA+2. I wonder if that can be fixed?
If I were submitting a modal agent, I'd probably use a PrudentBot that uses PA to look for mutual cooperation but PA+N (for some large N) to look for defection against DefectBot. There's no simple downside to that variant.
In particular, Prudent(0,100)Bot finds mutual cooperation with Prudent(0,1)Bot, Prudent(99,1000)Bot, etc.
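The cooperation condition above can be sanity-checked with a toy script. This is a sketch under the simplifying assumption that PA+n proves Con(PA+k) exactly when n > k; the `Prudent(k, n)Bot` parameterization follows the thread (PA+k for the mutual-cooperation search, PA+n for the DefectBot check), and the function name is mine, not from the paper.

```python
def mutual_cooperation(bot1, bot2):
    """Return True iff Prudent(k1, n1)Bot and Prudent(k2, n2)Bot find
    mutual cooperation.

    Per the condition above: each bot's DefectBot-check system (PA+n)
    must prove the consistency of the other bot's cooperation-check
    system (PA+k).  Assumption: PA+n proves Con(PA+k) exactly when n > k.
    """
    (k1, n1), (k2, n2) = bot1, bot2
    return n1 > k2 and n2 > k1

# Examples from the thread:
print(mutual_cooperation((0, 1), (1, 2)))        # PA/PA+1 vs PA+1/PA+2 -> False
print(mutual_cooperation((0, 100), (0, 1)))      # -> True
print(mutual_cooperation((0, 100), (99, 1000)))  # -> True
```

This makes the asymmetry visible: raising only the DefectBot-check strength (the second parameter) costs nothing against other PrudentBots, while raising the cooperation-search strength (the first parameter) is what breaks cooperation with weaker opponents.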
Or break out the stronger systems, e.g., ZFC, possibly with some inaccessible cardinals thrown in for good measure.
That would be even better in practice, but it wouldn't be expressible in the modal formalism.