Grant comments on The Truly Iterated Prisoner's Dilemma - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
If "rational" actors always defect and only "irrational" actors can establish cooperation and increase their returns, this makes me question the definition of "rational".
However, it seems like the priors of a true prisoner's dilemma are hard to come by (absolutely zero knowledge of the other player and zero communication). Don't we already know more about the paperclip maximizer than the scenario allows? Any superintelligence would understand tit-for-tat playing, and know that other intelligences should understand it as well. Knowing this, it seems like it would first try a tit-for-tat strategy when playing with an opponent of some intelligence.
If the intelligence knew the other player was stupid, it wouldn't bother. Humans don't try to cooperate with wild wolves or hawks when hunting, after all.
Eliezer,
I am guilty of the above. In the one-shot PD there is no communication and no chance for cooperation to help. In the iterated PD, there is a chance the other player will be playing tit-for-tat as well.
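The tit-for-tat reasoning in this thread can be made concrete with a toy simulation. This is a hedged sketch: the strategy names, the round count, and the standard 3/0/5/1 payoff matrix are my assumptions for illustration, not anything stated in the comments.

```python
# Toy iterated Prisoner's Dilemma. Payoffs use the conventional matrix:
# mutual cooperation 3/3, mutual defection 1/1, lone defector 5, sucker 0.
def payoff(a, b):
    """Return (payoff_a, payoff_b); 'C' = cooperate, 'D' = defect."""
    table = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
             ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}
    return table[(a, b)]

def tit_for_tat(opponent_history):
    # Cooperate on the first round, then copy the opponent's last move.
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strat_a, strat_b, rounds=10):
    # Each strategy sees only the opponent's past moves.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a)
        b = strat_b(hist_b)
        pa, pb = payoff(a, b)
        score_a += pa
        score_b += pb
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

# Two tit-for-tat players cooperate throughout: 30 points each over 10 rounds.
# Tit-for-tat against always-defect loses one round, then both defect:
# 9 points versus 14 -- far below what two cooperators earn.
print(play(tit_for_tat, tit_for_tat))     # (30, 30)
print(play(tit_for_tat, always_defect))   # (9, 14)
```

This is the sense in which two players who each expect the other to understand tit-for-tat can both come out ahead of mutual defection.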