drnickbone comments on Interlude for Behavioral Economics - Less Wrong

Post author: Yvain, 06 July 2012 08:12PM


Comment author: drnickbone, 07 July 2012 09:17:21PM, 2 points

Players seemed to want to play “Friend” if and only if they expected their opponents to do so. This is not rational, but it accords with the “Tit-for-Tat” strategy hypothesized to be the evolutionary solution to Prisoner's Dilemma.

Same comment as on your previous article in the series. Tit-for-Tat co-operates with a player who co-operated last time, not with a partner that it anticipates will co-operate this time.
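To make the distinction concrete, here is a minimal sketch of Tit-for-Tat (my own illustration, not from the post): the strategy conditions only on the opponent's previous move, never on a prediction of the current one.

```python
# Sketch: Tit-for-Tat looks backward at the opponent's LAST move,
# not forward at a prediction of their NEXT move.

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then copy the opponent's last move."""
    if not opponent_history:
        return "C"               # open with cooperation
    return opponent_history[-1]  # mirror whatever they did last time

# Even if we somehow knew the opponent would defect THIS round,
# Tit-for-Tat still cooperates as long as they cooperated LAST round.
print(tit_for_tat([]))           # -> C (first round)
print(tit_for_tat(["C", "C"]))   # -> C (they cooperated last time)
print(tit_for_tat(["C", "D"]))   # -> D (they defected last time)
```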

It is reputational systems that reward correct prediction (co-operate if and only if you predict that the other player will co-operate this time). That is because the reputational damage from defecting against a co-operator is large: the co-operator gains sympathy, while the defector risks punishment or reduced co-operation from other observers. By contrast, if a player who is generally known to co-operate defects against a defector, there is usually no reputational hit (indeed there is probably a slight uplift for predicting correctly and not letting the defector get away with it).
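The reputational logic above can be sketched as follows. This is a toy model of my own; the asymmetric reputation numbers are illustrative assumptions, not figures from the comment.

```python
# Toy sketch of a reputational strategy: the move conditions on a
# PREDICTION of this round's play, and observers penalize defection
# against co-operators far more than defection against defectors.
# All reputation values are illustrative assumptions.

def reputational_move(predicted_opponent_move):
    """Co-operate iff you predict the other player will co-operate now."""
    return "C" if predicted_opponent_move == "C" else "D"

def reputation_change(my_move, their_move):
    """Asymmetric reputational consequences, as argued in the comment."""
    if my_move == "D" and their_move == "C":
        return -10   # defecting against a co-operator: large hit
    if my_move == "D" and their_move == "D":
        return +1    # punishing a defector: slight uplift
    return 0         # co-operating carries no reputational penalty here
```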

Super-rational players co-operate if and only if the other player is super-rational. If this were the strategy humans in fact followed (i.e., if super-rational players could reliably recognize each other), then co-operation would be nearly universal among humans in PDs. But it isn't.

The empirical evidence (from this show, and other studies) is that humans play a reputational strategy rather than pure Tit-for-Tat or a super-rational strategy. It appears to be what humans do, and there is a fairly convincing case that it is what we're adapted to do.

EDIT: The other evidence you quote in your article is very interesting though:

The results: If you tell the second player that the first player defected, 3% still cooperate (apparently 3% of people are Jesus). If you tell the second player that the first player cooperated.........only 16% cooperate. When the same researchers in the same lab didn't tell the second player anything, 37% cooperated.

That suggests a mixture of reputational and super-rational strategies, with a bit of "pure co-operate" thrown in as well. If there were a pure super-rational strategy, then no one would co-operate after hearing for sure that the other player had already co-operated. (The exception is if both players knew for sure, going into the game, that the other was super-rational; then they could both commit to co-operate regardless. That case is equivalent to counterfactual mugging, or to Newcomb's problem with transparent boxes.) Whereas if there were a pure reputational strategy, then knowing that the other player had co-operated would increase the probability of co-operating, not reduce it. Interesting.

I'm wondering whether there are any game-theory models that predict a mixed equilibrium between super-rational and reputational strategies, and whether the equilibrium allows a small percentage of "pure co-operators" into the mix as well?
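As a back-of-envelope check, the quoted figures can be decomposed under one simple (and entirely assumed) typology: pure co-operators always co-operate; reputational players co-operate when they believe the other co-operated (including, optimistically, when uninformed); super-rational players co-operate only under uncertainty. This is my own toy decomposition, not a model from the post.

```python
# Back-of-envelope mixture estimate from the quoted cooperation rates.
# The three-type model and its identifying assumptions are mine alone.
p_coop_after_defect = 0.03  # only pure co-operators co-operate here
p_coop_after_coop   = 0.16  # pure co-operators + reputational players
p_coop_no_info      = 0.37  # pure + reputational (optimistic) + super-rational

pure = p_coop_after_defect                            # ~3%
reputational = p_coop_after_coop - pure               # ~13%
superrational = p_coop_no_info - pure - reputational  # ~21%

print(round(pure, 2), round(reputational, 2), round(superrational, 2))
```

Of course the decomposition is only as good as its assumptions (e.g. that reputational players co-operate when uninformed), but it shows the observed rates are at least consistent with such a mixture.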

Comment author: Strange7, 08 July 2012 05:18:21AM, 0 points

Pure co-operate can be a reasonable strategy, even with foreknowledge of the opponent's defection in this round, if you think your opponent is playing something close to Tit-for-Tat and you expect to play many more rounds with them.
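A quick payoff calculation illustrates this point. The payoff values are the standard PD convention (T=5, R=3, P=1, S=0), which is my assumption; the comment names no numbers.

```python
# Toy illustration: the opponent defects THIS round but plays
# Tit-for-Tat afterwards, mirroring my current move in each later round.
# Payoffs are the conventional T=5, R=3, P=1, S=0 (an assumption).
T, R, P, S = 5, 3, 1, 0

def total_payoff(my_move, rounds):
    """My total payoff over `rounds` rounds, given my opening move."""
    if my_move == "C":
        # Exploited once (S), then we settle into mutual cooperation (R).
        return S + R * (rounds - 1)
    # Mutual defection now, and (assuming I keep defecting) forever after.
    return P + P * (rounds - 1)

print(total_payoff("C", 10))  # -> 27: absorbing one loss pays off
print(total_payoff("D", 10))  # -> 10: mutual defection is locked in
```

With a single round remaining the ranking flips (0 versus 1), which is exactly why the number of expected future rounds matters.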

Comment author: wedrifid, 07 July 2012 10:26:26PM, 0 points

Same comment as on your previous article in the series. Tit-for-Tat co-operates with a player who co-operated last time, not with a partner that it anticipates will co-operate this time.

Agree again. Yvain is misusing terms and misrepresenting evolutionary strategies. This sequence is vastly overrated.