Related to: The True Prisoner's Dilemma, Newcomb's Problem and Regret of Rationality
In The True Prisoner's Dilemma, Eliezer Yudkowsky pointed out a critical problem with the way the Prisoner's Dilemma is taught: the distinction between utility and avoided-jail-time is not made clear. The payoff matrix is supposed to represent the former, even though its numerical values coincidentally match the latter. And worse, people don't naturally assign utility as per the standard payoff matrix: their compassion for the friend in the "accomplice" role means they wouldn't feel quite so good about a "successful" backstabbing, nor quite so bad about being backstabbed. ("Hey, at least I didn't rat out a friend.")
For that reason, you rarely encounter a true Prisoner's Dilemma, even an iterated one. The above complications prevent real-world payoff matrices from working out that way.
Which brings us to another unfortunate example of this misunderstanding being taught.
On the New York Times's "Freakonomics" blog, Professor Daniel Hamermesh gleefully recounts an experiment he recently performed (and, he says, performs often) on students in his intro economics course, which is essentially the Prisoner's Dilemma (henceforth, PD).
Now, before going further, let me make clear that Hamermesh is no small player. Just take a look at all the accolades and accomplishments listed on his Wikipedia page or the CV on his university page. This, then, is the teaching of a professor at the top of his field, and it's only with hesitation that I proceed to allege that he's Doing It Wrong.
Hamermesh's variant of the PD is to pick eight students and auction off a $20 bill to them, with the money split evenly among the winners if there are multiple highest bids. Here, cooperating corresponds to adhering to a conspiracy in which everyone agrees to make the same low bid and thereby split a large profit. Defecting corresponds to breaking the agreement and bidding slightly higher so you can take everything for yourself; if the others continue to cooperate, their bids are lower and they get nothing.
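The arithmetic of this setup can be sketched in a few lines. Note that the function below is my own illustration, as is the assumption that only winning bidders pay their bids; neither comes from Hamermesh's post.

```python
# Sketch of the payoffs in Hamermesh's $20-bill auction. The function
# and the only-winners-pay assumption are my own illustration, not
# details given in the original post.

def auction_payoffs(bids, prize=20.00):
    """Split the prize evenly among the highest bidders; winners pay
    their bid, losers pay nothing. Returns each bidder's net payoff."""
    high = max(bids)
    winners = sum(1 for b in bids if b == high)
    share = prize / winners
    return [round(share - b, 2) if b == high else 0.00 for b in bids]

# All eight students collude at $0.01: each nets $20/8 - $0.01 = $2.49.
collusion = auction_payoffs([0.01] * 8)

# One student defects to $0.05: she nets $20 - $0.05 = $19.95,
# and the seven remaining colluders get nothing.
defection = auction_payoffs([0.01] * 7 + [0.05])
```

Within the narrow game, defecting by a few cents turns a $2.49 payoff into nearly the whole $20, which is exactly the temptation the exercise is built around.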
Here is how Hamermesh describes the result (italics mine, bold in the original):
Today seven of the students stuck to the collusive agreement, and each bid $.01. They figured they would split the $20 eight ways, netting $2.49 each. Ashley, bless her heart, broke the agreement, bid $0.05, and collected $19.95. The other 7 students booed her, but I got the class to join me in applauding her, as she was the only one who understood the game.
The game? Which game? There's more than one game going on here! There's the neat, well-defined, artificial setup that Professor Hamermesh has laid out. On top of that, there's the game we know better as "life", in which the later consequences of superficially PD-like scenarios cause us to assign different utilities to successful backstabbing (defecting when others cooperate). There's also the game of becoming the high-status professor's Wunderkind. And while Ashley (whose name he bolded for some reason) may have won the narrow, artificial game, she also told everyone there, "Trusting me isn't such a good idea." That is exactly the kind of consequence we normally worry about in our everyday lives.
For this reason, I left the following comment:
No, she learned how to game a very narrow instance of that type of scenario, and got lucky that someone else didn’t bid $0.06.
Try that kind of thing in real life, and you’ll get the social equivalent of a horse’s head in your bed.
Incidentally, how many friends did Ashley make out of this event?
I probably came off as more "anticapitalist" or "collectivist" than I really am, but the point is important: betraying your partners has long-term consequences which aren't apparent when you only look at the narrow version of this game.
Hamermesh's point was actually to show the difficulty of collusion in a free market. However, to the extent that markets can pose barriers to collusion, it's certainly not because going back on your word will consistently work out in just the right way to divert a huge amount of utility to yourself -- which happens to be the very reason Ashley "succeeded" (to the professor's applause) in this scenario. Rather, it's because the incentives for making such agreements fundamentally change; you are still better off maintaining a good reputation.
Ultimately, the students learned the wrong lesson from an unrealistic game.
I'm considering a top-level post (my first) on experiential games, with a little background on why they might be worthwhile for LWers. There have been a few reports of experiences, such as the estimation/calibration game at one meetup, but I feel that some detail on the constructivist approach, along with practical advice on how to set up such games, might be useful.
I use experiential games quite a bit; one that I remember fondly from a few years ago was adapted from Dietrich Doerner's /The Logic of Failure/: the one where you are asked to control a thermostat. Doerner's account of people actually playing the game is enlightening, with many irrational reactions on display. But reading about it is one thing, and actually playing the game quite another, so I got a group of colleagues together and we gave it a try. By the reports of all involved, it was one of the most effective learning experiences they'd had.
An experiential learning game focusing on the basics of Bayesian reasoning might be a valuable design goal for this community - one I'd definitely have an interest in playing.
By all means write it, this stuff sounds very interesting.
Possibly related are the PCT demo games mentioned on LW before. I imagine a Bayesian learning game to be similar in spirit (better implement it in Flash rather than Java, though). Also tangentially related are the cognitive testing games.