Variants I'd like to see:
1) You can observe rounds played by other bots.
2) You can partially observe rounds played by other bots.
3) (The really interesting one.) You get a copy of the other bot's source code and are allowed to analyze it. All bots have 10,000 instructions per turn, and if you run out of time the round is negated (both players score 0 points). There is a standard function for spending X instructions evaluating a piece of quoted code, and if the evaled code tries to eval code, it asks the outer eval-ing function whether it should simulate faithfully or return a particular value. (This enables you to say, "Simulate my opponent, and if it tries to simulate me, see what it will do if it simulates me outputting Cooperate.")
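For concreteness, here is a minimal sketch of what such a guarded eval could look like, in Python. Everything in it is hypothetical (the names, the flat per-call instruction costs, the move encoding); it just shows the shape of the "ask the outer eval" mechanism:

```python
class BudgetExceeded(Exception):
    pass

class Budget:
    """Shared instruction counter; overspending forfeits the round."""
    def __init__(self, instructions):
        self.remaining = instructions

    def spend(self, n):
        self.remaining -= n
        if self.remaining < 0:
            raise BudgetExceeded()

def guarded_eval(bot, opponent, budget, forced_answer=None):
    """Run bot under the budget. The bot gets an eval function; when it uses
    it to simulate its opponent, forced_answer (if set) is returned instead
    of recursing, implementing "show the inner simulation me cooperating"."""
    def eval_opponent(cost, pretend_i_play=None):
        budget.spend(cost)
        if forced_answer is not None:
            return forced_answer
        return guarded_eval(opponent, bot, budget, forced_answer=pretend_i_play)
    budget.spend(100)  # flat cost per invocation; a real version would meter steps
    return bot(eval_opponent)

# "Simulate my opponent, and if it tries to simulate me, see what it will do
# if it simulates me outputting Cooperate."
def me(eval_opponent):
    return 'C' if eval_opponent(cost=1000, pretend_i_play='C') == 'C' else 'D'

def trusting_bot(eval_opponent):  # an opponent that never simulates
    return 'C'

print(guarded_eval(me, trusting_bot, Budget(10_000)))  # prints C
```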
I just hacked up something like variant 3; haven't tried to do anything interesting with it yet.
if you run out of time the round is negated (both players score 0 points).
This makes the game matrix bigger for no reason. Maybe replace this with "if you run out of time, you automatically defect"?
There is a standard function for spending X instructions evaluating a piece of quoted code, and if the evaled code tries to eval code, it asks the outer eval-ing function whether it should simulate faithfully or return a particular value.
Haha, this incentivizes players to reimplement eval themselves and avoid your broken one! One way to keep eval as a built-in would be to make code an opaque blob that can be analyzed only by functions like eval. I suggested this version a while ago :-)
If you build your own eval and it returns a different result than the built-in eval, you would know you're being simulated.
(This enables you to say, "Simulate my opponent, and if it tries to simulate me, see what it will do if it simulates me outputting Cooperate.")
But it will not be simulating "you"; it will be simulating the game under alternative assumptions about its own behavior. You ought to behave differently depending on what those assumptions are; otherwise you'll think the opponent will think defection is better, making it true.
I've written a pretty good program to implement variant 3, but I'm such a lurker on this site that I lack the necessary karma to submit it as an article. So here it is as a comment instead:
Inspired by prase's contest (results here) and Eliezer_Yudkowsky's comment, I've written a program that plays iterated games of prisoner's dilemma, with the caveat that each player can simulate his opponent. I'm now running my own contest. You can enter your own strategy by sending it to me in a message. The deadline to enter is Sunday, September 25; the contest itself will run shortly thereafter.
Each strategy plays 100 iterations against each other strategy, with cumulative points. Each specific iteration is called a turn, and each set of 100 iterations is called a match (thanks to prase for the terminology). Scoring is CC: (4,4), DC: (7,0), CD: (0,7), DD: (2,2).
Every turn, each strategy receives a record of all the moves it and its opponent have made this match, a lump of time to work, and its opponent's code. Strategies can't review enemy code directly, but they can run it through any simulations they want before deciding their move, within the limits of the amount of time they have to work.
Note that strategies can not ...
The most reliable fix to the recursion problem that I can think of is to implement a hard resource limit, whereby if you spend (say) 9500 of 10000 processing units on analysis, then you drop the analysis and implement a dumb strategy with whatever juice you have left. Probably wouldn't take much more than a simple counter being incremented in a bottom-level loop somewhere, as long as you're looking at your opponent's source through a level of indirection rather than just jumping straight into it. Some analysis strategies might also converge on a solution rather than finding an exact one, although this would be easier and more reliable with behavioral than with source analysis.
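A minimal sketch of that counter, assuming the 9,500-of-10,000 split above; `analyse` here is a stand-in for whatever (too slow) source-analysis loop the bot actually runs:

```python
ANALYSIS_CAP = 9_500   # of 10,000 units; the rest is reserved for the fallback

def tit_for_tat(history):
    return history[-1] if history else 'C'      # dumb, cheap fallback

def analyse(opponent_source, history):
    """Stand-in for source analysis: yields successive refinements of a move
    recommendation, one per instruction spent."""
    for _ in range(50_000):                      # deliberately never finishes
        yield 'C'

def choose_move(opponent_source, history):
    spent, best_guess = 0, None
    for guess in analyse(opponent_source, history):
        spent += 1
        if spent >= ANALYSIS_CAP:                # drop the analysis
            best_guess = None
            break
        best_guess = guess
    return best_guess if best_guess is not None else tit_for_tat(history)

print(choose_move("<opponent source>", ['D']))   # analysis times out, plays D
```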
However, trapdoors like this are exploitable: they give agents an incentive to nearly but not quite run out the clock before making a decision, in hopes that their opponents will give up and fall back on the simple and stupid. The game theory surrounding this seems to have PD-like characteristics in its own right, so it may just have pushed the relevant game-theoretical decisions up to the metagame.
There is actually a human algorithm in the cooperate/defect space - which is to distrust anyone who seems to spend a long time making a cooperate decision. Anyone who has a really quick, simple way of deciding to cooperate is OK - anyone who takes a long time is thinking about whether you will defect, or whether they should defect - and you should probably find someone else to trust. This is one of the good things about Tit for tat - it passes this test.
The reply is (to my great surprise; I thought this bit wasn't an important part of that strategy) much better. It will earn more than 7200 points, instead of the 3200 before the change.
It looks like malthrin's bot (I) may have won only because it defected on the last 2 turns. There were enough defect-on-the-last-turn bots for defecting on the last two to be a smart approach, and of the defect-on-the-last-two bots malthrin's was the closest to tit-for-tat.
In the longer evolutionary run, the winning strategy (O) was the one that defected on the last 3 turns. It stuck around until the rest of the population consisted of only defect-on-the-last-two bots (I and C4), and then it took over.
I'd like to see you re-run this competition, adding in a few variants of tit-for-tat which always defect on the last n turns (for n=2,3,4) to see whether I and O are benefiting from their other rules or just from their end-of-game defections.
The tournament models natural selection, but no changes, and therefore no evolution, occur.
Idea: everyone has access to a constant 'm', which they can use in their bot's code however they like. m is set by the botwriter's initial conditions; then, when a bot has offspring in the natural selection tournament, one-third of the offspring have their m incremented by 1, one-third have theirs decremented by 1, and one-third keep their m the same. In this manner, you may plug m into any formulas you want to mutate over time.
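A sketch of that mutation rule, assuming each surviving bot leaves k offspring (when k isn't divisible by three, the remainder is assigned at random):

```python
import random

def mutate_offspring(parent_m, k):
    """Split k offspring into thirds: m - 1, m, and m + 1."""
    deltas = [-1, 0, +1] * (k // 3) + random.sample([-1, 0, +1], k % 3)
    return [parent_m + d for d in deltas]

print(mutate_offspring(parent_m=85, k=7))
# e.g. [84, 85, 86, 84, 85, 86, 85] -- the population drifts around good m
```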
In the first PD simulation I made, with five strategies I got from my friends, I applied such a model, albeit with random mutations rather than division of descendants into thirds. The results were not as interesting as I had expected. Only one strategy showed a non-trivial optimum (it was basically TfT-but-defect-after-turn-n, and the population made a sort of Gaussian distribution around n=85). All other strategies ended up with extremal (and quite obvious) values of the parameter.
I just want to point out that it's probably not good to use this as a basis for judging whether LessWrong improves performance in these games, because the strategy I submitted (and, I suspect, some other people's) wasn't the best I could think of; it was just cute and simple. I did it because I got the feeling that this tournament was for fun, not for real competition or rationality measurement.
It's normally bad form to just write a comment saying "wow, this is awesome" - but I thought an upvote wasn't enough.
So:
Wow, this is awesome. Thank you for doing this and sharing the results.
This definitely seems like main material to me. Thanks for putting it together and for the very nice summary of results.
Have you considered re-running the last scenario of 1000 generations with variations in the initial populations? For example, starting with Vengeful strategies being twice as common as the other strategies. It would be interesting to see if there are starting conditions that would doom otherwise successful strategies.
Excellent illustration of the Exploration vs Exploitation tradeoff, and I absolutely love how Second Chance got piggybacked by strategy I in the longer selection run!
Also, it would be curious to see what would happen if one allowed strategies to actually evolve.
I don't have the impression that the participants did much of the relevant math. Few strategies were optimized and a considerable number weren't sane at all. At this point, delving into the depths of truly evolutionary IPD tournaments will not yield as much insight on the whole matter as it could.
There's a number of rules I have put together for developing successful strategies as well as theoretical observations on these kinds of tournament in general; if there is interest in them I can post them in the discussion area.
I think message corruption would be the most interesting change. Grim Trigger doesn't work as well if there is a 1% chance of accidentally defecting.
I wonder how this was affected by the fact that the overall scenario was in fact zero-sum, in that in order for a strategy to make more copies of itself it had to cause there to be fewer copies of other strategies?
It would be interesting to see a rematch where each strategy gets one copy of itself in the next generation for every N points it earns in the current generation, with no limit on the total number of copies.
It should be possible to efficiently compute the deterministic behavior of an infinite population under these rules. In particular, the presence of a slow strategy should not make it hard to simulate a large population for lots of generations. (Assuming strategies get to depend only on their current opponent, not on population statistics.)
First find the expectation of the scores for an iterated match between each pair of strategies: If they're both deterministic, that's easy. If there's some randomness but both compress their dependence on history down to ...
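Given such a matrix of expected match scores, the whole infinite-population computation fits in a few lines. A minimal sketch (the matrix entries below are made up for illustration):

```python
def step(p, M):
    """One deterministic generation under random pairing: the fitness of
    strategy i is sum_j p[j] * M[i][j], and next-generation shares are
    proportional to p[i] * fitness[i]."""
    fitness = [sum(pj * mij for pj, mij in zip(p, row)) for row in M]
    unnorm = [pi * fi for pi, fi in zip(p, fitness)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

M = [[400, 390],   # M[i][j]: expected match score of strategy i against j
     [397, 100]]   # e.g. an end-defector that does badly against itself
p = [0.5, 0.5]     # initial population shares
for _ in range(1000):
    p = step(p, M)
print(p)  # evolves without sampling noise, however slow the strategies are
```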
The obvious way to make a cliquebot which would have a chance in this environment:
Tit for tat turns 1-94. If that was all cooperation, then turns 95, 96 are D, C; otherwise, continue with TFT. Turns 97-100 are C if all matches so far have agreed (facing a clone), or D if not.
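A sketch of those rules in Python (my reading of the description, with 1-based turns and 'C'/'D' move lists):

```python
def afterparty(turn, my, opp):
    """my/opp are this match's moves so far; turn is the current turn."""
    if turn >= 97:
        # handshake check: a clone's moves mirror ours exactly
        clone = all(m == o for m, o in zip(my, opp))
        return 'C' if clone else 'D'
    clean = all(m == 'C' and o == 'C' for m, o in zip(my[:94], opp[:94]))
    if turn == 95 and clean:
        return 'D'                            # first half of the D,C handshake
    if turn == 96 and clean:
        return 'C'
    return 'C' if turn == 1 else opp[-1]      # otherwise plain tit for tat

# Two copies recognise each other and cooperate through to turn 100:
my, opp = [], []
for t in range(1, 101):
    a, b = afterparty(t, my, opp), afterparty(t, opp, my)
    my.append(a); opp.append(b)
print(''.join(my[93:]))   # CDCCCCC -- handshake on 95-96, cooperation after
```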
This kind of thing would also help prevent infinite escalation of the last-N-turns defection.
I call it "Afterparty" if there's ever a future tournament.
You wrote of U: "This one is a real beast, with code more than twice as long as that of its competitors. It actually models the opponent's behaviour as a random process specified by two parameters: probability of cooperation when the second strategy (which is the RackBlockShooter) also cooperates, and probability of cooperation when RBS defects. This was acausal: the parameters depend on RBS's behaviour in the same turn, not in the preceding one."
Isn't that just a simple bug in U? Probably Nic Smith meant it to model the opponent's responses to U's behavior on the previous turn, not the correlations between their actions on a given turn.
I suggest an everlasting battle of 1024 (or something) strategy bots. They die and survive under more or less the present tournament rules. Everybody has the right to write a viable bot and throw it in, let's say once a day, provided that a bot with exactly the same code isn't already there. The upper limit for the code size is 1024 (or something) bytes. The current menagerie is visible here or somewhere else, along with the most vital statistics.
There is one more lesson that I have learned: the strategies which I considered most beautiful (U and C1) played poorly and were the first to be eliminated from the pool. Both U and C1 tried to experiment with the opponent's behaviour and use the results to construct a working model thereof. But that didn't work in this setting: while those strategies were losing points experimenting, dumb mechanical tit-for-tats were maximising their gain. There are situations when the cost of obtaining knowledge is higher than the knowledge is worth, and this was one of those situations.
I think the problem with U was that in practice it quickly wound up playing all defect against tit-for-tat style strategies.
Having a known number of rounds seems like a problem to me. Everybody wants to defect on the last round. It might be interesting to retry with a number of rounds chosen randomly from 100 to 110, or where after round 100 each round has a 25% chance of being the final round. However, a large number of matches might be needed to offset the advantage of playing in some matches with a higher-than-average number of rounds.
The way it goes now, we're basically playing rock-paper-scissors.
"Tit-for-tat until round 100" is beat by "tit-for-tat until 99", which is beat by "tit-for-tat until 98", which is beat by "tit-for-tat until 97" ect. At some point, "tit-for-tat until 100" becomes a winning strategy again, but only once the "tit-for-tat until [some arbitrarily low number]" strategies have become predominant.
Are there plans for another such tournament? There should be. I think it might be neat to see this as a quarterly happening or something.
A's abuse of rules is, in my opinion, not slight. If everyone tried to stuff the ballot box in the same way... in practice it seems the main difference would be more Fuck Them All, but in theory it would lead to a skewed and uninformative tournament, decided more by popular vote than by strategic merit.
This raises the question of what sort of psychology might lead the author to cooperate on the object level while defecting on the meta level.
Then we have a special "class" of DefectBots (a name stolen from Orthonormal's excellent article) consisting of only one single strategy. But this is by far the most popular strategy — I have received it in four copies. Unfortunately for DefectBots, according to the rules they have to be included in the pool only once.
A shame the defectors didn't cooperate a little better! A few more of us and some noise and we'd have been unstoppable. ;)
Thanks for doing this!
My prediction was wrong. Looking back, there were several pieces of evidence I got after my prediction that should have caused me to issue a retraction rather than a modification. I recall the part in parentheses being an edit, and once I realized that, I should have adjusted my confidence rather than my statement.
I'm curious how things go if you only include the 11 strategies submitted by LWers with at least 500 karma.
If we do this again, I'm almost tempted to submit a really perverse example which does the opposite of tit for tat. So, defect on the first turn, and cooperate on turn n iff they defected on n-1. I'm pretty sure this would go really badly.
Hmm, I notice also that no one tried naive always cooperate. Would be interesting to see how well that does.
I'd be very interested in seeing the same sort of thing where the number of rounds is randomly chosen and they don't know how many rounds they are going to go up against.
Note that the CliqueBots' failure is not a good reason to think that CliqueBot-like strategies will fail in evolutionary contexts in real life. In particular, life generally has a lot of cues for telling how closely related something is (such as skin coloration). In this sort of context CliqueBots are losing in part because they need at one point to defect against themselves to...
A cheap talk round would favor CliqueBots.
That O only took off once other variants were eliminated suggests a rock-paper-scissors relationship. But I suspect O only lost early on because parts of its ruleset are too vulnerable to the wider bot population. So which rules was O following most when it lost against early opponents, and which rules did it use to beat I and C4?
O is tit-for-tat with 3 defections at the end (TFT3D). It has extra rules to help with weird players and randoms. Those don't matter much once the weird guys are gone and it becomes a game between the tit-for-tats so ignore those. There
I is TFT2D, and roughly speaking so is C4. All the next players, who are slightly behind the TFT2Ds, are approximations of TFT1D (with random-killer noise that we can again ignore once the real fight starts); then there are plenty of TFT0Ds as well.
Consider the following match ups:
Before that, while the pure tit-for-tats (TFT0D) are alive, we of course also have:

* TFT0D vs TFT1D, TFT2D and TFT3D: TFT0D earns 297, 296, 295 respectively, while the greater cheaters earn the 302...
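These figures can be checked by direct simulation. A sketch, with payoff values that are an assumption on my part (the comment doesn't state them), chosen because they reproduce the numbers above:

```python
PAYOFF = {('C', 'C'): (3, 3), ('D', 'C'): (5, 0),     # assumed per-turn payoffs
          ('C', 'D'): (0, 5), ('D', 'D'): (2, 2)}

def tft_nd(n, turn, opp_history, rounds=100):
    """Tit for tat that defects on the last n turns (TFTnD)."""
    if turn > rounds - n:
        return 'D'
    return opp_history[-1] if opp_history else 'C'

def match(n, m, rounds=100):
    a_hist, b_hist, a_score, b_score = [], [], 0, 0
    for turn in range(1, rounds + 1):
        a = tft_nd(n, turn, b_hist, rounds)
        b = tft_nd(m, turn, a_hist, rounds)
        sa, sb = PAYOFF[(a, b)]
        a_score += sa; b_score += sb
        a_hist.append(a); b_hist.append(b)
    return a_score, b_score

print(match(0, 1), match(0, 2), match(0, 3))
# (297, 302) (296, 301) (295, 300)
```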
Here's a lecture (from Stanford's Human Behavioral Biology course) which talks about Axelrod's original tournaments and the various strategies that emerged then (starts at 45:00, runs until 1:00:00; then he talks about how this applies to biological systems).
Thanks, this is all fascinating stuff.
One small suggestion: if you wanted to, there are ways you could eliminate the phenomenon of 'last round defection'. One idea would be to randomly generate the number of rounds according to a geometric distribution (the discrete analogue of the exponential). This is equivalent to having, on each round, a small constant probability that this is the last round. To be honest though, the 'last round' phenomenon makes things more rather than less interesting.
Other ways to spice things up would be: to cause players to make mistakes with small probability (say a 1% chance of defecting when you try to co-operate, and vice versa); or have some probability of misremembering the past.
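Both tweaks take only a few lines; the 1% figures are the ones suggested above, and a constant per-round stopping chance gives a geometrically distributed length:

```python
import random

def match_length(p_end=0.01):
    """Each round has a constant small chance of being the last."""
    n = 1
    while random.random() > p_end:
        n += 1
    return n

def noisy(move, p_flip=0.01):
    """Trembling hand: with probability p_flip, play the opposite move."""
    return move if random.random() > p_flip else ('D' if move == 'C' else 'C')
```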
In the interest of seeing what strategies like U and C1 could do regarding experimenting with the opponent's behavior, I would be interested in a prisoner's dilemma variant that had a second score, worth fewer points but still able to add up, enabling a strategy to come out on top if it comes down to neck-and-neck. Real life kind of operates like that: there are ways to win little victories over others, but those strategies are often telling about your character.
Very interesting results! The pool of strategies influences the success of each individual strategy!
If this ever happens again I'd make one for the long-term evolutionary one that tries to learn strategies U-style, and then remembers what it learned in future rounds. If that's allowed.
Wait, why was a fixed number of rounds used?
Edit: So, all the submitted strategies were deterministic? Or did I miss one? Was this a requirement?
Edit again: Oops, scratch that, I see now the control group included randomized strategies.
What to make of the "Prisoner's Dilemma" as an analogy for life and society? Some people see it as a close analogy for life, and use it to explain why it's so hard to get individuals to cooperate for mutual benefit rather than acting in narrow self-interest. But this is missing something extremely important: the ability of the players to change the rules. In fact, prisoners do exactly that. If you "defect" and rat out your partner in crime, you may get a reduced sentence, but you're likely to be marked as a "snitch"...
About two weeks ago I announced an open competition for LessWrong readers inspired by Robert Axelrod's famous tournaments. The competitors had to submit a strategy which would play an iterated prisoner's dilemma of fixed length: first in the round-robin tournament, where the strategy plays a hundred-turn match against each of its competitors exactly once, and second in the evolutionary tournament, where the strategies are randomly paired against each other and their gain is translated into the number of their copies present in the next generation; the strategy with the highest number of copies after generation 100 wins. More details about the rules were described in the announcement. This post summarises the results.
The Zoo of Strategies
I have received 25 contest entries containing 21 distinct strategies. These I have divided into six classes based on superficial similarities (except the last class, which is a catch-all category for everything which doesn't belong anywhere else, something like adverbs within the classification of parts of speech, or the now defunct Vermes in the animal kingdom). The first class is formed by Tit-for-tat variants, probably the most obvious choice for a potentially successful strategy. Apparently so obvious that at least one commenter declared high confidence that tit-for-tat would make up more than half of the strategy pool. That was actually a good example of misplaced confidence, since the number of received tit-for-tat variants (where I put anything which behaves like tit-for-tat except for isolated deviations) was only six, two of them being identical and thus counted as one. Moreover, there wasn't a single true tit-for-tatter among the contestants; the closest we got was
A (-, -): On the first turn of each match, cooperate. On every other turn, with probability 0.0000004839, cooperate; otherwise play the move that the opponent played on the immediately preceding turn.
(In the presentation of strategies, the letter in bold serves as a unique identifier. The following parentheses include the name of the strategy — if the author has provided one — and the name of the author. I use the author's original description of the strategy when possible. If that's too long, an abbreviated paraphrase is given. If I found the original description ambiguous, I may give a slightly reformulated version based on subsequent clarifications with the author.) The author of A was the only one who requested that his/her name be withheld, and the strategy is nameless, so both arguments in the parentheses are empty. The reason for the obscure probability was to make the strategy unique. The author says:
This was perhaps a slight abuse of rules, but since I am responsible for failing to make the rules immune to abuse, I had to accept the strategy as it is. Anyway, it turned out that the trivial variation was needless for the stated purpose.
The remaining strategies from this class were more or less standard with B being the most obvious choice.
B (-, Alexei): Tit-for-Tat, but always defect on last turn.
C (-, Caerbannog): Tit-for-tat with 20% chance of forgiving after opponent's defection. Defect on the last turn.
D (-, fubarobfusco and DuncanS): Tit-for-tat with 10% chance of forgiving.
E (-, Jem): First two turns cooperate. Later tit-for-tat with chance of forgiving equal to 1/2x where x is equal to number of opponent's defections after own cooperations. Last turn defect.
The next category of strategies I call Avengers. The Avengers play a nice strategy until the opponent's defective behaviour reaches a specified threshold. After that they switch to irrevocable defection.
F (-, Eugine_Nier): Standard Tit-for-Tat with the following modifications: 1) Always defect on the last move. 2) Once the other player defects 5 times, switch to all defect.
G (-, rwallace): 1. Start with vanilla tit-for-tat. 2. When the opponent has defected a total of three times this match, switch to always defect for the rest of the match. 3. If the number of rounds is known and fixed, defect on the last round.
H (-, Michaelos): Tit for Tat, with two exceptions: 1: Ignore all other considerations and always defect on the final iteration. 2: After own defection and opponent's cooperation, 50 percent of the time, cooperate. The other 50 percent of the time, always defect for the rest of the game.
I (-, malthrin): if OpponentDefectedBefore(7) then MyMove=D else if n>98 then MyMove=D else MyMove=OpponentLastMove
J (Vengeful Cheater, Vaniver): Set Rage = 0. Turn 1: cooperate. Turn 2-99: If the opponent defected last round, set Rage to 1. (If they cooperate, no change.) If Rage is 1, defect. Else, cooperate. Turn 100: defect.
K (Grim Trigger, shinoteki): Cooperate if and only if the opponent has not defected in the current match.
Then we have a special "class" of DefectBots (a name stolen from Orthonormal's excellent article) consisting of only one single strategy. But this is by far the most popular strategy — I have received it in four copies. Unfortunately for DefectBots, according to the rules they have to be included in the pool only once. The peculiar name for the single DefectBot comes from wedrifid:
L (Fuck them all!, MileyCyrus and wedrifid and vespiacic and peter_hurford): Defect.
Then, we have a rather heterogeneous class of Lists, whose only common feature is that their play is specified by a rather long list of instructions.
M (-, JesusPetry): Turn 1: cooperate. Turns 2 to 21: tit-for-tat. Turn 22: defect. Turn 23: cooperate. Turn 24: if opponent cooperated in turns 21 and 22, cooperate; otherwise, do whatever the opponent did on the previous turn. Then tit-for-tat; the scenario from turns 22-24 repeats itself in turns 35-37, 57-59, and 73-75. Turns 99 and 100: defect.
N (-, FAWS): 1. Cooperate on the first turn. 2. If the opponent has defected at least 3 times: Defect for the rest of the match. 3. Play tit for tat unless specified otherwise. 4. If the opponent has cooperated on the first 20 turns: Randomly choose a turn between 21 and 30. Otherwise continue with 11. 5. If the opponent defects after turn 20, but before the chosen turn comes up: Play CDC the next three turns, then continue with 12. 6. Defect on the chosen turn if the opponent hasn't defected before. 7. If the opponent defects on the same turn: Cooperate, then continue with 12. 8. If the opponent defects on the next turn: Cooperate, then continue with 11. 9. If the opponent defects on the turn after that (i. e. the second turn after we defected): Cooperate, then continue with 12. 10. If the opponent cooperates on every turn up to and including two turns after the defection: Defect until the opponent defects, then cooperate twice and continue with 11. 11. Defect on the last two turns (99 and 100). 12. Defect on the last two turns unless the opponent defected exactly once.
O (Second Chance, benelliott): Second Chance is defined by 5 rules. When more than one rule applies to the current situation, always defer to the one with the lower number. Rules: 1. Co-operate on move 1, defect on moves 98, 99 and 100. 2. If Second Chance has co-operated at least 4 times (excluding the most recent move) and in every case co-operation has been followed by defection from the other strategy, then always defect from now on. 3. If Second Chance has co-operated at least 8 times, and defected at least 10 times (excluding the previous move), then calculate x, the proportion of the time that co-operation has been followed by co-operation and y, the proportion of the time that defection has been followed by co-operation. If 4x < 6y + 1 then defect until that changes. 4. If the number of times the opposing strategy has defected (including the most recent move) is a multiple of 4, co-operate. 5. Do whatever the opposing strategy did on the previous move.
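Rule 3's inequality is less arbitrary than it looks. Assuming the tournament's per-turn payoffs were 4 for mutual cooperation, 7 and 0 for unilateral defection, and 1 for mutual defection (consistent with the point totals quoted below), it is a plain expected-value test:

```latex
% x = P(opponent cooperates | Second Chance cooperated)
% y = P(opponent cooperates | Second Chance defected)
E_{\mathrm{cooperate}} = 4x + 0\,(1 - x) = 4x,
\qquad
E_{\mathrm{defect}} = 7y + 1\,(1 - y) = 6y + 1,
```

so "if 4x < 6y + 1 then defect" fires exactly when defection has the higher expected payoff against the opponent's observed behaviour.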
The next class are CliqueBots (once more a name from Orthonormal's post). CliqueBots try to identify whether their opponent is a copy of themselves and, if so, cooperate. If they identify a foreign opponent, they usually become pretty nasty.
P (Simple Identity ChecK, red75): Move 1: cooperate. Moves 2 to 57: tit-for-tat. Move 58 - defect. Move 59 - if moves 0-57 were coop-coop and 58 was defect-defect then coop else defect. Moves 60-100: if my move 59 was coop then tit-for-tat else defect.
Q (EvilAlliance, ArisKatsaris): Start with 5 defections. On later turns, if the opponent had cooperated at least once in the first 5 turns or defected ever since, defect; else cooperate.
The remaining strategies are Unclassified, since they have little in common. Here they come:
R (Probe & Punish, Eneasz): 1. Cooperate for a turn. 2. Cooperate for a second turn. If opponent cooperated on the second turn, continue to cooperate in every subsequent turn until the opponent defects. 3. If/when the opponent defects, defect for 12 consecutive rounds. 4. Return to 1, repeat.
S (Win-stay lose-shift, Nerzhin): It cooperates if and only if it and the opponent both played the same move on the last round.
The author of S says he's adopted the strategy from Nowak and Sigmund, Nature 364 pp. 56-58.
T (Tit for Two Tats, DataPacRat): As the name suggests.
U (RackBlockShooter, Nic_Smith): The code is available here.
This one is a real beast, with code more than twice as long as that of its competitors. It actually models the opponent's behaviour as a random process specified by two parameters: the probability of cooperation when the second strategy (which is the RackBlockShooter) also cooperates, and the probability of cooperation when RBS defects. This was acausal: the parameters depend on RBS's behaviour in the same turn, not in the preceding one. Moreover, RBS stores whole probability distributions of those parameters and correctly updates them in an orthodox Bayesian manner after each turn. To further complicate things, if RBS thinks that the opponent is itself, it cooperates in the CliqueBottish style.
And finally, there's a strategy included by default:
Z (Fully Random, default): Cooperate with 50% probability.
The Round-Robin Tournament
Each strategy played 21 matches, in which a theoretical maximum of 14,700 points could be won (21 matches × 100 turns × 7 points for defecting against a cooperator). A more reasonable threshold was 8,400 points: the award for any member of a pool consisting solely of nice strategies (i.e. those which never defect first), since mutual cooperation earns 4 points per turn. No strategy reached this optimum. The standings are given in the following table (columns from left: strategy identifier, matches won, matches drawn, matches lost, total points).
The winner is malthrin's unnamed strategy. Second and third place are occupied by Eugine_Nier's modified tit-for-tat and benelliott's Second Chance, respectively. Congratulations!
Here are results of all matches (view the picture in full size by right-clicking).
Is there something interesting to say about the results? Well, there were three strategies which performed worse than random: the DefectBot L, the CliqueBot Q (which effectively acted like a DefectBot) and the complex strategy U. As expected, very nasty strategies were unsuccessful. On the other hand, totally nice strategies weren't very successful either: no member of the set {A, D, K, R, S, T} got into the first five. Not defecting on the last turn was a mistake; defecting on the last two turns seemed an even better choice. As for the somewhat arbitrary classes, the average gains were as follows:
Another probably obvious point is that in a non-zero-sum game it doesn't matter too much how many opponents you beat in direct confrontation. The only undefeated strategies, L and Q, which won all their matches except one draw, finished last.
The Evolutionary[1] Tournament
The initial population consisted of 1,980 strategies, 90 of each kind. But that didn't last for long. The very nasty strategies started to die out rapidly, and by generation 6 there were no representatives of L, Q, U and Z. (I have to say I was very glad to see U extinct so early, because with a non-negligible number of Us in the pool the simulation ran very, very slowly.) By generation 40, M, N and P were also gone. Unfortunately, 100 generations were too few to see any interesting effects associated with environmental change as different strategies became extinct (but see the next section for a longer simulation); the strategies successful in the round-robin tournament were generally successful here as well.
The graph shows the history of subpopulations (TfT variants are blue, Avengers red, Lists green, CliqueBots purple, Unclassified strategies yellow, the DefectBot gray, and FullyRandom black). The final standings (ordered by number of copies in generation 100) are summarised in the following table.
The gold and silver medals are once again awarded to malthrin and Eugine_Nier. There is a change at the third place which is now occupied by Caerbannog's 20%-forgiving tit-for-tat.
One remarkable fact is the absolute failure of the CliqueBots, which were without doubt designed to succeed in the evolutionary tournament. The idea was, perhaps, that CliqueBots win by cooperating among themselves and punishing the others by defecting. That may be a working meta-strategy once you are dominant in the population, but for players starting as a small minority it is suicidal. It would be a pretty bad idea even if most contestants were CliqueBots: while TfT variants generally cooperate with each other regardless of slight differences between their source codes, CliqueBots lose points in defection-filled matches even against other CliqueBots, except those of their own species.
The Control Group
I ran the same experiment with readers of my blog, and thus I have a control group to match against the LW competitors. The comparison is done simply by putting all strategies together in an even larger tournament. Before showing the results, let me introduce the control group strategies.
C1: Let p be the relative frequency of opponent's cooperations (after my cooperation if I have cooperated last turn / after my defection if I have defected). If the relative frequency can't be calculated, let p = 1/2. Let Ec = 5p - 7 and Ed = 6p - 6. Cooperate with probability exp Ec / (exp Ec + exp Ed).
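A direct reading of C1 in Python (a sketch; the history layout and the 'C'/'D' encoding are my own assumptions):

```python
import math, random

def c1_move(my_hist, opp_hist):
    # p: opponent's cooperation frequency after my past plays of my last move
    if my_hist:
        last = my_hist[-1]
        follows = [opp_hist[i + 1] for i in range(len(my_hist) - 1)
                   if my_hist[i] == last]
    else:
        follows = []
    p = sum(f == 'C' for f in follows) / len(follows) if follows else 0.5
    ec, ed = 5 * p - 7, 6 * p - 6
    p_coop = math.exp(ec) / (math.exp(ec) + math.exp(ed))
    return 'C' if random.random() < p_coop else 'D'
```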
This strategy was in many respects similar to strategy U: mathematically elegant, but very unsuccessful.
C2: Defect if and only if the opponent had defected more than twice or if it defected on the first turn.
C3: If the opponent played the same move on the last two turns, play that. Else, play the opposite of what I played last turn. On turns 1 and 2 play defect, cooperate.
C4: Cooperate on the first three turns, defect on the last two. Else, cooperate if the opponent has cooperated at least 85% of the time.
C5: Cooperate on the first three turns, defect on the last turn. Else, cooperate with probability equal to the proportion of opponent's cooperations.
C6: Tit-for-tat, but cooperate after the opponent's first defection.
C7: On the first turn play randomly, on turns 2n and 2n+1 play what the opponent played on turn n.
C8: Defect if the opponent defected on two out of the three preceding turns, or if I defected on the last turn and cooperated on the turn before that, or if I haven't defected in the preceding 10 turns. On turns 20, 40, 60 and 80, always cooperate.
C9: Have two substrategies: 1. tit for two tats and 2. defect after opponent's first defection. Begin with 1. After each tenth turn change the substrategy if the gain in the last ten turns was between 16 and 34 points. If the substrategy is changed, reset all defection counters.
C10: Turns 1 and 2: cooperate. Turns 3-29: tit for tat. Turns 30-84: defect if the opponent defected more than seven times, else tit for tat. Turn 85: defect. Turns 86 and 87: defect if the opponent defected more than seven times, else cooperate. Turns 88-97: If the opponent defected on turn 87 or more than four times during the match, defect. Else cooperate. Turns 98-100: defect.
C11: Turn 1: cooperate. Turn 2: tit for tat. Turn 100: defect. Else, if the opponent defected on the last turn and (I cooperated, or the opponent also defected on the last but one turn), defect; else cooperate.
Two round-robin tournament standings are given in the following tables (two tournaments were played to illustrate the rôle of randomness).
You can see how the new strategies changed the standings. The original winner, I, now hovers around a rather unremarkable 15th position, 10th among LW strategies. The average score for the LW strategies (calculated from the second tournament) is 9702.9; for the control group, only 9296.9. How much this means that reading LW improves game-theoretical skills I leave to the readers' judgement.
In the evolutionary setting the strategy I recovered its leading position. Here are the 100th-generation populations and the graph:
(In the graph, LW strategies are green and the control group strategies are red.)
It is relatively clear that the situation after a hundred generations is not an equilibrium, and further changes would ensue if the simulation continued. Therefore I have run one more simulation, this time lasting a thousand generations. This was more interesting, since more strategies died out. The first was C10, which was gone after generation 125. Around generation 300 there were only four surviving strategies from the control group (two of them moribund) and twelve from the original LW pool (two of them seriously endangered, too). The leaders were I (with population 539), C4 (432) and C11 (183). The last one went through a steady slow decline and was soon overtaken by O, which assumed the third position around generation 420. At generation 600, only three strategies remained in the pool: I (932), C4 (692) and O (374). Not long before that, the populations of I and C4 peaked and started to decline, while O continued to rise. In direct confrontation, O always beats both I and C4 397:390, which proved decisive once the other strategies were eliminated. After the thousandth generation, there were 1374 copies of O and only 625 copies of both its competitors combined.
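For the curious, the 397:390 score is exactly what the per-turn payoffs assumed earlier (4 for mutual cooperation, 7/0 for unilateral defection, 1 for mutual defection) predict for a three-end-defector against a two-end-defector:

```latex
% turns 1-97: mutual cooperation; turn 98: O defects alone;
% turns 99-100: mutual defection.
O:\ 97 \cdot 4 + 7 + 1 + 1 = 397,
\qquad
I:\ 97 \cdot 4 + 0 + 1 + 1 = 390.
```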
Final Remarks
I have organised the prisoner's dilemma tournament mainly for fun, but hopefully the results can illustrate a few facts about non-zero-sum games. At least in the final simulation we can see how short-term success needn't imply long-term survival. The failure of the CliqueBots shows that it isn't prudent to be too restrictive about whom one cooperates with. There is one more lesson that I have learned: the strategies which I considered most beautiful (U and C1) played poorly and were the first to be eliminated from the pool. Both U and C1 tried to experiment with the opponent's behaviour and use the results to construct a working model thereof. But that didn't work in this setting: while those strategies were losing points experimenting, dumb mechanical tit-for-tats were maximising their gain. There are situations when the cost of obtaining knowledge is higher than the knowledge is worth, and this was one of those situations. (Of course, both discussed strategies could have used the knowledge they had more efficiently, but that wouldn't have helped significantly.)
I have deliberately run fixed-length matches to make the competition more interesting. Random-length iterated PDs are the domain of tit-for-tats and it would be hard to devise a better strategy; in the fixed-length case the players had to think about the correct turn on which to defect first when playing a nice strategy, and the answer isn't trivial. The most successful strategies started defecting on the 98th or 99th turn.
The question of whether reading LessWrong improves performance in similar games remains unresolved, as the evidence gathered in the tournament was very weak. The difference between the LW and control groups wasn't big. Another problem is that I don't know much about the control group authors (except one of them, whom I know personally); it's not clear what part of the population the control group represents. The judgement is further complicated by the apparent fact that not all participants were trying to win: the nameless author of A explicitly declared a different goal (namely, to punish defectorish strategies), and it's likely that L was intended to signal its author's cynicism (I am specifically thinking of one of the four DefectBot's originators) rather than to succeed in the tournament. Furthermore, many strategies were sent by new users, and only 11 out of 26 strategy authors have karma over 500, which means that the LW group doesn't faithfully represent LessWrong's active participants.
[1] Evolutionary may be a misnomer. The tournament models natural selection, but no changes, and therefore no evolution, occur. I have kept the designation to maintain consistency with the earlier post.