Observe the payoff matrix below (the unit of reward? Cookies.). Each player wants to play 'A', but only so long as the two players play different moves.

              Blue: A    Blue: B
    Red: A    (0, 0)     (2, 1)
    Red: B    (1, 2)     (0, 0)

(Payoffs are written as (Red's cookies, Blue's cookies).)

Suppose that Red got to move first. There are some games where moving first is terrible - take Rock Paper Scissors for example. But in this game, moving first is great, because you get to narrow down your opponent's options! If Red goes first, Red picks 'A', and then Blue has to pick 'B' to get a cookie.

This is basically kidnapping. Red has taken all three cookies hostage, and nobody gets any cookies unless Blue agrees to Red's demands for two cookies. Whoever gets to move first plays the kidnapper, and the other player has to decide whether to accede to their ransom demand in exchange for a cookie.


What if neither player gets to move before the other, but instead they have their moves revealed at the same time?

Pre-Move Chat:

Red: "I'm going to pick A, you'd better pick B."

Blue: "I don't care what you pick, I'm picking A. You can pick A too if you really want to get 0 cookies."

Red: "Okay I'm really seriously going to pick A. Please pick B."

Blue: "Nah, don't think so. I'll just pick A. You should just pick B."

And so on. They are now playing a game of Chicken. Whoever swerves first is worse off, but if neither of them gives in, they crash into each other and die and get no cookies.


So, The Question: is it better to play A, or to play B?

This is definitely a trick question, but it can't be too trickish because at some point Red and Blue will have to figure out what to do. So why is it a trick question?

Because this is a two-player game, and whether it's good to play A or not depends on what your opponent will do.


A thought experiment: suppose we threw a party where you could only get dessert (cookies!) by playing this game. At the start, people are unfamiliar with the game, but they recognize that A has higher payoffs than B, so they all pick A all the time. But alas! When both people pick A, neither gets anything, so no cookies are distributed. We decide that everyone can play as much as they want until we run out of cookies.

Quite soon, one kind soul decides that they will play B, even though it has a lower payoff. A new round of games is begun, and each person gets a turn to play against our kind altruist. Soon, every other person has won their game, and they have 2 cookies each, while our selfless altruist has just one cookie per match they played. So, er, 11 or so cookies?

Many of the other party-goers are enlightened by this example. They, too, want to be selfless and altruistic so that they can acquire 11 cookies / win at kidnapping. But a funny thing happens as each additional person plays B - the people playing A win two more cookies per round (one round is everyone getting to play everyone else once), and the people playing B win one fewer cookie, since nobody gets cookies when both play B either. Eventually, there are eight people playing A and four people playing B, and all of them nom 8 cookies per round.
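A quick sanity check of that equilibrium, as a Python sketch (the 12-guest party size is an assumption, read off the eight-and-four split above):

def cookies_per_round(num_b, num_guests=12):
    # One round = everyone plays everyone else once.
    num_a = num_guests - num_b
    a_cookies = 2 * num_b   # an A-player wins 2 cookies off each B-player
    b_cookies = 1 * num_a   # a B-player wins 1 cookie off each A-player
    return a_cookies, b_cookies

for num_b in range(1, 12):
    print(num_b, cookies_per_round(num_b))
# num_b = 4 prints (8, 8): both strategies earn 8 cookies per round, so nobody switches.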


It's inevitable that the people playing B eventually get the same number of cookies as the people playing A - if there were a cookie imbalance, then people would switch to the better strategy until cookies were balanced again. Playing A has a higher payoff, but all that really means is that there are eight people playing A and only four playing B. It's like B has an ecological niche, and that niche is only of a certain size.

What does the party case say about what Red and Blue should do when playing a one-shot game? The ratios of players turn into probabilities: if you're less than 67% sure the other person will play A, you should play A. If you're more than 67% sure, you should play B. This plan only works for situations similar to drawing an opponent out of a pool of deterministic players, though.
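Here's where the 67% comes from, spelled out as a sketch (p is the probability that the opponent plays A):

from fractions import Fraction

def ev_playing_a(p):
    return 2 * (1 - p)   # A only scores (2 cookies) when the opponent plays B

def ev_playing_b(p):
    return 1 * p         # B only scores (1 cookie) when the opponent plays A

# Setting 2*(1 - p) == p gives p == 2/3: below 67%, play A; above it, play B.
p = Fraction(2, 3)
assert ev_playing_a(p) == ev_playing_b(p)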


Stage two of the problem: what if we allow players access to each others' source code?

While you can still have A-players and B-players, you can now have a third strategy, which is to play B against A-players and play A against B-players. This strategy will have a niche size in between playing A and playing B.
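A sketch of that third strategy; classify_source is a hypothetical helper (actually recognizing A-players and B-players from source code is the hard part), and the unknown-opponent default is my own assumption:

def predictor_bot(their_source):
    kind = classify_source(their_source)   # hypothetical helper: returns 'A', 'B', or None
    if kind == 'A':
        return 'B'   # settle for 1 cookie rather than the 0 of a head-on collision
    if kind == 'B':
        return 'A'   # take the full 2-cookie payoff
    return 'B'       # assumed default against anything unrecognized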

What's really great about reading source code, though, is that running into a copy of yourself no longer means duplicate moves and no cookies. The best "A-players" and "B-players" now choose moves against their copies by flipping coins, so that half the time they get at least one cookie. Flipping a coin against a copy of yourself averages 3/4 of a cookie, which is almost good enough to put B-players out of business. In fact, if we'd chosen our payoff matrix to have a bigger reward for playing A, we actually could put B-players out of business. Fun question: is it possible to decrease the total number of cookies won by increasing the reward for playing A?
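Checking the 3/4 figure, with the reward for playing A left as a parameter so the fun question can be poked at (the generalization to R is my assumption; R = 2 in the matrix above):

from fractions import Fraction

def mirror_ev(R=2):
    # Two coin-flipping copies meet: outcomes AA, AB, BA, BB, each with
    # probability 1/4, paying me 0, R, 1, 0 cookies respectively.
    return Fraction(0 + R + 1 + 0, 4)

print(mirror_ev(2))   # 3/4 of a cookie, as claimed
print(mirror_ev(3))   # 1 cookie - mirror-play now matches what a B-player earns off an A-player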

An interesting issue is how this modification changes the advice for the one-shot game. Our advice against simpler opponents was basically the "predictor" strategy, but that strategy is now in equilibrium with the other two! Good advice now is more like a meta-strategy: if the opponent is likely to be an A-player or a B-player, be a predictor; if the opponent is likely to be a predictor, be an A-player. Now that we've been through this cycle before, it should be clearer that this "advice" is really a new strategy that will be introduced when we take the game one meta-level up. The effect on the game is really to introduce gradations of players, where some play A more often and some play B more often, but the populations can be balanced such that each player gets the same average reward.


An interesting facet of a competition between predictors is what we might call "stupidity envy" (See ASP). If we use the straightforward algorithm for our predictors (simulate what the opponent will do, then choose the best strategy), then a dumb predictor cannot predict the move of a smarter predictor. This is because the smarter predictor is predicting the dumb one, and you can't predict yourself in less time than you take to run. So the dumber predictor has to use some kind of default move. If its default move is A, then the smarter predictor has no good choice but to take B, and the dumber predictor wins.
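The straightforward algorithm, sketched; simulate is a hypothetical step-limited interpreter that returns None when it runs out of steps, and the step budget stands in for how smart the predictor is:

def naive_predictor(their_source, my_source, step_budget, default='A'):
    # Simulate the opponent looking at our own source, up to step_budget steps.
    prediction = simulate(their_source, my_source, max_steps=step_budget)  # hypothetical helper
    if prediction is None:
        # The opponent takes longer to run than we can afford - for instance,
        # it is busy simulating *us*. Fall back on the default move.
        return default
    return 'B' if prediction == 'A' else 'A'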

It's like the dumber predictor has gotten to move first. Being dumb / moving first isn't always good - imagine having to move first in rock paper scissors - but in games where moving first is better, and even a dumb predictor can see why, it's better to be the dumber predictor.

On our other axis of smartness, though, the "meta-level," more meta usually produces better head-to-head results - yet the humble A-player gets the best results of all. It's only the fact that A-players do poorly against other A-players that allows a diverse ecology on the B-playing side of the spectrum.
[-]V_V

You might want to analyze this game according to standard game theory before trying to reinvent the wheel (and risking making it square).
In particular, when your proposed strategy involves guessing what the other players do, and this doesn't converge when the other players do the same to you, it is usually a bad sign (sometimes it's not possible to do anything better, but sometimes it is).

Let's start with the standard version:

There are two pure strategy Nash equilibria: (A, B) and (B, A). Both are Pareto-optimal, but they benefit the player who played A more than the other, thus they are "unfair", in the sense that they yield asymmetric payoff in a symmetric game.
There is also a mixed strategy Nash equilibrium which is symmetric but sub-optimal. The game is a bargaining problem and a coordination game as well. In particular, it is a variation of the battle of the sexes game.
Unfortunately, there doesn't seem to be any truly satisfactory solution to this game. There is, however, a correlated equilibrium solution. If both players have access to a public random bit generator, they can obtain a Pareto-optimal symmetric payoff equilibrium, assuming that they can communicate to coordinate a strategy (they don't have to make a contract).
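A minimal sketch of that correlated equilibrium, assuming a shared public random bit and a convention agreed on in advance:

import random

public_bit = random.randint(0, 1)   # both players observe the same bit
red = 'A' if public_bit == 0 else 'B'
blue = 'B' if public_bit == 0 else 'A'
# The players alternate between the two pure equilibria (A, B) and (B, A), so each
# expects (2 + 1) / 2 = 1.5 cookies: symmetric, Pareto-optimal, and neither player
# gains by deviating once the bit is public.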

Now, let's consider the program-swap variant of the game.

First of all, a remark on the wording:

what if we allow players access to each others' source code?

Players don't have access to each others' source code. Players submit 'bots': programs which have access to each others' source code. The submitted programs are (pure) strategies, not players. I think it is important to always make the distinction clear, otherwise there is a risk of confusion between the decision makers and the decisions themselves, which may lead to flawed proposed solutions.

In the program-swap variant of this game we (players) can achieve something equivalent to the correlated equilibrium by using clique-like mixed strategies:

Each player flips a coin and submits either CliqueBot_0 or CliqueBot_1, which are defined as:
X = 0   # this bot's clique bit, baked into its source: X = 0 for CliqueBot_0, 1 for CliqueBot_1

def cliquebot(other_bot_source, ID):
    # ID: the player's role id (0 or 1) - assumed to be supplied by the
    # execution environment (see the discussion below).
    if not looks_like_cliquebot(other_bot_source):   # source-pattern check, assumed helper
        return 'A'
    Y = extract_clique_bit(other_bot_source)         # the opponent's X, assumed helper
    return 'A' if X ^ Y ^ ID else 'B'

This achieves a Pareto-optimal symmetric equilibrium against itself and, being a clique, it also achieves stability: defectors from the clique are immediately punished. It also achieves maximum payoff against B-bots and PredictorBots, though it performs poorly against A-bots.

The main issue with this strategy is, of course, that there are infinitely many cliques, none of which cooperates with the others. This reduces to a "choosing sides" coordination game, which is easier than the original game: you just need a one-way communication channel between the players to achieve coordination. Even if there is no communication channel, it might be possible to achieve coordination using a prior, without convergence problems.

You might want to analyze this game according to standard game theory before trying to reinvent the wheel (and risking to make it square).

Fair enough.

Players don't have access to each others' source code.

Well... This article is tagged decision_theory, not game_theory. The goal is not merely to explore solutions to this game, the goal is to explore solution-finding strategies to this game, or at least lay the foundations for that.

I think your CliqueBots need the insight behind Emile's hashMe proposal - they each need a unique ID. Otherwise, when CliqueBot_0 plays a copy, "X xor Y xor ID" is 0, so both play B, so they both lose.

[-]V_V

Well... This article is tagged decision_theory, not game_theory. The goal is not merely to explore solutions to this game, the goal is to explore solution-finding strategies to this game, or at least lay the foundations for that.

Isn't that exactly the topic of game theory?

I think your CliqueBots need the insight behind Emile's hashMe proposal - they each need a unique ID. Otherwise, when CliqueBot_0 plays a copy, "X xor Y xor ID" is 0, so both play B, so they both lose.

Yes, I assumed that the execution environment provides the bots with a binary role ID as an additional (implicit) parameter. If that's a problem you can use a more sophisticated symmetry-breaking scheme, along the lines of Emile's proposal, but I think it is better to maintain cliquing, which is missing from Emile's proposal.

Isn't that exactly the topic of game theory?

Well, you seem to be more expert than I am, so I can just ask you :D Given a small, two-player payoff matrix, what general process outputs the correct move for player 1?

[-]V_V

I'm not really that expert, but I can try to answer the question anyway.

The short answer is to implement game theory.

Of course, this is easier said than done.
Under the assumption that the other player is at least as rational as you are, you generally want a decision procedure that chooses a strategy which is part of a Nash equilibrium (under mild assumptions, at least one Nash equilibrium always exists). If there are multiple Nash equilibria, as is often the case, you want your decision procedure to seek a refinement and hope that the other player does the same.
Ultimately, for some games there is no single satisfactory solution. It could be that this happens because game theory is incomplete, or because these games are intrinsically unsolvable dilemmas. In these cases, you might want to fall back on a heuristic (such as randomization, or trying to guess the other player's decision).

Hm. So I think I reinvented the wheel to moderate success, then. In the blind, no-communication case, you play the bargaining equilibrium. With communication via source code, though, I didn't make my wheel - following the bargaining equilibrium among strategies - very round. This is because you can use your source codes to send two random numbers, which lets you cooperate with the other player. But there are many different possible ways of doing this, each corresponding to a different equilibrium.

To have any chance of cooperating, and thus beating my slightly non-round wheel, you need a prior over opponents to let you narrow down the search for cooperation schemes (probably "pick the simplest"), and you need to randomize over the multiple remaining equilibria. And no cliqueishness here, since we're trying to play blind.

The best "A-players" and "B-players" now choose moves against their copies by flipping coins, so that half the time they get at least one cookie.

Waidaminute - those aren't "A-players" or "B-players" any more, but variants. And if two "A-player variants" vary in a slightly different way, will they recognize each other as such? Doing so is not a trivial problem!

Anyway, my algorithm would be something like:

"Calculate a hash of my code, hashMe, and a hash of my opponent, hashYou. If (hashMe + hashYou) is odd, the player with the highest hash shall be the Chosen One, otherwise it's the player with the lowest hash (if the hashes are identical, find some other unambiguous higher/lower criteria both players share, player IDs or something or lexicographic ordering of source code, if none exists, flip a coin). If I'm the Chosen One, play A, otherwise play B. Also, random string of comments (to ensure different instances of this algorithm have different hash functions)."

This should have an expected value 1.5 against predictors (it can be simulated safely without recursion problems), 0.5 against A-players, 1 against B-players and 0.75 against coin-flippers. And more importantly, it's resistant to exploitation/blackmail, and gets the max utility of 1.5 against itself.

The only problem with this algorithm is that it's not good at coordinating with similar algorithms that have a different "Chosen One" criterion (e.g. "look at the parity of the (hashMe+hashYou)th digit of pi"). That could be fixed, but is considerably more complicated.
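A minimal sketch of the scheme above, assuming SHA-256 as the hash and a coin flip as the last-resort tiebreaker:

import hashlib
import random

def hash_me_bot(my_source, their_source):
    hash_me = int(hashlib.sha256(my_source.encode()).hexdigest(), 16)
    hash_you = int(hashlib.sha256(their_source.encode()).hexdigest(), 16)
    if hash_me == hash_you:
        return random.choice(['A', 'B'])   # identical hashes: no shared tiebreaker left
    # Parity of the sum decides whether the higher or the lower hash is the Chosen One.
    chosen = max(hash_me, hash_you) if (hash_me + hash_you) % 2 else min(hash_me, hash_you)
    return 'A' if hash_me == chosen else 'B'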

Calculate a hash of my code, hashMe, and a hash of my opponent, hashYou. If (hashMe + hashYou) is odd, the player with the highest hash shall be the Chosen One, otherwise it's the player with the lowest hash (if the hashes are identical, find some other unambiguous higher/lower criteria both players share, player IDs or something or lexicographic ordering of source code, if none exists, flip a coin).

This is way too complicated. Instead of ending with "flip a coin" why not just start with it?

Right, so the TDT way to do this would be to diagram the causal structure and only accept as "copies" programs with the same structure.

hashMe does seem to be a good way to access the Pareto optimum that V_V mentions.

[-]V_V

... vary in a slightly different way, will they recognize each other as such?

Being strict in recognizing the other is a feature, not a bug. You want to obtain a stable Nash equilibrium.

If you don't recognize your opponent as a copy of yourself because he's written in C++ and not Java, or because he uses iteration and not recursion, then you're acting sub-optimally (unless those things matter of course!).

[-]V_V

Don't conflate the player with the bot.

In any cliquing strategy, you (the player) want to submit a bot that always defects against a bot that is not functionally equivalent to itself: this is crucial to guarantee the stability of the Nash equilibrium that you hope to reach.

You also want your bot to cooperate with any bot it can prove to be functionally equivalent to itself, and to always compute an answer in a finite time. Due to Rice's theorem, functional equivalence is an undecidable property, therefore you need to use a stronger, decidable equivalence relation that underestimates, but never overestimates, functional equivalence.
Textual equality is the most obvious of such relations. You can invent many weaker equivalence relations that still guarantee functional equivalence, but they would add complexity to your bot and presumably make coordination with the other player more difficult, therefore it is not an obvious choice to use any of them.

Once you have chosen to use a cliquing strategy, you still have to choose between (infinitely) many cliques, therefore you face a "choosing sides" coordination game with the other player. This is a non-trivial problem, but fortunately it is easier than the original game.

[-]gjm

It's not directly relevant, but here's my favourite fact about Chicken, which I think I found in a book by Steven Brams: If you play a game of Chicken against God (or Omega, or any other entity able to read your mind or otherwise predict your behaviour), God loses. (Because all you have to do is decide not to flinch, and your omniscient opponent knows you will not flinch, and It then has no better option than to flinch and let you win.)

Of course the correct next inference is that Omega either doesn't play Chicken, or cheats (at which point It is in fact no longer playing Chicken but some other game). Seems reasonable enough.

[-][anonymous]

This makes me curious what happens when two algorithms, both of which have access to the other's source code, play Chicken.

[-]gjm

Well, obviously the result depends on what the algorithms actually do. If you're going to be playing against an opponent with access to your source code, you'd quite like to be a nice simple "always-stand-firm" program, which even a very stupid opponent can prove will not flinch. But of course that leads to all algorithms doing that, and then everybody dies.

It's not clear that this is avoidable. While it's interesting to think in the abstract about programs analysing one another's code, I think that in practice programs clever enough to do anything nontrivial with one another's code are almost inevitably much too difficult to analyse, and unless there are big disparities in available computational resources, simulation isn't much help either. So we're left with doing trivial things with one another's code: e.g., perhaps you could say "if facing an exact copy of myself, flinch with probability p; else stand firm unconditionally", and anything that makes a halfway-serious attempt at analysis will see that it isn't going to win. (What's the best value of p? Depends on the exact payoffs. With classical Chicken, where "neither flinch" => "both die", it's probably very close to 1. But then with classical Chicken the best meta-algorithm is not to play the damn game in the first place.)
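That last meta-algorithm as code (a sketch; textual equality as the copy test and the value of p are the assumptions discussed above):

import random

def chicken_bot(my_source, their_source, p=0.99):
    if their_source == my_source:
        # Against an exact copy, flinch with probability p to avoid the crash.
        return 'flinch' if random.random() < p else 'stand firm'
    # Against anything else, stand firm - and be simple enough that even a
    # stupid opponent can prove we will not flinch.
    return 'stand firm'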

If you play a game of Chicken against God, God loses.

Does he?

... and great letters of fire appeared in the Sky, visible to half a continent, while a great booming voice read them out loud: "I, The Almighty, Am About To Play Chicken Against gjm. I Will Not Flinch, And Hereby Swear That Should I Do So, I Shall Abdicate My Reign Over The Universe In Favour Of Satan, The Deceiver. I Also Swear That I Have Not And Will Not Read gjm's Mind Until Either The Game Is Over Or He Has Died And Must Face Judgement."

[-]gjm

Hmm. Does God even have the option of not reading my mind?

Of course the answer is ill-defined for multiple reasons. My feeling is that the standard LW notion of Omega says that It has mind-reading (or simulating, or ...) apparatus that It can use or not as It pleases, whereas God simply, automatically, ineffably knows everything that can be known. If so, then when I play them at Chicken Omega wins and God loses.

Also: it's part of the definition of the game of Chicken that the neither-flinches outcome is worst for both players. So if God's still playing Chicken even after the precommitment above, then the outcome if I remain unintimidated must be even worse for him than abdicating in favour of Satan. And -- if he truly isn't reading my mind -- that's got to be a real possibility. In which case, for him to play this game at all he'd have to be incredibly stupid. Which isn't supposed to be one of God's attributes.

If so, then when I play them at Chicken Omega wins and God loses.

Further, if I understand things correctly, this means there are NO games at which God systematically wins.

Note: If you understand up to Stage 1 of this post, you now understand how allele frequencies are determined in a population that has random mating and satisfies a few other biology-related assumptions. The invisible "Players" choose which allele it is better to be, and they will settle onto an equilibrium as Manfred describes.

The above post contains everything you need to do some intro-level population genetics problems. If you learned something from the above post and would now like to test your shiny new genetics skills, here's a problem set:

Suppose A and B are the only two alleles in a diploid genome, and AA has fitness 2, BB has fitness 1, and AB has fitness 3.

Problem: What proportion of alleles are A? What proportion of alleles are B?


Bonus question: What will the final distribution of AA, AB, and BB genotypes look like?

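One way to check an answer (a sketch; this is not necessarily the original hidden solution): at equilibrium, each allele must have the same expected fitness when paired with a random allele from the pool, exactly as in the party game. The genotype line assumes random mating, before selection acts.

from fractions import Fraction

# Marginal fitness of each allele against a random partner allele:
#   fitness(A) = 2p + 3q,  fitness(B) = 3p + 1q,  with p + q = 1.
# Setting them equal: 2p + 3(1 - p) = 3p + (1 - p)  =>  p = 2/3.
p = Fraction(2, 3)   # equilibrium frequency of A
q = 1 - p            # equilibrium frequency of B
assert 2 * p + 3 * q == 3 * p + 1 * q
print(p, q)                      # 2/3 and 1/3
print(p * p, 2 * p * q, q * q)   # AA = 4/9, AB = 4/9, BB = 1/9 under random mating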

In practice for biologists, you'd be doing this problem backwards - looking at the percent of the population that expresses each phenotype, and using that to infer allele frequencies and relative fitness. If I tell you that 25% of a population has sickle cell, if you know that sickle cell disease comes from a homozygous recessive allele, and if you are willing to assume various things, that's all you need to work backwards all the way to the payoff matrix.

In practice, of course, mating isn't random. If AB is inferior to both AA and BB, then there is an incentive for A carriers to avoid B carriers, and this is a trend towards speciation. If AB is superior to both AA and BB, there is an incentive for heterozygosity. Also in practice, the allele frequencies are constantly shifting so you are following a moving target, and sometimes sampling is biased (for example, carriers of low-fitness alleles die before you can measure them), etc.

This strikes me as yet another example of the value of being able to precommit in Newcomblike problems. The dumb predictor is playing the role of the box-chooser; the smarter predictor is Omega.

At the meta-level, it seems to me like I ought to use a fair coin flip to decide who gets A and who gets B, and that I further ought to punish any defectors who don't go along with this and try to just pick A and kidnap the cookies.

But this only "wins" to the extent that it reduces the ratio of non-negotiating-A-players, so it only works in the iterated case...