Non-cooperative game theory, as exemplified by the Prisoner’s Dilemma and commonly referred to as simply "game theory", is well known in this community. But cooperative game theory seems to be much less well known. Personally, I had barely heard of it until a few weeks ago. Here’s my attempt to give a taste of what cooperative game theory is about, so you can decide whether it might be worth your while to learn more about it.

The example I’ll use is the fair division of black-hole negentropy. It seems likely that for an advanced civilization, the main constraining resource in the universe is negentropy. Every useful activity increases entropy, and since the entropy of the universe as a whole never decreases, the excess entropy produced by civilization has to be dumped somewhere. A black hole is the only physical system we know whose entropy grows quadratically with its mass, which makes it ideal as an entropy dump. (See http://weidai.com/black-holes.txt where I go into a bit more detail about this idea.)

Let’s say there is a civilization consisting of a number of individuals, each the owner of some matter with mass m_i. They know that their civilization can’t produce more than (∑ m_i)^2 bits of total entropy over its entire history, and the only way to reach that maximum is for every individual to cooperate and eventually contribute his or her matter into a common black hole. A natural question arises: what is a fair division of the (∑ m_i)^2 bits of negentropy among the individual matter owners?

Fortunately, Cooperative Game Theory provides a solution, known as the Shapley Value. There are other proposed solutions, but the Shapley Value is well accepted due to its desirable properties such as “symmetry” and “additivity”. Instead of going into the theory, I’ll just show you how it works. The idea is, we take a sequence of players, and consider the marginal contribution of each player to the total value as he or she joins the coalition in that sequence. Each player is given an allocation equal to his or her average marginal contribution over all possible sequences.
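
In code, the procedure is a direct loop over orderings. Here is a minimal Python sketch (the names `shapley` and `v` are mine, purely for illustration); `v` maps a coalition to the value it can produce on its own:

```python
from itertools import permutations

def shapley(players, v):
    """Average each player's marginal contribution to v
    over all orderings in which the coalition can form."""
    totals = {p: 0 for p in players}
    orderings = list(permutations(players))
    for ordering in orderings:
        coalition = set()
        prev = v(coalition)
        for p in ordering:
            coalition.add(p)
            curr = v(coalition)
            totals[p] += curr - prev
            prev = curr
    return {p: t / len(orderings) for p, t in totals.items()}
```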

So in the black-hole negentropy game, suppose there are two players, Alice and Bob, with masses A and B. There are then two possible sequences, {Alice, Bob} and {Bob, Alice}. In {Alice, Bob}, Alice’s marginal contribution is just A^2. When Bob joins, the total value becomes (A+B)^2, so his marginal contribution is (A+B)^2 - A^2 = B^2 + 2AB. Similarly, in {Bob, Alice}, Bob’s MC is B^2, and Alice’s is A^2 + 2AB. Alice’s average marginal contribution, and hence her allocation, is therefore A^2 + AB, and Bob’s is B^2 + AB.
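
Running the sketch above on this two-player game with symbolic masses (a sketch using sympy, again just for illustration) reproduces these allocations:

```python
import sympy as sp

A, B = sp.symbols("A B", positive=True)
masses = {"Alice": A, "Bob": B}
# v(S) = (total mass in coalition S)^2
v = lambda coalition: sum(masses[p] for p in coalition) ** 2

for player, share in shapley(list(masses), v).items():
    print(player, "=", sp.expand(share))
# Expected output:
# Alice = A**2 + A*B
# Bob = A*B + B**2
```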

What happens when there are N players? The math is not hard to work out, and the result is that player i gets an allocation equal to m_i (m_1 + m_2 + … + m_N). Seems fair, right?
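
A quick numeric sanity check of this claim, reusing the sketch above (the masses are made up):

```python
import random

masses = {i: random.uniform(1, 10) for i in range(6)}
v = lambda coalition: sum(masses[p] for p in coalition) ** 2

total = sum(masses.values())
for i, share in shapley(list(masses), v).items():
    # share should equal m_i * (m_1 + m_2 + ... + m_N)
    assert abs(share - masses[i] * total) < 1e-6
```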

ETA: At this point, the interested reader can pursue two paths to additional knowledge. You can learn more about the rest of cooperative game theory, or compare other approaches to the problem of fair division, for example welfarist and voting-based. Unfortunately, I don't know of a good online resource or textbook for systematically learning cooperative game theory. If anyone does, please leave a comment. For the latter path, a good book is Hervé Moulin's Fair Division and Collective Welfare, which includes a detailed discussion of the Shapley Value in chapter 5.

ETA2: I found that a website of Martin J. Osborne and Ariel Rubinstein offers their game theory textbook for free (after registration), and it contains several chapters on cooperative game theory. The site also has several other books that might be of relevance to this community. A more comprehensive textbook on cooperative game theory seems to be Introduction to the Theory of Cooperative Games. A good reference is Handbook of Game Theory with Economic Applications.


I was really confused by this article, so I looked up Shapley value and re-thought the example. This is repetitive, but maybe someone will find it useful.

Society wants to dump matter in a black hole. It (society as a whole) gets a payoff of (total matter dumped)^2 for doing so. If Alice agrees to dump 3 kg, the payoff is 9. Then Bob agrees to dump 2 kg, and the total societal payoff jumps to 25.

In some sense Alice contributed 9 and Bob 16, but it doesn't seem fair to reward them that way. We don't want order of contribution to matter that much. So you imagine doing it in the opposite order. In that case, Alice's marginal contribution is 21 and Bob's is 4.

Then you average over all possible orderings. In this example, Alice gets (9+21)/2 = 15, and Bob gets (16 + 4)/2 = 10.
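
The same arithmetic as a tiny self-contained check (here v(S) = (total mass in S)^2, so v({}) = 0, v({Alice}) = 9, v({Bob}) = 4, v({Alice, Bob}) = 25):

```python
alice = (9 + (25 - 4)) / 2   # joins first: 9; joins second: 25 - 4 = 21
bob   = ((25 - 9) + 4) / 2   # joins second: 25 - 9 = 16; joins first: 4
print(alice, bob)            # 15.0 10.0
```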

Right, except it's not that society wants to dump matter in black holes. We just compute the entropy of a black hole with all the mass dumped into it because that's an upper bound on how much negentropy society originally had.

But cooperative game theory seems to be much less well known.

I found a paper that explains why:

So why, despite these advantages, is cooperative game theory currently dominated by noncooperative theory as applied to economics? Perhaps one answer is that the characteristic function, by assumption, rules out externalities—situations in which a coalition’s payoff depends on what other coalitions are doing. Yet, interactions between coalitions are at the very heart of economics, e.g., bargaining between unions and management, competition between companies, and trade between nations.

One, that is an interesting paper and I am considering hunting for other papers which cite it.

Two, I greatly appreciate that you came back to a 10y post to provide updates. Strong upvote.

I would also advise that you copy from Wikipedia some of the basic criteria satisfied (only) by Shapley.

The black hole example may be the obvious one to a certain kind of mind, but one should perhaps point out that this is also a unique way of dividing value that could apply to e.g. a dynamically formed corporation with many internal projects in the absence of top-down management division, and that if you lacked this unique way of dividing value, such a distributed corporation might fail due to disputes over how to divide value in which every agent thinks of themselves as the "critical final ingredient" which creates most of the value and deserves most of the money.

I instantly promoted this post based on its obvious extreme importance, but given the current rating, you might need further exposition to get beyond the "obvious to Eliezer" level.

That applies to a complex organization, where contributions are dissimilar, but my (admittedly limited) understanding of the physics involved suggests that black holes are not picky eaters. Mass is mass.

Given that the mass of the hole itself can be measured, as well as the mass of any given contribution, wouldn't it be possible to treat a domesticated singularity as a publicly-traded corporation, awarding shares in the total negentropy extracted proportional to the initial contributions?

That applies to a complex organization, where contributions are dissimilar, but my (admittedly limited) understanding of the physics involved suggests that black holes are not picky eaters. Mass is mass.

See No-hair theorem.

Personally I like the black hole example. At least it's realistic by science fiction standards, which is more than you can say for the omniscient deities other people have used for their scenarios :-)

The procedure for "fair division" is presented as more or less arbitrary. At least some rationale should be given explicitly. Black holes weaken the post; I think it'll be better if you take them out right away and rename the post (links won't break).

You're the second person not to like the black hole example. It happens to be something I was thinking about, and found a solution for in cooperative game theory. Maybe it's not ideal for pedagogy, but it does have the advantage of being a "real-world" example.

For other examples, as well as the theoretical considerations that led to Shapley Value, see chapter 5 of the book I linked to, or the Wikipedia entry on Shapley Value.

it does have the advantage of being a "real-world" example.

Harvesting negentropy by dropping mass into black holes is not a real world example, it is a far future sci-fi example.

I'd have to disagree with that. It's a far-future science example and an obvious default one given the current model of physics for anyone who understands thermodynamics. Okay, it may take some explaining before it's equally obvious to everyone, but it is, in fact, obvious. Especially if you want value that scales nonlinearly with material. I find it difficult to put into mere words just how obvious this is - it far exceeds the obviousness of, say, using gold to talk about inflation, when the choice of those particular atoms is completely arbitrary in a grand physical sense.

Yes, in some sense it's an obvious default, which seems to go largely unrecognized (even by those who understand thermodynamics), maybe due to a bias towards thinking that value scales linearly with material. But I don't want to claim too much. There are a number of caveats I didn't mention in my post:

  • Some space-time geometries may have better entropy dumps than black holes. In an open universe without dark energy, for example, the cosmological background temperature goes to 0 and negentropy is essentially infinite.
  • Why make negentropy the object of fair division, instead of value created from using up negentropy, which might not be a linear function of it?
  • Why should individuals own matter? If they don't, then our intuition about what constitutes fair division would change drastically.

may take some explaining before it's equally obvious to everyone

That doesn't seem like a particularly useful meaning of "obvious."

The real issue is that most people don't believe in the future. Do you want that as a prereq for game theory? Does it have positive propaganda value? (I'd guess that it has negative propaganda value, but I'd also guess that the people complaining are incorrect about how distracting it is.)

Beat me to it. Link.

See also: "Trivial".

My intuitive proposal for m_i's negentropy is to take a fraction of the total entropy equal to m_i's fraction of the total mass, and I see this is equal to the Shapley value. Does this continue to be the case for other negentropy(mass) functions?

(Regardless, Shapley's algorithm seems more fair than mine.)

That's a good question, and the answer turns out to be no. For example, suppose negentropy(mass) = mass^3 instead of mass^2. Then in sequence {Alice, Bob}, Alice's MC = A^3 and Bob's MC = 3 A^2 B + 3 A B^2 + B^3. Average MC for Alice is A^3 + 3/2 A^2 B + 3/2 A B^2.

In the proportional approach you suggest, Alice's allocation would be (A+B)^3 * A/(A+B) = A (A+B)^2 = A^3 + 2 A^2 B + A B^2.

The difference is troubling, and I'll have to think about what is going on.
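
The mismatch can be checked symbolically; here is a self-contained sympy sketch of the two-player cubic case:

```python
import sympy as sp

A, B = sp.symbols("A B", positive=True)
u = lambda m: m**3

# Two-player Shapley: average Alice's marginal contributions
# over the orderings {Alice, Bob} and {Bob, Alice}.
alice_shapley = sp.expand((u(A) + (u(A + B) - u(B))) / 2)
alice_prop    = sp.expand(u(A + B) * A / (A + B))

print(alice_shapley)                          # A**3 + 3*A**2*B/2 + 3*A*B**2/2
print(alice_prop)                             # A**3 + 2*A**2*B + A*B**2
print(sp.factor(alice_prop - alice_shapley))  # A*B*(A - B)/2: zero only when A == B
```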

First, by phrasing the question as negentropy(mass) you have assumed we're talking about agents contributing some quantities of a fungible good, and a transform on the total of the good which yields utility. Proportional allocation doesn't make any sense at all (AFAICT) if the contributions aren't of a fungible good.

But let's take those assumptions and run with them. Alice and Bob will contribute A and B of a fungible good. The total contribution will be A+B, which will yield u(A+B) utility. How much of the utility should be credited to Alice, and how much to Bob?

Shapley says Alice's share of the credit should be u(A)/2 + u(A+B)/2 - u(B)/2. Proportional allocation says instead u(A+B) A/(A+B). When precisely are these equal? *algebra*

[u(A) - u(B)] / u(A+B) = (A - B) / (A + B)

Well now that's a lot of structure! I'm having trouble phrasing it as a simple heuristic that resonates intuitively with me... I don't feel like I understand as a matter of some more basic principle why proportional allocation only makes sense when this holds. Can anyone help me out by translating the functional equation into some pithy English?

Wait, maybe this will help: Alice, Bob, and Eve.

2/6 u(A) + 1/6 (u(A+B) - u(B)) + 1/6 (u(A+E) - u(E)) + 2/6 (u(A+B+E) - u(B+E)) = u(A+B+E) * A / (A+B+E)

*algebra*

{[2u(A) - 2u(B+E)] + [u(A+B) - u(E)] + [u(A+E) - u(B)]} / u(A+B+E) = {[2A - (B+E)] + [A - E] + [A - B]} / (A+B+E)

Is this the best structure for understanding it? I'm not sure, 'cause I still don't intuit what's going on, but it seems pretty promising to me.

(Edited to rearrange equations and add:) If we want proportional allocation to work, then comparing the difference in contributions between any two agents to the magnitude of the total contribution should be the same as comparing the difference in utility derivable from the agents alone to the magnitude of the total utility derivable from the agents together.

Sounds pretty but I'm not sure I intuit why it rather than another principle should hold here.

Differentiating your first equation by B at B=0 after rearranging the terms a bit, we get a differential equation:

u'(x) = 2u(x)/x - u'(0)

Solving the equation using the first method from this page yields

u(x) = C_1 x + C_2 x^2
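
A quick sympy check of this solution, writing k for u'(0):

```python
import sympy as sp

x, k = sp.symbols("x k")
u = sp.Function("u")

# u'(x) = 2 u(x)/x - k, with k standing in for u'(0)
sol = sp.dsolve(sp.Eq(u(x).diff(x), 2 * u(x) / x - k), u(x))
print(sol)  # equivalent to u(x) = k*x + C1*x**2, i.e. C_1 x + C_2 x^2
```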

This gives the necessary and sufficient condition for the Shapley value to coincide with proportional allocation for two participants. The result also holds for three or more participants because the Shapley value is linear with respect to the utility function. So yeah, Wei Dai just got lucky with that example.

(I'd nearly forgotten how to do this stuff. Thanks for the exercise!)

Good work. :)

We're left with the question: for a non-linear and non-quadratic utility function, what is fairer, proportional allocation, or Shapley Value? Among the fairness properties satisfied by Shapley Value, it appears that proportional allocation doesn't satisfy "Equal Impact", defined as follows (from page 161 of Moulin's book):

The impact of removing agent j on agent i's share is the same as that of removing agent i on agent j's share

With the cubic example, Bob's impact on Alice with Shapley Value is 3/2 A^2 B + 3/2 A B^2, and with proportional allocation it's 2 A^2 B + A B^2. So, is Equal Impact essential to fairness? I'm not sure...
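
To make the comparison concrete, here is a sympy sketch of the two-player cubic case, checking Equal Impact for both rules (variable names are mine):

```python
import sympy as sp

A, B = sp.symbols("A B", positive=True)
u = lambda m: m**3

# Shapley shares with both players present; a player alone just gets u of her own mass.
alice_sh = (u(A) + u(A + B) - u(B)) / 2
bob_sh   = (u(B) + u(A + B) - u(A)) / 2
bob_on_alice = alice_sh - u(A)   # impact of removing Bob on Alice's share
alice_on_bob = bob_sh - u(B)     # impact of removing Alice on Bob's share
print(sp.simplify(bob_on_alice - alice_on_bob))  # 0: Shapley satisfies Equal Impact

alice_pr = u(A + B) * A / (A + B)
bob_pr   = u(A + B) * B / (A + B)
print(sp.factor((alice_pr - u(A)) - (bob_pr - u(B))))  # A*B*(A - B): proportional fails
```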

I'm gonna ramble a bit to clear up my own thoughts, apologies if this sounds obvious...

The Shapley value is the only value operator defined on all coalitional games that satisfies some intuitive axioms. (They're all very natural and don't include "Equal Impact".) Proportional allocation isn't defined on all coalitional games, only a subset of them where you arbitrarily chose some numbers as players' "contributions". (A general coalitional game doesn't come with that structure - it's just a set of 2^N numbers that specifies the payoff for each possible cooperating clique.) After this arbitrary step, proportional allocation does seem to satisfy the same natural axioms that the Shapley value does. But you can't expand it to all coalitional games coherently, because otherwise the intuitive axioms would force your "contribution" values to become Shapley values.

In general, I see no reason to use proportional allocation over the Shapley value. If each player suffers a loss of utility proportional to their individual contribution, or any other side effect with an arbitrary cost function, just include it in the SV calculation.

Ok, I think I can articulate a reason for my doubt about the Shapley Value. One nice property for a fair division method to have is that the players can't game the system by transferring their underlying contributions to one another. That is, Alice and Eve shouldn't be able to increase their total negentropy allocation (at Bob's expense) by transferring matter from one to the other ahead of time. Proportional allocation satisfies this property, but Shapley Value doesn't (unless it happens to coincide with proportional allocation).
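
A concrete instance, sticking with the cubic utility from upthread (a self-contained numeric sketch; the `shapley` helper and the masses are mine, for illustration):

```python
from itertools import permutations

def shapley(masses, u):
    """Brute-force Shapley value for v(S) = u(total mass of coalition S)."""
    players = list(masses)
    totals = {p: 0 for p in players}
    orderings = list(permutations(players))
    for ordering in orderings:
        mass, prev = 0, 0
        for p in ordering:
            mass += masses[p]
            totals[p] += u(mass) - prev
            prev = u(mass)
    return {p: t / len(orderings) for p, t in totals.items()}

u = lambda m: m**3

before = shapley({"Alice": 2, "Eve": 0, "Bob": 1}, u)
after  = shapley({"Alice": 1, "Eve": 1, "Bob": 1}, u)   # Alice transfers 1 unit to Eve
print(before["Alice"] + before["Eve"])   # 17.0
print(after["Alice"] + after["Eve"])     # 18.0 -- the pair gained at Bob's expense
# Proportional allocation would give the pair 27 * (2/3) = 18 in both cases.
```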

If such transfer of resources is allowed, your share of negentropy must depend only on your contribution, the total contribution and the number of players. If we further assume that zero contribution implies zero share, it's straightforward to prove (by division in half, etc.) that proportional allocation is the only possible scheme.

This still isn't very satisfying. John Nash would have advised us to model the situation with free transfer as a game within some larger class of games and apply some general concept like the Shapley value to make the answer pop out. But I'm not yet sure how to do that.

If such transfer of resources is allowed, your share of negentropy must depend only on your contribution, the total contribution and the number of players. If we further assume that zero contribution implies zero share, it's straightforward to prove (by division in half, etc.) that proportional allocation is the only possible scheme.

One difficulty with proportional allocation is deciding how to measure contributions. Do you divide proportionately to mass contributed, or do you divide proportionately to negentropy contributed?

In Shapley value, a coalition of Alice and Eve against Bob is given equal weight with the other two possible two-against-one coalitions. Yes, Shapley does permit 'gaming' in ways that proportional allocation does not, but it treats all possible 'gamings' (or coalition structures) equally.

According to Moulin, there are several different sets of axioms that can be used to uniquely derive the Shapley Value, and Equal Impact is among them (it can be used to derive Shapley Value by itself, if I understand correctly).

The problem with all of those sets of axioms is that each set seems to include at least one axiom that isn't completely intuitive. For example, using the terminology in the Wikipedia article, we can use Symmetry, Additivity and Null Player, and while Symmetry and Null Player seem perfectly reasonable, I'm not so sure about Additivity.

Neat. So if utility is linear or quadratic, the Shapley value is proportional.

If you want to talk about future societies creating black holes as entropy sinks, fine. But if you want to talk about game theory, you can't talk about black holes at the same time without confusing things very badly.

I updated this post with references to a couple of textbooks on cooperative game theory.

In an article like this, it might help to say what the terms "cooperative game theory" and "non-cooperative game theory" mean - or at least provide a reference. These are esoteric technical terms with a counter-intuitive meaning.

I assume that the reader is already familiar with "non-cooperative game theory". It's what everyone already knows, usually just by the name "game theory". The typical definition of "cooperative game theory" isn't very helpful for understanding the difference between the two, which is why I'm providing an example instead.

But here's Wikipedia's definition of "cooperative game":

A cooperative game is a game where groups of players ("coalitions") may enforce cooperative behaviour, hence the game is a competition between coalitions of players, rather than between individual players. An example is a coordination game, when players choose the strategies by a consensus decision-making process.

A single example alone doesn't usually do much to delineate the set it is part of.

If I show you something from my bag, that doesn't give you much of a clue what else is in it.

The example given here gives me the impression that 'cooperative game theory' is just non-cooperative game theory where each agent is a coalition rather than an individual. Unless this is the intended meaning I suggest the definition is misleading.


Let’s say there is a civilization consisting of a number of individuals, each the owner of some matter with mass m_i. They know that their civilization can’t produce more than (∑ m_i)^2 bits of total entropy over its entire history

Typo: (∑ m_i)^2 should be m_i^2

No, that's not a typo... it's supposed to be the square of the sum of the individual masses.