Non-cooperative game theory, as exemplified by the Prisoner’s Dilemma and commonly referred to by just "game theory", is well known in this community. But cooperative game theory seems to be much less well known. Personally, I had barely heard of it until a few weeks ago. Here’s my attempt to give a taste of what cooperative game theory is about, so you can decide whether it might be worth your while to learn more about it.
The example I’ll use is the fair division of black-hole negentropy. It seems likely that for an advanced civilization, the main constraining resource in the universe is negentropy. Every useful activity increases entropy, and since entropy of the universe as a whole never decreases, the excess entropy produced by civilization has to be dumped somewhere. A black hole is the only physical system we know whose entropy grows quadratically with its mass, which makes it ideal as an entropy dump. (See http://weidai.com/black-holes.txt where I go into a bit more detail about this idea.)
Let’s say there is a civilization consisting of a number of individuals, each the owner of some matter with mass mᵢ. They know that their civilization can’t produce more than (∑ mᵢ)² bits of total entropy over its entire history, and the only way to reach that maximum is for every individual to cooperate and eventually contribute his or her matter into a common black hole. (Separate black holes do strictly worse: two holes of masses m₁ and m₂ store only m₁² + m₂² bits, less than the (m₁+m₂)² available from merging them.) A natural question arises: what is a fair division of the (∑ mᵢ)² bits of negentropy among the individual matter owners?
Fortunately, Cooperative Game Theory provides a solution, known as the Shapley Value. There are other proposed solutions, but the Shapley Value is well accepted due to its desirable properties such as “symmetry” and “additivity”. Instead of going into the theory, I’ll just show you how it works. The idea is, we take a sequence of players, and consider the marginal contribution of each player to the total value as he or she joins the coalition in that sequence. Each player is given an allocation equal to his or her average marginal contribution over all possible sequences.
So in the black-hole negentropy game, suppose there are two players, Alice and Bob, with masses A and B. There are then two possible sequences, {Alice, Bob} and {Bob, Alice}. In {Alice, Bob}, Alice’s marginal contribution is just A². When Bob joins, the total value becomes (A+B)², so his marginal contribution is (A+B)² - A² = B² + 2AB. Similarly, in {Bob, Alice}, Bob’s MC is B², and Alice’s is A² + 2AB. Alice’s average marginal contribution, and hence her allocation, is therefore A² + AB, and Bob’s is B² + AB.
What happens when there are n players? The math is not hard to work out, and the result is that player i gets an allocation equal to mᵢ (m₁ + m₂ + … + mₙ), i.e., his or her own mass times the total mass. Seems fair, right?
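Here’s a quick brute-force check of that formula (a minimal sketch of my own, not part of the original post): it averages each player’s marginal contribution over all join orders, exactly as described above, and compares against the closed form mᵢ × (total mass).

```python
# A minimal sketch (my own, not from the post): brute-force Shapley values
# for the negentropy game, where a coalition holding total mass M is worth
# v = M**2 bits. Works for small n by enumerating all n! join orders.
from itertools import permutations

def shapley_values(masses):
    """Average each player's marginal contribution over all join orders."""
    n = len(masses)
    totals = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        mass_so_far = 0.0
        for player in order:
            before = mass_so_far ** 2
            mass_so_far += masses[player]
            totals[player] += mass_so_far ** 2 - before  # marginal contribution
    return [t / len(orders) for t in totals]

masses = [2.0, 3.0, 5.0]
print(shapley_values(masses))             # [20.0, 30.0, 50.0]
print([m * sum(masses) for m in masses])  # the closed form m_i * (total mass)
```

Enumerating all n! orders only works for small n, of course; for larger games one would sample random orders instead.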
ETA: At this point, the interested reader can pursue two paths to additional knowledge. You can learn more about the rest of cooperative game theory, or compare other approaches to the problem of fair division, for example welfarist and voting-based. Unfortunately, I don't know of a good online resource or textbook for systematically learning cooperative game theory. If anyone does, please leave a comment. For the latter path, a good book is Hervé Moulin's Fair Division and Collective Welfare, which includes a detailed discussion of the Shapley Value in chapter 5.
ETA2: I found that Martin J. Osborne and Ariel Rubinstein's website offers their game theory textbook for free (after registration), and it contains several chapters on cooperative game theory. The site also has several other books that might be of relevance to this community. A more comprehensive textbook on cooperative game theory seems to be Introduction to the Theory of Cooperative Games. A good reference is Handbook of Game Theory with Economic Applications.
First, by phrasing the question as negentropy(mass) you have assumed we're talking about agents contributing some quantities of a fungible good, and a transform on the total of the good which yields utility. Proportional allocation doesn't make any sense at all (AFAICT) if the contributions aren't of a fungible good.
But let's take those assumptions and run with them. Alice and Bob will contribute A and B of a fungible good. The total contribution will be A+B, which will yield u(A+B) utility. How much of the utility should be credited to Alice, and how much to Bob?
Shapley says Alice's share of the credit should be u(A)/2 + u(A+B)/2 - u(B)/2. Proportional allocation says instead u(A+B) * A/(A+B). When precisely are these equal? A bit of algebra gives:
[u(A) - u(B)] / u(A+B) = (A - B) / (A + B)
Well now that's a lot of structure! I'm having trouble phrasing it as a simple heuristic that resonates intuitively with me... I don't feel like I understand as a matter of some more basic principle why proportional allocation only makes sense when this holds. Can anyone help me out by translating the functional equation into some pithy English?
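For anyone who wants to poke at the condition concretely, here's a small sympy sketch (my own addition, not part of the original comment; the helper name `condition` is just for illustration). It confirms that Wei Dai's u(x) = x² satisfies the equation, while e.g. u(x) = x³ does not:

```python
# A quick symbolic check (via sympy): does the functional equation hold
# for a given u? It does for u(x) = x**2 but not for u(x) = x**3.
import sympy as sp

A, B = sp.symbols('A B', positive=True)

def condition(u):
    # [u(A) - u(B)] / u(A+B) - (A - B) / (A + B), simplified
    return sp.simplify((u(A) - u(B)) / u(A + B) - (A - B) / (A + B))

print(condition(lambda x: x**2))  # 0 -> Shapley matches proportional
print(condition(lambda x: x**3))  # nonzero -> they differ
```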
Wait, maybe this will help: Alice, Bob, and Eve.
(2/6) u(A) + (1/6) (u(A+B) - u(B)) + (1/6) (u(A+E) - u(E)) + (2/6) (u(A+B+E) - u(B+E)) = u(A+B+E) * A / (A+B+E)
Multiplying by 6 and rearranging:

{[2u(A) - 2u(B+E)] + [u(A+B) - u(E)] + [u(A+E) - u(B)]} / u(A+B+E) = {[2A - (B+E)] + (A - E) + (A - B)} / (A+B+E)
Is this the best structure for understanding it? I'm not sure, 'cause I still don't intuit what's going on, but it seems pretty promising to me.
(Edited to rearrange equations and add:) If we want proportional allocation to work, then comparing the difference in contributions between any two agents to the magnitude of the total contribution should be the same as comparing the difference in utility derivable from the agents alone to the magnitude of the total utility derivable from the agents together.
Sounds pretty but I'm not sure I intuit why it rather than another principle should hold here.
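If it helps, here's a numeric cross-check (my own sketch, generalizing the brute-force Shapley code above to an arbitrary u): with three players, the Shapley allocation coincides with proportional allocation for u(x) = x², but not for, say, u(x) = x^1.5.

```python
# A numeric cross-check (my own sketch): three-player Shapley vs. proportional
# allocation for two choices of u. They agree for u(x) = x**2, not in general.
from itertools import permutations

def shapley(contribs, u):
    n = len(contribs)
    totals = [0.0] * n
    orders = list(permutations(range(n)))
    for order in orders:
        so_far = 0.0
        for p in order:
            totals[p] += u(so_far + contribs[p]) - u(so_far)  # marginal utility
            so_far += contribs[p]
    return [t / len(orders) for t in totals]

contribs = [2.0, 3.0, 5.0]
total = sum(contribs)
for u in (lambda x: x**2, lambda x: x**1.5):
    shap = shapley(contribs, u)
    prop = [u(total) * c / total for c in contribs]
    print([round(s - p, 6) for s, p in zip(shap, prop)])
# first line: all zeros (u = x^2); second line: nonzero differences
```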
Differentiating your first equation by B at B = 0, after rearranging the terms a bit (note that setting B = 0 in the original equation forces u(0) = 0), we get a differential equation:

A u'(A) = 2 u(A) - u'(0) A
Solving the equation (a standard first-order linear ODE) yields

u(A) = C₁A² + C₂A, where C₂ = u'(0).
This gives the necessary and sufficient condition for the Shapley value to coincide with proportional allocation for two participants. The result also holds for three or more participants because the Shapley value is linear with respect to the utility function. So yeah, Wei Dai just got lucky with that example.
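For anyone who wants to double-check those two steps, here's a sympy sketch (my own addition, under the assumptions of the comment above):

```python
# Solve A*u'(A) = 2*u(A) - c*A with sympy, where the symbol c stands
# for u'(0). Expect the quadratic-plus-linear family.
import sympy as sp

A = sp.symbols('A', positive=True)
c = sp.symbols('c')
u = sp.Function('u')

sol = sp.dsolve(sp.Eq(A * u(A).diff(A), 2 * u(A) - c * A), u(A))
print(sol)  # u(A) = C1*A**2 + c*A (sympy may print a factored form)
```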
(I'd nearly forgotten how to do this stuff...)