This result should be easy to extend to games with more than two players
I suspect that there are difficulties when more than two players are involved in the decision making, since a coalition of a majority can leave a minority powerless.
For example, suppose that three players iteratively play a game in which they vote on how to divide up a jackpot. Four choices: an even three-way split, or one of the three possible two-way splits (1-2, 2-3 or 3-1).
Majority rule. If no proposed split receives a majority, then nobody gets anything.
There are three stable bargaining solutions - each of which is Pareto optimal but forces one of your θ factors to be zero. There needs to be some added structure to the problem to force the three-way split as the unique solution. As I described it, the three-way split is not even a solution.
ETA: In the 3-way split option, assume that Omega increases the jackpot by 5%, so that total payoff is maximized by the three-way split. But it is still better to split two ways, if you can find a partner.
Yes, coalitions and things like that can happen. But the result still applies - every Pareto optimal solution is a weighting of utilities and a maximising of the sum. They don't have to be fair or nice, though.
If the players all have utilities that scale linearly with money for the jackpot, then all four of your choices correspond to valuing everyone's utilities equally and maximising the sum; it's only because the solution space is a flat plane that we need an extra "tie-break" method for deciding which to go for.
every Pareto optimal solution is a weighting of utilities and a maximizing of the sum.
all four of your choices correspond to valuing everyone's utilities equally and maximizing the sum;
Not true after my edit. The three-way split is the unique maximum when all players are valued equally; but the two-way splits are also Pareto optimal, are each favored by a majority of players, and are bargaining solutions (for some bargaining protocols).
But, now that I look at it, these two-way splits are also scaled utility maxima - for example, using a (49, 49, 2) scaling.
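A quick numeric sanity check of that claim (a sketch assuming linear utilities, the jackpot normalised to 1, and the 5% bonus on the three-way split from the edit above):

```python
# Weighted sums under the (49, 49, 2) scaling, jackpot normalised to 1,
# with the 5% bonus on the three-way split from the edit above.
weights = (49, 49, 2)

candidates = {
    "three-way split (5% bonus)": (1.05 / 3, 1.05 / 3, 1.05 / 3),
    "split between 1 and 2":      (0.5, 0.5, 0.0),
    "split between 2 and 3":      (0.0, 0.5, 0.5),
    "split between 3 and 1":      (0.5, 0.0, 0.5),
}

for name, payoffs in candidates.items():
    weighted_sum = sum(w * p for w, p in zip(weights, payoffs))
    print(f"{name:28s} weighted sum = {weighted_sum:5.2f}")

# The 1-2 split scores 49.0 against the three-way split's 35.0, so it is
# indeed a maximum of this particular scaled sum of utilities.
```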
You have me convinced that every point on the Pareto frontier is a maximum of some weighted sum of utilities. But you don't yet have me convinced that, in every case, all of the weights can be non-zero.
ETA: I am a bit more convinced after re-reading the last paragraph of your posting.
No, you are right. For certain setups (say, when the set of outcomes is a perfect circle/sphere), if one player gets everything they want, that can only happen when the weight of the other player's utility is zero.
I didn't write up that case, because there I would have to let the thetas be zero and infinity, a subtlety that would distract from the main point.
Two concerns.
First, and I don't mean this disrespectfully, why should we care? Trying to generate Pareto optimality through a series of games played by randomly but fully paired-off players, with no potential for trade, doesn't resemble anything I can think of. This is particularly true because players of games have little interest in Pareto optimality; the very concept of "game" implies "competition," and the concept of "Pareto optimality" implies "cooperation." Indeed, this assumes that trade is impossible, or else making game outcomes Pareto optimal is redundant. While I may in some sense prefer Pareto optimal outcomes for games, what most players are assumed to care about is maximizing their own utility, which is largely unrelated to Pareto optimality. Perhaps my imagination is insufficient, but this doesn't seem applicable to anything.
Second, the =1 seems arbitrary, because there are no units. So long as none of the factors are zero, the product will be some number that could be scaled to 1. I don't think this undermines your point; it just seems odd. Furthermore, it's interesting to note that there is always a consistent scale in which the product equals zero, even if there is another where it equals 1. I'm not sure if this is significant; again, it just seems odd.
Who said there is no potential for trade? Each game could be exactly that - a trade - where the "game" is simply dividing up the gain from trade.
The product of the three thetas cannot be scaled. First of all, the thetas are unitless (each is a ratio between two utility functions), so you can't scale them by changing their units. You can change individual thetas by scaling u1, u2 or u3, but this will not change their product: scaling u1, for instance, will scale θ12 one way and θ31 the opposite way, cancelling out. The =1 is well defined.
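Here is a minimal sketch of that invariance (the starting values of the thetas are arbitrary, chosen only for illustration):

```python
# Rescaling u1 by a factor c turns "maximise u1 + theta_12*u2" into
# "maximise (c*u1) + (c*theta_12)*u2", so theta_12 becomes c*theta_12,
# while "maximise u3 + theta_31*u1" becomes "maximise u3 + (theta_31/c)*(c*u1)",
# so theta_31 becomes theta_31/c. The product is therefore unchanged.

theta_12, theta_23, theta_31 = 2.0, 0.25, 2.0    # arbitrary illustrative values

for c in (0.5, 1.0, 3.0, 10.0):                  # rescale u1 by c
    rescaled_product = (c * theta_12) * theta_23 * (theta_31 / c)
    print(c, rescaled_product)                   # always equals the original product
```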
If trade is permitted, and there are no transaction costs, all outcomes are Pareto optimal. That's what I was objecting to.
I think you might be worried more about maximizing expected (or average) utility than Pareto optimization. The two are largely unrelated.
Again, I may be misunderstanding something, but the requirement that there be a universal scale for utility seems to require that individuals be interchangeable, if the scale is dependent on observable characteristics (e.g. someone who has $200 has 2 utils of happiness). Unless this scale is somehow contingent on measuring subjective utility, you're saying that Pareto optimality should be possible only when players have an agreed-upon set of preferences.
Even if I'm confused on that point, I still fail to see how this hypothetical relates to anything in reality.
If the two people in a trade reach an outcome that is Pareto optimal between them, then yes, that trade is Pareto optimal.
However, if there are three trades between three people (12, 23 and 31) and each trade is Pareto-optimal, that does not mean that the set of three trades as a whole is Pareto-optimal. It's generally possible to change these trades and get a better outcome for everyone (which is kinda what a market with price signals tries to do).
The only cases where you cannot do this are precisely those where there is an agreed-upon weighting of everyone's utility functions, etc.
The individuals need not be interchangeable, and the weighting need not be fair. But Pareto-optimality is only possible in this situation (when the outcome set is smooth).
If we assume that all theoretical choices are possible with positive probability, then the set should be, in fact, smooth, or at least differentiable, as it would be impossible for two different weightings to produce the same exact outcome. This makes the result nicer.
I proved an essentially analogous result, viewing it as a justification of utilitarianism.
I think the fact that Pareto is equivalent to utility-weighting is well-known in economics. It's one of the consequences of the separating hyperplane theorem. I have not seen it applied to bargaining, because economists mostly think in terms of actual rather than optimal bargaining. Perhaps it has been, though.
Rather, it is to decide in advance how much your respective utilities are worth relative to each other...
How would the players go about this, exactly? It sounds to me like you are assuming a can opener.
tldr: In a population playing independent two-player games, Pareto-optimal outcomes are only possible if there is an agreed universal scale of value relating each player's utility, and the players then act to maximise the scaled sum of all utilities.
In a previous post, I showed that if you are about to play a bargaining game with someone when the game's rules are initially unknown, then the best plan is not to settle on a standard result like the Nash Bargaining Solution or the Kalai-Smorodinsky Bargaining Solution (see this post). Rather, it is to decide in advance how much your respective utilities are worth relative to each other, and then maximise their sum. Specifically, if you both have (representatives of) utility functions u1 and u2, then you must pick a θ>0 and maximise u1+θu2 (with certain extra measures to break ties). This result also applies if the players are to play a series of known independent games in sequence. But how does this extend to more than two players?
Consider the case where there are three players (named imaginatively 1, 2 and 3), and that they are going to pair off in each of the possible pairs (12, 23 and 31) and each play a game. The utility gains from each game are presumed to be independent. Then each of the pairs will choose factors θ12, θ23 and θ31, and seek to maximise u1+θ12u2, u2+θ23u3 and u3+θ31u1 respectively. Note here that I am neglecting tie-breaking and such; the formal definitions needed will be given in the proof section.
A very interesting situation comes up when θ12θ23θ31=1. In that case, there is a universal scale of "worth" for each of the utilities: it's as if the three utilities are pounds, dollars and euros. Once you know the exchange rate from pounds to dollars (θ12), and from dollars to euros (θ23), then you know the exchange rate from euros to pounds (θ31=1/(θ12θ23)). We'll call these situations transitive.
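As a sanity check, here is a minimal sketch (with arbitrary illustrative numbers) of the correspondence between transitive θs and a universal scale of worth:

```python
# Any positive weights (w1, w2, w3) induce thetas whose product is exactly 1,
# and transitive thetas let you read the weights back off (up to overall scale).
w1, w2, w3 = 3.0, 1.5, 6.0              # arbitrary "worth" per unit of u1, u2, u3

# Pair 12 maximises w1*u1 + w2*u2, i.e. u1 + (w2/w1)*u2, and so on around the cycle:
theta_12 = w2 / w1
theta_23 = w3 / w2
theta_31 = w1 / w3

print(theta_12 * theta_23 * theta_31)          # 1.0: the thetas are transitive

# Conversely, fixing player 1's weight at 1 recovers the scale from the thetas:
print((1.0, theta_12, theta_12 * theta_23))    # proportional to (w1, w2, w3)
```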
Ideally we'd want the outcomes to be Pareto-optimal for the three utilities. Then the major result is:
The outcome utilities are Pareto-optimal if and only if the θ are transitive.
What if there are not three but hundreds, or thousands, of players, playing games with each other? If we assume that the utilities derived from each game are independent, then we get a similar result: given a sequence of players 1, 2,..., n such that each plays a game with the next player, and players n and 1 also play a game, then we still get:
The outcome utilities are Pareto-optimal if and only if the θ are transitive (i.e. θ12θ23...θ(n-1)nθn1 = 1).
What this means is that Pareto-optimal outcomes are only possible if there is a universal (linear) scale of value relating the utilities of all players linked to one another by direct or indirect trade links. The only important factor is the weight of your utility in the crowd. This result should be easy to extend to games with more than two players, but that's beyond the scope of this post.
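The same telescoping works for any number of players; a minimal sketch with randomly chosen weights:

```python
import math
import random

# With a single scale of weights w[0..n-1] around the cycle, each theta is a
# ratio of consecutive weights, so the cyclic product telescopes to 1.
n = 100
w = [random.uniform(0.1, 10.0) for _ in range(n)]
thetas = [w[(i + 1) % n] / w[i] for i in range(n)]   # theta_12, theta_23, ..., theta_n1

print(math.prod(thetas))   # 1.0, up to floating-point rounding
```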
The rest of this post is a proof of the result, and may be skipped by those not of a technical bent.
Proof:
Apologies if this result is known or trivial; I haven't seen it before myself.
Here, we will always assume that the set of possible outcomes is convex (mixed solutions are always possible) and that the Pareto-optimal outcome set is smooth (no corners) and contains no straight line segments. What these conditions mean is that if O is the set of Pareto-optimal outcomes, then each point in O has a well-defined tangent and normal vector. Further, the slope of the tangent never stays constant as we move around O. Because O is the upper-right boundary of a convex set, this means that each point in O is uniquely determined by the slope of its tangent vector - equivalently, by the slope of its normal vector. The normal vector is particularly useful, for the point in O that maximises the utility u1+θu2 is the point that has normal vector (1, θ). This means that each point in O is uniquely determined by the value of θ. This is illustrated in the following diagram. Here θ=2/3, the purple set is the set of possible outcomes, the blue lines are the sets of constant x+(2/3)y, and the red normal vector (1, 2/3) is drawn from the maximising outcome point (1, 2.5):
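For readers who want to check the normal-vector claim numerically, here is a minimal sketch; it uses a unit disk as the outcome set, an arbitrary choice for illustration rather than the purple set from the diagram:

```python
import math

theta = 2 / 3

# Brute-force the point of the unit circle maximising x + theta*y.
points = [(math.cos(2 * math.pi * i / 100000), math.sin(2 * math.pi * i / 100000))
          for i in range(100000)]
best = max(points, key=lambda p: p[0] + theta * p[1])

# For a disk, the outward normal at a boundary point is the point itself,
# so its slope should equal theta.
print(best, best[1] / best[0])   # slope is ~0.6667, i.e. theta, as claimed
```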
Now assume we have θ12, θ23 and θ31 as above. The utility outcomes for the three games (given by maximising u1+θ12u2, u2+θ23u3 and u3+θ31u1) will be designated o12=(x1,y2), o23=(x2,y3) and o31=(x3,y1), with the index corresponding to the players.
We now consider a change to the utilities by adding the vectors v12=(x12,y12), v23 and v31 to each of these outcomes. We want the changes to be Pareto improvements, and to remain within the set of possible outcomes for each game. The first condition requires that the change to each player's total utility be non-negative; i.e. that x12+y31 (the change to player 1's utility), x23+y12 and x31+y23 all be non-negative. The second condition requires that o12+v12 not lie on the far side of the tangent line through o12. Since the normal to this tangent line is (1,θ12), this is equivalent to requiring that the dot product of v12 and (1,θ12) be non-positive, hence that x12+θ12y12, x23+θ23y23 and x31+θ31y31 all be non-positive.
Then using all six inequalities above, we can see that:
x12 ≤ -θ12(y12) ≤ θ12(x23) ≤ -θ12θ23(y23) ≤ θ12θ23(x31) ≤ -θ12θ23θ31(y31) ≤ θ12θ23θ31(x12)
Now, if the change is actually going to be a Pareto improvement, at least one of these inequalities will have to be strict, giving x12 < θ12θ23θ31(x12). This is obviously impossible when θ12θ23θ31 = 1, so in that circumstance there cannot be any Pareto improvements. This establishes one direction of the equivalence.
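This impossibility can be checked numerically; the following sketch (with an arbitrary transitive choice of thetas) samples random candidate changes and confirms that none of them improves every player's total utility:

```python
import random

# An arbitrary transitive choice: 2 * 0.25 * 2 = 1.
theta_12, theta_23, theta_31 = 2.0, 0.25, 2.0

for _ in range(100000):
    x12, x23, x31 = (random.uniform(-1, 1) for _ in range(3))
    # Take each y at the largest value the half-plane constraint allows
    # (x + theta*y <= 0, i.e. y <= -x/theta); any smaller feasible y only
    # shrinks the other player's gain, so checking the boundary suffices.
    y12, y23, y31 = -x12 / theta_12, -x23 / theta_23, -x31 / theta_31
    gains = (x12 + y31, x23 + y12, x31 + y23)   # players 1, 2 and 3
    assert not all(g > 1e-12 for g in gains), "found a Pareto improvement?!"

print("no strict Pareto improvement found, as the chain of inequalities predicts")
```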
Now let T be the sixth root of θ12θ23θ31. If θ12θ23θ31 > 1 (equivalently, T > 1), then the following vectors (strictly) obey all the above inequalities:
v12=(θ12θ23θ31 ε, -Tθ23θ31 ε), v23=(T²θ23θ31 ε, -T³θ31 ε), v31=(T⁴θ31 ε, -T⁵ε).
Since these obey the inequalities strictly and the outcome sets are smooth, for sufficiently small ε the changed outcomes remain achievable and give a strict Pareto improvement. If θ12θ23θ31 < 1 (equivalently, T < 1), then the following vectors work in the same way:
v12=(-θ12θ23θ31 ε, Tθ23θ31 ε), v23=(-T²θ23θ31 ε, T³θ31 ε), v31=(-T⁴θ31 ε, T⁵ε).
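The following sketch checks numerically that these explicit vectors do strictly satisfy all six inequalities, for a few arbitrary non-transitive choices of the thetas:

```python
def improvement(t12, t23, t31, eps=1e-3):
    """The explicit vectors from the two displays above (P > 1 and P < 1 cases)."""
    P = t12 * t23 * t31
    T = P ** (1 / 6)
    s = 1 if P > 1 else -1
    v12 = (s * P * eps,                -s * T * t23 * t31 * eps)
    v23 = (s * T**2 * t23 * t31 * eps, -s * T**3 * t31 * eps)
    v31 = (s * T**4 * t31 * eps,       -s * T**5 * eps)
    return v12, v23, v31

for t12, t23, t31 in [(2.0, 1.0, 1.0), (0.5, 0.5, 0.5), (3.0, 0.2, 4.0)]:
    v12, v23, v31 = improvement(t12, t23, t31)
    inside = (v12[0] + t12 * v12[1] < 0 and     # strictly inside each half-plane
              v23[0] + t23 * v23[1] < 0 and
              v31[0] + t31 * v31[1] < 0)
    gains = (v12[0] + v31[1], v23[0] + v12[1], v31[0] + v23[1])   # players 1, 2, 3
    print(inside, all(g > 0 for g in gains))    # True True in every case
```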
This proof can be extended to n players in a similar fashion.
When the outcome sets contain straight line segments, we need a tie-breaking system for cases when the "utility indifference lines" are parallel to these segments; apart from that, the result goes through. If the outcome sets contain corners, it remains true that there are no Pareto improvements when θ12θ23θ31=1. But it is possible to have θ12θ23θ31≠1 and still be Pareto-optimal, as long as the outcomes are on a corner. However, in these circumstances, there will always be values θ'12, θ'23 and θ'31 giving the same outcomes as θ12, θ23 and θ31, and with θ'12θ'23θ'31=1. This can be seen by replacing the corner with a series of limiting smooth curves.