GuySrinivasan comments on If you don't know the name of the game, just tell me what I mean to you - Less Wrong
Your post seems to point out that one can consider mixed coordinated strategies in the global game (where in the first round you are told which game you play, and in the second round you play it). The set of payoffs thus obtained is the convex closure of the pure-strategy payoffs; in particular, payoffs on the Pareto frontier of the global game are representable as convex combinations of payoffs on the Pareto frontiers of the individual games. In an even more special case, this point applies to any notion of a "fair" solution.
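For concreteness, here is a minimal numeric sketch of the convex-combination point (the payoff profiles and the function name below are mine, invented for illustration, not from the post): if the first game is selected with probability p, a coordinated strategy achieving profile x in game 1 and y in game 2 yields the expected profile p*x + (1-p)*y in the global game.

```python
# A minimal sketch (payoff profiles invented for illustration): expected
# payoffs in the global game are convex combinations of the payoffs the
# coordinated strategy achieves in the individual games.

def global_payoff(p, x, y):
    """Expected (u1, u2) profile when game 1 occurs with probability p."""
    return tuple(p * a + (1 - p) * b for a, b in zip(x, y))

# Hypothetical Pareto-frontier profiles of the two individual games:
x = (4.0, 1.0)  # a frontier outcome of game 1, favouring agent 1
y = (1.0, 4.0)  # a frontier outcome of game 2, favouring agent 2

print(global_payoff(0.5, x, y))  # (2.5, 2.5): an even-handed global outcome
```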
The philosophical point seems to be the same as in Counterfactual Mugging: you might want to always follow a strategy you'd (want to) choose before obtaining the knowledge you now possess (with that strategy itself being conditional, taking the knowledge you now possess as a parameter), in this case applied to knowledge about which game is being played. In other words, try to respect reflective consistency even if "it's already too late".
P.S.
"Isomorphism" (and "between") seems like a very wrong word to use here. Linear combination of two utilities, perhaps.
I feel like μ isn't the really important part... it's more like μ = A × B, where A encodes the translation from "one util" in U1 to "one util" in U2, and B encodes how much each agent matters in the deal. It seems like A is the bit that remains fairly constant across deals and over short periods of time, while B can be bargained separately for each specific deal.
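A minimal sketch of that decomposition (the function and parameter names are mine, not the parent comment's): the merged objective is U1 + μ·U2 with μ = A·B.

```python
# A sketch of the mu = A * B decomposition described above.

def joint_utility(u1, u2, A, B):
    """Score an outcome (u1, u2) under the merged objective u1 + mu * u2.

    A: exchange rate translating "one util" of U2 into utils of U1,
       assumed fairly stable across deals.
    B: how much agent 2 matters in this particular deal, bargained
       anew each time.
    """
    mu = A * B
    return u1 + mu * u2

# Same exchange rate A, different per-deal bargaining weights B:
print(joint_utility(3.0, 2.0, A=1.5, B=1.0))  # 6.0
print(joint_utility(3.0, 2.0, A=1.5, B=0.5))  # 4.5
```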
If B differs over time, you'll get outcomes that are not Pareto optimal in total. Idealised utility-maximising agents should establish μ once and for all at the beginning; each change in μ is paid for in decreased utility (a numeric sketch follows below).
I'm not claiming that human agents do, or should, behave this way.
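Here is a minimal numeric sketch of that Pareto-suboptimality claim (the feasible sets are invented for illustration): suppose each deal's feasible payoffs are the quarter-disc u1² + u2² ≤ 1 with u1, u2 ≥ 0, so maximising u1 + μ·u2 over its frontier picks the point (1, μ)/√(1 + μ²). Agents who renegotiate μ between two such deals end up strictly dominated by agents who fix μ once.

```python
import math

# A sketch of the claim above (feasible sets invented for illustration):
# each deal's feasible payoffs are the quarter-disc u1^2 + u2^2 <= 1.

def best_outcome(mu):
    """Frontier point maximising u1 + mu * u2 on the unit quarter-circle."""
    norm = math.hypot(1.0, mu)
    return (1.0 / norm, mu / norm)

def total(outcomes):
    """Component-wise sum of per-deal payoffs."""
    return tuple(map(sum, zip(*outcomes)))

# Renegotiating mu between the two deals:
varying = total([best_outcome(0.5), best_outcome(2.0)])
# Fixing mu = 1 once and for all:
fixed = total([best_outcome(1.0), best_outcome(1.0)])

print(varying)  # approx (1.342, 1.342)
print(fixed)    # approx (1.414, 1.414): dominates the varying-mu total
```

Both agents do strictly better under the fixed μ, even though each per-deal outcome under the varying μ was individually Pareto optimal for its own deal.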