Perplexed comments on If you don't know the name of the game, just tell me what I mean to you - Less Wrong
But you don't show this; you simply claim it, without proof or explanation, when you write:
No explanation, no calculations. Maybe if you had written "a quarter carrier plus 3/4 euros" a reader could reconstruct your thinking. (And you don't provide even an example of the "iterated independently" part of the claim. Presumably, you are "iterating" with changes to the game payoffs between iterations.) It is also extremely puzzling that in this posting you say that NBS and KSBS are not Pareto optimal, when in the last posting it seemed that they were Pareto optimal by definition. What has changed?
If you had analyzed this, your posting here might have been clearer and more interesting. You would have pointed out that if our protagonists wait to bargain until the coin has been flipped, so that it is by then known which bargainer has the chance at a carrier, then you get a post-flip Pareto-optimal KSBS or NBS bargain which is not pre-flip Pareto optimal. That is, the numerical meaning of Pareto optimality changes when the coin is flipped. Or, more precisely, the meaning changes when the bargainers learn the result of the coin flip - when their best model of the world changes.
Or, in the local jargon, to use the phrase Pareto-optimal as if it were an objective property of a given bargain is to commit the "mind projection fallacy".
To summarize my disagreement with this claim of yours: you have not shown anything wrong with KSBS or NBS - you have merely shown that the "optimal" bargain depends upon what the bargainers know, and that sometimes what you know hurts you.
Perhaps because my undergrad degree was in economics, this item struck me as so trivial that it didn't seem even worth mentioning. But maybe you are correct that it is worth spelling out in detail. However, even here there are interesting points that you could have made, but didn't.
The first point is that your μ factor, along with U1 and U2, is not a pure real number: these are what a scientist or engineer would call dimensioned quantities. Say that U1 is denominated in apples, and U2 is denominated in oranges. Then it is mathematical nonsense to even try to add U1 to U2 (as naive utilitarianism requires) unless a conversion factor μ is provided (denominated in apples/orange).
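A minimal sketch of this dimensional-analysis point (the `Quantity` class, the unit names, and the exchange rate are all illustrative assumptions, not anything from the post):

```python
# Sketch: utilities measured in different units cannot be added until
# a conversion factor mu (apples per orange) is applied.
class Quantity:
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise TypeError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

U1 = Quantity(3.0, "apples")   # bargainer 1's utility, in apples
U2 = Quantity(5.0, "oranges")  # bargainer 2's utility, in oranges

mu = 0.5  # an arbitrary illustrative rate, in apples per orange
converted = Quantity(mu * U2.value, "apples")  # mu * U2 is now in apples

total = U1 + converted   # fine: both terms are denominated in apples
print(total.value)       # 5.5

try:
    U1 + U2              # the naive utilitarian sum, with no mu
except TypeError as e:
    print(e)             # cannot add apples to oranges
```

The point is just that the "type error" in naive utilitarianism is literal: the sum is undefined until someone supplies μ.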
The second point is that every bargaining solution (including KSBS and NBS) is equivalent to a choice of a μ such that U1 + μU2 is maximized. Your real claim here is an insistence upon dynamic consistency: choose μ before the coin is flipped, you advise, and then stick with that choice even after you know the true state of the world.
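The equivalence can be checked numerically. In this sketch the bargaining set (a quarter-circle frontier with disagreement point (0, 0)) is an assumed toy example, not from the post; it shows that the point maximizing the Nash product also maximizes U1 + μU2 once μ is read off as the slope of the supporting line:

```python
import numpy as np

# Toy bargaining set (illustrative assumption): the Pareto frontier is the
# quarter circle U1^2 + U2^2 = 1, and the disagreement point is d = (0, 0).
theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 100001)
U1, U2 = np.cos(theta), np.sin(theta)

# Nash bargaining solution: maximize the Nash product (U1 - d1)(U2 - d2).
i_nbs = int(np.argmax(U1 * U2))
u1_star, u2_star = U1[i_nbs], U2[i_nbs]

# The same point maximizes the weighted sum U1 + mu*U2, where
# mu = (u1* - d1) / (u2* - d2) comes from the supporting line at the solution.
mu = u1_star / u2_star
i_lin = int(np.argmax(U1 + mu * U2))

print(u1_star, u2_star, mu)  # both utilities ~0.707, mu ~1 by symmetry
print(i_lin == i_nbs)        # True: the NBS restated as a choice of mu
```

So "pick a bargaining solution" and "pick a μ and maximize U1 + μU2" are two descriptions of the same act, which is what makes the dynamic-consistency question well posed.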
And then, if you are familiar with the work of John Rawls, point out that this advice is roughly equivalent to Rawls's "Veil of Ignorance".
Now that might have been interesting.
Another direction you might have gone with this series is to continue the standard textbook development of bargaining theory - first covering Nash's 1953 paper, in which he shows how to select a disagreement point taking into account the credible threats which each bargainer can make against the other, then continuing to the Harsanyi/Selten theory for games with incomplete information, and then on through the modern theory of mechanism design. Smart people have been working hard on these kinds of problems for more than 50 years, so there is little a smart amateur can add unless he first becomes familiar with existing results. My main complaint about your attempt is that you quite clearly are not familiar. This stuff is not rocket science. Papers and tutorials are available online. Go get them.
Actually, this is misstated - misstated as badly as the equivalent claim that in some games it helps to be irrational.
What helps is not being irrational; what helps is being thought irrational (even if the only way to be thought irrational is actually to be irrational).
And in this case, similarly, it is not what you know that hurts you, it is what the other guy knows.
Hmm. Did you see:
"Information Hazards: A Typology of Potential Harms from Knowledge"
...?