Lumifer comments on Cooperating with agents with different ideas of fairness, while resisting exploitation - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (44)
So is the game theory just wrong, then? :-)
No. In this case, game theory says that if both players are using the same logic and each knows the other is, then what I showed above is correct: cooperating is the best choice. That condition, however, does not always hold in reality.
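The "same logic, and they know it" condition can be sketched in a few lines. This is a toy model with assumed prisoner's-dilemma payoffs (the standard ordering T > R > P > S; the exact numbers are illustrative, not from the comment): if both players provably reason identically, only the symmetric outcomes are reachable, and mutual cooperation beats mutual defection.

```python
# Hypothetical prisoner's-dilemma payoffs for the row player.
# Standard ordering: temptation > reward > punishment > sucker.
PAYOFF = {
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # sucker's payoff
    ("D", "C"): 5,  # temptation to defect
    ("D", "D"): 1,  # mutual defection (punishment)
}

def best_symmetric_choice(payoff):
    """If both players use the same logic and know it, they necessarily
    make the same choice, so only the diagonal outcomes (C,C) and (D,D)
    are reachable; pick the action with the better diagonal payoff."""
    return max(["C", "D"], key=lambda action: payoff[(action, action)])

print(best_symmetric_choice(PAYOFF))  # "C": cooperating is the best choice
```

Without the symmetry guarantee, defecting dominates row by row, which is exactly why the condition matters.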
Is it ever the case in reality?
and
It seems so, yes. We don't have absolutely certain frameworks, but we do have contracts that are enforceable by law, and we have strong trust-based networks.
It is worth pointing out that even in fairly sloppy situations, we can still use the "if both people are using the same logic and they know that" rule of thumb. For example, I would never decide to carpool if I thought I could not trust the other person to be on time (though I might frequently be late myself if there were no cost to doing so). When all members of the carpool make this calculation, even a limited amount of evidence that we all agree the calculation makes it worth showing up on time is likely to keep the carpool going; that is, if it works well for two days and on the third day Bob shows up late but has a good excuse and is apologetic, we will probably still be willing to pick Bob up on the fourth day.
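The forgiving-but-not-exploitable carpool rule above can be sketched as a tiny decision function. The history format, the `tolerance` threshold, and the names here are my own assumptions for illustration; the comment only describes the informal rule.

```python
# Toy sketch of the carpool rule of thumb: keep cooperating (picking the
# rider up) as long as lateness is rare or comes with a good excuse.
# The tolerance level and history encoding are assumed, not specified above.

def keep_carpooling(history, tolerance=1):
    """history: one (late, excused) pair of booleans per day.
    Drop the rider only after more than `tolerance` unexcused late days."""
    unexcused = sum(1 for late, excused in history if late and not excused)
    return unexcused <= tolerance

# Two punctual days, then one late-but-apologetic day: still pick Bob up.
bob = [(False, False), (False, False), (True, True)]
print(keep_carpooling(bob))  # True

# Repeated unexcused lateness eventually ends the cooperation.
flake = [(True, False), (True, False)]
print(keep_carpooling(flake))  # False
```

The point of the threshold is the same as in the comment: tolerate noise (a good excuse), but don't reward a strategy of habitual defection.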
[Edit: figured out how to separate two blocks of quoted text.]