This is prompted by Scott's excellent article, Meditations on Moloch.
I might caricature (grossly unfairly) his post like this:
- Map some central problems for humanity onto the tragedy of the commons.
- Game theory says we're doomed.
- Incentives for government employees sometimes don't match the needs of the people.
- This has costs, and those costs help explain why some things that suck, suck.
- Map some central problems for humanity onto the iterated prisoner's dilemma.
- Evolutionary game theory says we're not doomed.
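The "evolutionary game theory says we're not doomed" step can be sketched in a few lines. This is a minimal illustration, not anything from Scott's post: standard prisoner's dilemma payoffs (T=5, R=3, P=1, S=0) and the classic tit-for-tat strategy.

```python
# Standard PD payoffs: (my move, their move) -> my score.
# C = cooperate, D = defect.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then copy the opponent's last move.
    return their_history[-1] if their_history else 'C'

def play(strategy_a, strategy_b, rounds=100):
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

# Two tit-for-tat players settle into mutual cooperation, far above
# the mutual-defection outcome the one-shot analysis predicts.
print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, always_defect))  # (100, 100)
```

In the one-shot game, defection dominates; in the iterated game, conditional cooperators do better against each other than defectors do, which is the whole "not doomed" move.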
This is cooperation. The hard part is jumping out of the bad game and getting the other player to change games with you, not whether better games to play exist.
Moloch has discovered reciprocal altruism, since iterated prisoner's dilemmas are a pretty common feature of the environment. But because Moloch creates adaptation-executors rather than utility maximizers, we fail to cooperate across social, spatial, and temporal distance, even when the payoff matrix stays the same.
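One way to make the "temporal distance" part concrete: model it as a continuation probability w, the chance of meeting the same partner again. This is a standard textbook calculation, not from the post, using the same illustrative payoffs (T=5, R=3, P=1, S=0): tit-for-tat resists defection only when w is high enough.

```python
T, R, P, S = 5, 3, 1, 0  # temptation, reward, punishment, sucker

def value_cooperate(w):
    # Both play tit-for-tat, so cooperate every round:
    # R + w*R + w^2*R + ... = R / (1 - w)
    return R / (1 - w)

def value_defect(w):
    # Defect against tit-for-tat: grab T once, then mutual
    # defection (P per round) forever after.
    return T + w * P / (1 - w)

for w in (0.2, 0.5, 0.9):
    print(f"w={w}: cooperate={value_cooperate(w):.2f}, "
          f"defect={value_defect(w):.2f}")
```

With these payoffs the crossover is at w = 0.5: below it defection pays, above it cooperation does. "Distance" in any of the three senses lowers the effective w, so the same payoff matrix stops supporting cooperation.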
Even if you have an incentive to switch, you have to notice the incentive before it can change your mind. And since many switches require all the players to cooperate and switch at the same time, groups are unlikely to stumble into the better game by accident.
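The "everyone must switch at the same time" problem is a stag hunt: the better game has a higher payoff, but only once enough people are playing it. A minimal sketch, with payoff numbers that are illustrative assumptions rather than anything from the post:

```python
def payoff(my_choice, fraction_switched):
    if my_choice == 'switch':
        # The new game only pays off if enough others play it too.
        return 4 if fraction_switched >= 0.5 else 0
    return 2  # the old game pays the same regardless

def best_response(fraction_switched):
    switch = payoff('switch', fraction_switched)
    stay = payoff('stay', fraction_switched)
    return 'switch' if switch > stay else 'stay'

# Nobody has switched yet, so staying is individually rational:
# "everyone stays" is a stable equilibrium...
print(best_response(0.0))  # 'stay'
# ...even though "everyone switches" is also an equilibrium,
# and pays everyone more (4 > 2).
print(best_response(1.0))  # 'switch'
```

Both all-stay and all-switch are Nash equilibria; no individual deviation moves the group from the worse one to the better one, which is why accidental switches are unlikely.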
Convincing people that the other game is indeed better is hard when evaluating incentives is difficult: add too much complexity and it's easy to imagine that you're hiding something. This is hard to get past, since getting past it requires trust, in a context where we may be correct to distrust people. For example, if only lawyers know enough law to write contracts, they should probably add loopholes that lawyers can find, or at least make contracts complicated enough that only lawyers can understand them, so that you have to keep hiring lawyers to use your contracts. And in fact contracts generally are complicated, full of loopholes, and basically require lawyers to deal with.
Also, most people don't know about Nash equilibria, economics, game theory, etc., and it would be nice to be able to get things done in a world with sub-utopian levels of understanding of incentives. And trying to explain game theory to people as a substep of getting them to switch games runs into the same kind of justified mistrust as the lawyer example: if they don't know game theory, and you're telling them that game theory says you're right, and evaluating arguments is costly and noisy, and they didn't trust you at the start of the interaction, then it's reasonable for them to distrust you even after the explanation, and not switch games.
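That last point can be put in Bayesian terms. Suppose a self-serving expert can sound almost as convincing as an honest one; then hearing a convincing explanation barely shifts your belief. The numbers below are illustrative assumptions, not from the post:

```python
def posterior_honest(prior_honest, p_convincing_if_honest,
                     p_convincing_if_selfish):
    # Bayes' rule, conditioning on "the explanation sounded convincing".
    num = prior_honest * p_convincing_if_honest
    den = num + (1 - prior_honest) * p_convincing_if_selfish
    return num / den

# Start distrustful (30% honest). Honest experts sound convincing
# 90% of the time; self-interested ones manage it 70% of the time.
print(posterior_honest(0.3, 0.9, 0.7))  # about 0.355
```

Because being convincing is only weak evidence of honesty, the listener moves from 30% to roughly 36%, still well short of trusting you, so refusing to switch games remains the reasonable response.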