Luke_A_Somers comments on Robust Cooperation in the Prisoner's Dilemma - Less Wrong

69 Post author: orthonormal 07 June 2013 08:30AM


Comment author: Luke_A_Somers 06 June 2013 07:12:53PM 3 points [-]

It seems to me that TrollBot is sufficiently self-destructive that you are unlikely to encounter it in practice.

I wonder if there are heuristics you can use that would help you not worry too much about those cases.

Comment author: Vaniver 06 June 2013 07:30:47PM 5 points [-]

If you presume that you're living in an iterated prisoner's dilemma with reproduction according to the payoff matrix, then you can argue that the percentage of TrollBots will decline to negligible for almost all diverse starting populations. So one can discuss meaningful optimality in that sense, and demonstrate that PrudentBot does better than FairBot if the population contains CooperateBots.

Comment author: Vaniver 06 June 2013 07:44:38PM *  7 points [-]

Actually, since this is a deterministic setup, you can go one better and consider an 'iterated tournament' as a difference equation in N-dimensional space, where N is the number of strategies you include and the dimensions represent each strategy's proportion of the total population; then you can demonstrate the trajectories the demographics will take from any starting location. There will be a handful of point equilibria, as well as several equilibrium lines (actually, I think, an equilibrium volume, but this depends on which strategies you include), and you can talk about which equilibria are stable or unstable, and decide not to care about strategies that only exist in unstable equilibria. You probably need to require that the population space is seeded with CooperateBot, DefectBot, and possibly FairBot, in order to get neat results.
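The difference-equation setup above can be sketched as discrete-time replicator dynamics over population shares. The pairwise outcome table below is an assumption based on the standard behavior of these bots against one another (TrollBot is omitted because its payoffs depend on which exact definition you use); the simulation itself is illustrative, not from the original post.

```python
BOTS = ["CooperateBot", "DefectBot", "FairBot", "PrudentBot"]

# Assumed outcomes, using the usual PD payoffs:
# (C,C) = 3, (C,D) = 0, (D,C) = 5, (D,D) = 1.
# PAYOFF[i][j] = row player i's score against column player j.
PAYOFF = [
    [3, 0, 3, 0],  # CooperateBot cooperates with everyone
    [5, 1, 1, 1],  # DefectBot defects against everyone
    [3, 1, 3, 3],  # FairBot cooperates except against DefectBot
    [5, 1, 3, 3],  # PrudentBot also defects against CooperateBot
]

def step(shares):
    """One generation: reproduction proportional to average payoff."""
    n = len(shares)
    fitness = [sum(PAYOFF[i][j] * shares[j] for j in range(n))
               for i in range(n)]
    mean = sum(f * x for f, x in zip(fitness, shares))
    return [x * f / mean for x, f in zip(shares, fitness)]

def run(shares, generations=200):
    for _ in range(generations):
        shares = step(shares)
    return dict(zip(BOTS, shares))

final = run([0.25, 0.25, 0.25, 0.25])
```

From a uniform start, CooperateBot and DefectBot shares decay toward zero, and PrudentBot ends with a larger share than FairBot (its fitness exceeds FairBot's by exactly twice the CooperateBot share each generation), illustrating the claim above. The surviving FairBot/PrudentBot mixtures are exactly the equilibrium line mentioned: once CooperateBot and DefectBot vanish, any mix of the two is stationary.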

Comment author: orthonormal 06 June 2013 07:32:21PM 3 points [-]

I wonder that too, but we haven't come up with anything satisfactory on a formal level despite working on it for a while. Does anyone have a good idea?

Comment author: Will_Sawin 07 June 2013 06:39:40PM 6 points [-]

This might be a practical problem rather than a mathematical/philosophical one. Many human beings, for cultural or biological reasons, consider certain strategies in various games of economic interaction unfair in a basically arbitrary manner. If you come across a group of unfamiliar intelligences, you might find that they employ strategies which punish certain strategies for no reason apparent to you. The likelihood of this happening is an empirical/scientific question, not a philosophical one.