JoshuaZ comments on Open thread, Jan. 26 - Feb. 1, 2015 - Less Wrong

6 Post author: Gondolinian 26 January 2015 12:46AM


Comment author: Cube 27 January 2015 09:49:53PM 0 points [-]

I'm looking for a mathematical model of the prisoner's dilemma that results in cooperation. Does anyone know where I can find one?

Comment author: JoshuaZ 27 January 2015 09:53:33PM *  4 points [-]

Can you be more precise? Always cooperating in the prisoner's dilemma is not going to be optimal. Are you thinking of something like where each side is allowed to simulate the other? In that case, see here.
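To see why "each side simulates the other" is trickier than it sounds, here is a minimal sketch (my own toy example, not the construction from the linked post): an agent that cooperates iff a simulation of its opponent cooperates. Two copies of this agent simulating each other would recurse forever, so the sketch caps the recursion depth and assumes a fallback to defection at the cap.

```python
# Toy sketch of simulation-based play in the one-shot prisoner's dilemma.
# "C" = cooperate, "D" = defect. The depth cap and the defect-at-the-cap
# fallback are illustrative assumptions, not part of any standard agent.

def agent(opponent, depth=0, max_depth=10):
    """Cooperate iff a (depth-limited) simulation of the opponent cooperates."""
    if depth >= max_depth:
        return "D"  # simulation bottomed out: fall back to defecting
    return "C" if opponent(agent, depth + 1, max_depth) == "C" else "D"

def cooperate_bot(_, depth=0, max_depth=10):
    return "C"

def defect_bot(_, depth=0, max_depth=10):
    return "D"

print(agent(cooperate_bot))  # "C": the simulation shows cooperation
print(agent(defect_bot))     # "D": the simulation shows defection
print(agent(agent))          # "D": mutual simulation hits the cap and defects
```

The last line is the interesting failure: two identical agents that "should" cooperate end up defecting, because naive simulation never bottoms out in mutual cooperation. That is the gap the provability-based approach in the linked post is designed to close.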

Comment author: Cube 28 January 2015 07:21:17AM 1 point [-]

I'm definitely looking for a system where each agent can see the other, although just simulating doesn't seem robust enough. I don't understand all the terms here, but the gist of it looks as if there isn't a solution that everyone finds satisfactory? As in, there's no agent program that properly matches human intuition?

I would think that the best agent X would cooperate iff (Y cooperates if X cooperates). I didn't see that exactly. I've tried solving it myself, but I'm unsure how to get past the recursive part.

It looks like I may have to do a decent amount of research before I can properly formalize my thoughts on this. Thank you for the link.

Comment author: JoshuaZ 28 January 2015 01:22:43PM 1 point [-]

Essentially this is an attempt to get past the recursion. The key issue is that one can't simply say "X cooperates iff (Y cooperates if X cooperates)", because that condition refers to itself; to make it well-defined one needs to talk about provability of cooperation instead.
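One crude way to see how self-reference can be sidestepped without a full proof search: replace "I can show Y cooperates with me" by a decidable syntactic check. The sketch below is a toy "clique bot" of my own (an assumption-laden stand-in, much weaker than the Löb-based agents in the linked post): agents are represented by their source text, and an agent cooperates iff the opponent's source is identical to its own.

```python
# Toy "clique bot": agents are just strings of source text. Cooperating iff
# the opponent's source equals mine is a decidable (though very incomplete)
# substitute for proving "the opponent cooperates with me". This is an
# illustration, not the provability-based construction from the linked post.

CLIQUE_BOT = "cooperate iff opponent's source equals my source"
DEFECT_BOT = "always defect"

def clique_bot(opponent_src, own_src=CLIQUE_BOT):
    # Textual equality breaks the regress: no simulation or proof needed.
    return "C" if opponent_src == own_src else "D"

print(clique_bot(CLIQUE_BOT))  # "C": two identical copies cooperate
print(clique_bot(DEFECT_BOT))  # "D": anything else gets defection
```

The obvious weakness is that trivially rewritten copies of the same agent fail to cooperate; the provability-based agents fix exactly that, at the cost of needing Löb's theorem to show two copies actually prove each other's cooperation.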

Comment author: BrassLion 28 January 2015 02:02:32AM 0 points [-]

To clarify, the definition of the prisoner's dilemma requires that it be a one-shot game in which defecting generates more utility for the defector than cooperating, no matter what the other player chooses.