DanielLC comments on Other prespective on resolving the Prisoner's dilemma - Less Wrong

11 Post author: Stuart_Armstrong 04 June 2013 04:13PM


Comment author: DanielLC 04 June 2013 06:08:55PM 6 points

They don't have to be known to be impossible, just unlikely. If you're facing someone similar to yourself, it might be that choosing to defect makes it more likely that they defect, and enough so to cancel out any gain you'd have, but you still don't know that they'll defect.

Comment author: Eliezer_Yudkowsky 05 June 2013 06:26:42AM 2 points

Came here to say that, see it's been said. If your actions don't approach the choice you would make given impossibility, as the probability of something approaches (but does not reach) zero, then you must either be assigning infinite utility to something or you must not be maximizing expected utility.
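This continuity argument can be sketched numerically. The payoffs below are assumptions borrowed from the numbers quoted later in this thread (3 for mutual cooperation, 1 for mutual defection, 5 for defecting against a cooperator, 0 for cooperating against a defector); `p_mirror` is the probability that the opponent's choice matches yours.

```python
from fractions import Fraction

# Assumed payoffs (the numbers quoted later in this thread):
CC, DD = 3, 1   # both cooperate / both defect
DC, CD = 5, 0   # you defect while they cooperate / vice versa

def eu_cooperate(p_mirror):
    # Expected utility of cooperating when the opponent mirrors
    # your choice with probability p_mirror.
    return p_mirror * CC + (1 - p_mirror) * CD

def eu_defect(p_mirror):
    return p_mirror * DD + (1 - p_mirror) * DC

# As the off-diagonal probability eps shrinks toward (but never
# reaches) zero, both expected utilities converge to the values they
# take when off-diagonal outcomes are outright impossible, so for a
# finite-utility expected-utility maximizer the choice converges too.
for eps in [Fraction(1, 10), Fraction(1, 1000), Fraction(1, 10**9)]:
    p = 1 - eps
    assert abs(eu_cooperate(p) - CC) == 3 * eps
    assert abs(eu_defect(p) - DD) == 4 * eps
```

If the choice flipped at some probability strictly between these values and zero, the flip would have to be driven by a term that stays finite as the off-diagonal probability vanishes, which is exactly what the comment rules out.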

Comment author: ThisSpaceAvailable 04 June 2013 08:01:03PM 0 points

When you say that choosing to defect might make it more likely that they defect, do you mean that choosing to defect may cause the probability that the other person will defect to go up, or do you mean that the probability of the other player defecting, given that you defected, may be greater than the probability given that you cooperated?

To quote Douglas Adams, "The impossible often has a kind of integrity to it which the merely improbable lacks." If it is impossible to have off-diagonal results, that is a much stronger argument for cooperating than having it be improbable, even if the probability of an on-diagonal result is 99.99%; as long as the possibility exists, one should take it into consideration.

Comment author: Eliezer_Yudkowsky 05 June 2013 06:28:29AM 4 points

If it is impossible to have off-diagonal results, that is a much stronger argument for cooperating than having it be improbable

If the probability is epsilon, then having the probability be zero is only an epsilon stronger argument. If you doubt this let epsilon equal 1/googolplex.
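The "only an epsilon stronger" point can be made exact with the payoffs quoted elsewhere in the thread (assumed: 3/1/5/0): moving the off-diagonal probability from zero to eps weakens the case for cooperating by exactly 7·eps. A literal 1/googolplex (denominator 10^(10^100)) is far too large to construct, so 1/10^100 stands in for it below; the argument is unchanged.

```python
from fractions import Fraction

CC, DD, DC, CD = 3, 1, 5, 0  # assumed payoffs, as quoted in the thread

def coop_advantage(p_off):
    # EU(cooperate) - EU(defect), where p_off is the probability of an
    # off-diagonal outcome (the opponent's choice differs from yours).
    eu_c = (1 - p_off) * CC + p_off * CD
    eu_d = (1 - p_off) * DD + p_off * DC
    return eu_c - eu_d

# Stand-in for 1/googolplex (see note above).
eps = Fraction(1, 10**100)

# "Impossible" beats "improbable" by only 7*eps with these payoffs:
assert coop_advantage(0) - coop_advantage(eps) == 7 * eps
# and cooperating still wins when the probability is merely epsilon:
assert coop_advantage(eps) > 0
```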

Comment author: DanielLC 05 June 2013 03:54:26AM 1 point

I mean the second one. Also, if I said the first one, I would mean the second one. They're the same by the definitions I use. The second one is more clear.

If it is impossible to have off-diagonal results, that is a much stronger argument for cooperating than having it be improbable, even if the probability of an on-diagonal result is 99.99%; as long as the possibility exists, one should take it into consideration.

If the probability of an on-diagonal result is sufficiently high, and the benefit of an off-diagonal one is sufficiently low, that is all that's necessary for it to be worth while to cooperate.
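With the payoff numbers quoted later in this thread (assumed: 3 mutual cooperation, 1 mutual defection, 5 temptation, 0 sucker's payoff), this condition has an exact threshold: cooperating maximizes expected utility whenever the on-diagonal probability exceeds 5/7, so the 99.99% figure mentioned above is far more than enough.

```python
from fractions import Fraction

CC, DD, DC, CD = 3, 1, 5, 0  # assumed payoffs

def should_cooperate(p_diag):
    # True when cooperating strictly maximizes expected utility, given
    # probability p_diag of an on-diagonal (matching) outcome.
    eu_c = p_diag * CC + (1 - p_diag) * CD
    eu_d = p_diag * DD + (1 - p_diag) * DC
    return eu_c > eu_d

# Break-even point: 3p > p + 5(1 - p)  <=>  p > 5/7.
assert not should_cooperate(Fraction(5, 7))      # exactly break-even
assert should_cooperate(Fraction(5, 7) + Fraction(1, 10**6))
assert should_cooperate(Fraction(9999, 10000))   # the 99.99% case
assert not should_cooperate(Fraction(1, 2))
```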

Comment author: Stuart_Armstrong 04 June 2013 07:07:08PM 0 points

Yes. I model "unlikely" as "I likely live in a universe where these outcomes are impossible", but that's just an unimportant difference in perspective.

Comment author: DanielLC 05 June 2013 03:51:40AM 2 points

I likely live in a universe where these outcomes are impossible

What do you mean by "impossible"? If you mean highly unlikely, then you're using recursive probability, which doesn't make a whole lot of sense. If you mean against the laws of physics, then it's false. If you mean that it won't happen, then it's just a longer way of saying that those outcomes are unlikely.

Comment author: Stuart_Armstrong 05 June 2013 11:57:06AM 0 points

it's just a longer way of saying that those outcomes are unlikely.

Comment author: Decius 04 June 2013 09:52:23PM -1 points

What if you are playing with someone and their decision on the current round does not affect your decision in the current round?

If you are known to cooperate because it means that your opponent (who is defined as 'similar to yourself') will also cooperate, then your opponent knows he is choosing between 3 points and 5 points. Being like you, he chooses 3 points.

If you are playing against someone whose decision you determine (or influence), then you choose the square; if the nature of your control prevents you from choosing 5 or 0 points (or makes those very unlikely) but allows you to choose 3 or 1 (or makes one of those very likely), choose 3. However, there is then only one player in that game.

Comment author: DanielLC 05 June 2013 03:56:19AM 0 points

I don't care which way the causal chain points. All I care about is if the decisions correlate.

Also, I'm not sure of most of what you're saying.

Comment author: Decius 05 June 2013 05:55:38AM 0 points

Given the choice between 0 points and 1 point, you would prefer 1 point; given the choice between 3 points and 5 points, you would prefer 3 points. (Consider the case where you are playing a cooperatebot: the choice which correlates is cooperation; against a defectbot, the choice which correlates is defection. There are no other strategies in a single PD without the ability to communicate beforehand.)

Comment author: DanielLC 05 June 2013 10:28:02PM 0 points

Why would you prefer three points to five points? Aren't points just a way of specifying utility? Five points is better than three points by definition.

Comment author: Decius 05 June 2013 11:00:02PM 0 points

Right, which means defectbot is the optimal strategy. However, when playing against someone who is defined to be using the same strategy as you, you get more points by using the other strategy.

It should not be the case that two players independently using the optimal option would score more if the optimal option were different.
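This closing observation can be stated as two checks that pull against each other (payoffs assumed to be the 3/1/5/0 numbers quoted in the comments above):

```python
# Row player's payoff for (my_move, their_move); assumed numbers from
# the comments above.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def best_response(their_move):
    # The move that scores highest against a fixed opponent move.
    return max('CD', key=lambda m: PAYOFF[(m, their_move)])

# Against any fixed opponent (cooperatebot or defectbot) your move
# cannot influence theirs, and defection dominates:
assert best_response('C') == 'D'
assert best_response('D') == 'D'

# Yet two players locked into the same strategy do better when that
# shared strategy is cooperation, which is the tension Decius
# points at:
assert PAYOFF[('C', 'C')] > PAYOFF[('D', 'D')]
```

Both checks pass at once: dominance reasoning treats the opponent's move as fixed, while the "same strategy" framing ties the two moves together, and that is exactly where the thread's disagreement lives.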