I think you may be attacking a straw man here. When I was taught about the PD almost 20 years ago in an undergraduate class, our professor made exactly the same point. If there are enough iterations (even if you know exactly when the game will end), it can be worth the risk to attempt to establish cooperation via Tit-for-Tat. IIRC, it depends on an infinite recursion of your priors on the other guy's priors on your priors, etc. that the other guy will attempt to establish cooperation. You compare this to the expected losses from a defection in the firs...
I think you may be attacking a straw man here.
It frustrates me immensely to see how many times this claim is made in the comments of Eliezer's posts. At least 75% of the time I read this, I have personally encountered someone who made the "straw" claim. In this case, consult the first chapter of Ken Binmore's "Playing for Real".
Wait wait wait: Isn't this the same kind of argument as in the dilemma about "We will execute you within the next week on a day that you won't expect"? (Sorry, don't know the name for that puzzle.) In that one, the argument goes that if it's the last day of the week, the prisoner knows that's the last chance they have to execute him, so he'll expect it, so it can't be that day. But then, if it's the next-to-last day, he knows they can't execute him on the last day, so they have to execute him on that next-to-last day. But then he expects it! And so on.
So, after concluding they can't execute him, they execute him on Wednesday. "Wait! But I concluded you can't do this!" "Good, then you didn't expect it. Problem solved."
Just as in that problem, you can't stably have an "(un)expected execution day", you can't have an "expected future irrelevance" in this one.
Do I get a prize? No? Okay then.
A more realistic model would let the number of iterations be unknown to the players. If the probability that the "meta-game" continues in each stage is high enough, it pays to cooperate.
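As a sketch of that claim (using grim-trigger punishment and standard stage payoffs T > R > P > S, which are my assumptions rather than anything stated above): if δ is the probability that the game continues after each round, cooperating forever beats defecting once and then facing permanent mutual defection whenever

```latex
% Value of cooperating forever vs. defecting once and then facing
% mutual defection, with continuation probability \delta:
\frac{R}{1-\delta} \;\ge\; T + \frac{\delta P}{1-\delta}
\quad\Longleftrightarrow\quad
\delta \;\ge\; \frac{T-R}{T-P}
```

So the more likely the "meta-game" is to continue, the easier this condition is to meet, which is the sense in which it pays to cooperate.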
The conclusion that the only rational thing to do in a 100 stage game with perfectly rational players is to defect is correct, but is an artifact of the fact that the number of stages has been defined precisely, and therefore the players can plan to defect at the last moment (which makes them want to defect progressively earlier and earlier). In the real world, this seems rather unlikely.
Silas,
It's called the Unexpected Hanging Paradox, and I linked to it in my sketch of the solution to the one-off dilemma. I agree, the same problem seems to be at work here, and it's orthogonal to the two-step argument that takes us from mutual cooperation to mutual defection. You need to mark the performance of complete policies established in the model at the start of the experiment, and not reason backwards, justifying actions that could have changed the consequences by the inevitability of those consequences. Again, I'm not quite sure how it all ties together.
What Kevin Dick said.
The benefit to each player from mutual cooperation in a majority of the rounds is much more than the benefit from mutual defection in all rounds. Therefore it makes sense for both players to invest at the beginning, and cooperate, in order to establish each other's trustworthiness.
Tit-for-tat seems like it might be a good strategy in the very early rounds, but as the game goes on, the best reaction to defection might become two defections in response, and in the last rounds, when the other party defects, the best response might be all defections until the end.
No, but I damn well expect you to defect the hundredth time. If he's playing true tit-for-tat, you can exploit that by playing along for a time, but cooperating on the hundredth go can't help you in any way, it will only kill a million people.
Do not kill a million people, please.
Do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?
No. That seems obviously wrong, even if I can't figure out where the error lies.
If it's actually common knowledge that both players are "perfectly rational" then they must do whatever game theory says.
But if the paperclip maximizer knows that we're not perfectly rational (or falsely believes that we're not) it will try and achieve a better score than it could get if we were in fact perfectly rational. It will do this by cooperating, at least for a time.
I think the correct strategy gets profoundly complicated when one side believes the other side is not fully rational.
I THINK rational agents will defect 100 times in a row, or 100 million times in a row for this specified problem. But I think this problem is impossible. In all cases there will be uncertainty about your opponent/partner - you won't know its utility function perfectly, and you won't know how perfectly it's implemented. Heck, you don't know your OWN utility function perfectly, and you know darn well you're implemented somewhat accidentally. Also, there are few real cases where you know precisely when there will be no further games that can be affected b...
Shut up and multiply. Every time you make the wrong choice, 1 million people die. What is your probability that Clippy is going to throw that first C? How did you come to that? You are not allowed to use any version of thinking back from what you would want Clippy to do, or what you would do in its place if you really, I promise, valued only paperclips and not human lives.
You throw a C, Clippy throws a D. People die, 99 rounds to go. You have just shown Clippy that you are at least willing to cooperate. What is your probability that Clippy is going to throw a C next? Ever?
You throw a C, Clippy throws a D. People die, 98 rounds to go. Are you showing Clippy that you want to cooperate, so it can safely cooperate, or are you just an unresponsive player who will keep throwing Cs no matter what he does? And what does it say to you that Clippy has thrown 2 Ds?
Alternate case, round 1: you throw a C, Clippy throws a C. People live, 99 rounds to go. At what point are you planning to start defecting? Do you think Clippy can't work out that logic too? When do you think Clippy is planning to start defecting?
Finitely iterated prisoner's dilemma is just like the traveler's dilemma, on which see this article by Kaushik Basu. The "always defect" choice is always a (in fact, the only) Nash equilibrium and an evolutionarily stable strategy, but it turns out that if you measure how stable it is, it becomes less stable as the number of iterations increases. So if there's some kind of noise or uncertainty (as Dagon points out), cooperation becomes rational.
If you cooperate even once, the common 'knowledge' that you are both classical game theorists is revealed (to all parties) to be false, and your opponent will have to update estimates of your future actions.
Carl - good point.
I shouldn't have conflated perfectly rational agents (if there are such things) with classical game-theorists. Presumably, a perfectly rational agent could make this move for precisely this reason.
Probably the best situation would be if we were so transparently naive that the maximizer could actually verify that we were playing naive tit-for-tat, including on the last round. That way, it would cooperate for 99 rounds. But with it in another universe, I don't see how it can verify anything of the sort.
(By the way, Eliezer, how much communi...
Zubon,
When do you think Clippy is planning to start defecting?
If Clippy decides the same way as I do, then I expect he starts defecting on the same turn as I do. The result is 100x C,C. There is no way that identical deterministic algorithms with the same input can produce different outputs, so in each turn, C,C or D,D are the only possibilities. It's rational to C.
However, "realistic" Clippy uses different algorithm which is unknown to me. Here I genuinely don't know what to do. To have some preference to choose C over D or conversely, I would ...
The backwards reasoning in this problem is the same as is used in the unexpected hanging paradox, and similar to a problem called Guess 2/3 of the Average. This is where a group of players each guess a number between 0 and 100, and the player whose guess is closest to 2/3 of the average of all guesses wins. With thought and some iteration, the rational player can conclude that it is irrational to guess a number greater than (2/3)·100, then (2/3)^2·100, and in general (2/3)^n·100. This has a limit at 0 when n -> ∞, so it is irrational to guess any number greater than zer...
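A quick way to see that unraveling in code (a toy sketch, not part of the original comment):

```python
# Iterated elimination in "Guess 2/3 of the Average": after n rounds of
# reasoning, no guess above (2/3)^n * 100 survives, and the bound goes to 0.
bound = 100.0
for n in range(1, 16):
    bound *= 2.0 / 3.0
    print(f"round {n}: guesses above {bound:.3f} are eliminated")
```

The same backward pressure is what collapses cooperation in the fixed-horizon iterated PD.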
Eliezer: the rationality of defection in these finitely repeated games has come under some fire, and there's a HUGE literature on it. Reading some of the more prominent examples may help you sort out your position on it.
Start here:
Robert Aumann. 1995. "Backward Induction and Common Knowledge of Rationality." Games and Economic Behavior 8:6-19.
Cristina Bicchieri. 1988. "Strategic Behavior and Counterfactuals." Synthese 76:135-169.
Cristina Bicchieri. 1989. "Self-Refuting Theories of Strategic Interaction: A Paradox of Common Knowledge." Erkenntnis 30:69-85.
Ken Binmore. 1987. "Modeling Rational Players I." Economics and Philosophy 3:9-55.
Jon Elster. 1993. "Some unresolved problems in the theory of rational behaviour." Acta Sociologica 36: 179-190.
Philip Reny. 1992. "Rationality in Extensive-Form Games." The Journal of Economic Perspectives 6:103-118.
Philip Pettit and Robert Sugden. 1989. "The Backward Induction Paradox." The Journal of Philosophy 86:169-182.
Brian Skyrms. 1998. "Subjunctive Conditionals and Revealed Preference." Philosophy of Science 65:545-574.
Robert Stalnaker. 1999. "Knowledge, Belief and Counterfactual Reasoning in Games." in Cristina Bicchieri, Richard Jeffrey, and Brian Skyrms, eds., The Logic of Strategy. New York: Oxford University Press.
I've wondered about, and even modeled versions of the fixed horizon IPD in the past. I concluded that so long as the finite horizon number is sufficiently large in the context of the application (100 is large for prison scenarios, tiny for other applications), a proper discounted accounting of future payoffs will restore TFT as an ESS. Axelrod used discounting schemes in various ways in his book(s).
The undiscounted case will always collapse. Recursive collapse to defect is actually rational and a good model for some situations, but you are right, in other ...
prase, Venkat: There is nothing symmetrical about the choices of the two players. One is playing for paperclips, the other for a number of lives. One selects P2.Decision, the other selects P1.Decision. How do you recognize the "symmetry" of decisions, if they are not called by the same name? What makes it the answer in that case?
prase: It's the Two Envelopes Problem.
As Paul says, this is very well trodden ground. Since it hasn't been assumed that we are sure we know how the other party reasons, we might want to invest some early rounds in probing to see how the party thinks.
Eliezer: the rationality of defection in these finitely repeated games has come under some fire, and there's a HUGE literature on it. Reading some of the more prominent examples may help you sort out your position on it.
My position is already sorted, I assure you. I cooperate with the Paperclipper if I think it will one-box on Newcomb's Problem with myself as Omega.
As Paul says, this is very well trodden ground. Since it hasn't been assumed that we are sure we know how the other party reasons, we might want to invest some early rounds in probing to see...
This "perfectly rational" game-theoretic solution seems to be fragile, in that the threshold of "irrationality" necessary to avoid N out of N rounds of defection seems to be shaved successively thinner as N increases from 1.
Also, though I don't remember the details, I believe that slight perturbations in the exact rules may also cause the exact game-theoretic solution to change to something more interesting. Note that adding uncertainty in the exact number of rounds has the effect of removing your induction premise: e.g., a 1% chance of...
"As someone who rejects defection as the inevitable rational solution to both the one-shot PD and the iterated PD, I'm interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD."
... And I'm interested in your justification for potentially not defecting in the one-shot PD.
I see no contradiction in defecting in the one-shot but not iterated. As has been mentioned, as the number of iterations increases the risk to reward ratio ...
I cooperate with the Paperclipper if I think it will one-box on Newcomb's Problem with myself as Omega.
This strategy would apply to the first round. For the iterated game, would you thereafter apply Tit for tat?
I thought the aim is to win, isn't it? Clearly, what's best for both of them is to cooperate at every step. In the case that the paperclipper is something like what most people here think 'rationality' is, it will defect every time, and thus the humans would also defect, leading to less than the best total utility possible.
However, if you think of the Paperclipper as something like us with different terminal values, surely cooperating is best? It knows, as we do, that defecting gives you more if the other cooperates, but defecting is not a winning strategy in the long ...
I cooperate with the Paperclipper if I think it will one-box on Newcomb's Problem with myself as Omega. This strategy would apply to the first round. For the iterated game, would you thereafter apply Tit for tat?
The strategy applies to every round equally, if the Paperclipper is in fact behaving as I expect. If the Paperclipper doesn't behave as I expect, the strategy is unuseful, and I might well switch to Tit for Tat.
"And are you really "exploiting" an "irrational" opponent, if the party "exploited" ends up better off? Wouldn't you end up wishing you were stupider, so you could be exploited - wishing to be unilaterally stupider, regardless of the other party's intelligence? Hence the phrase "regret of rationality"..."
Plus regret of information. In a mixed population of classical decision theory (CDT) agents and Tit-for-Tat (TFT) agents, paired randomly and without common knowledge of one another's types, the CDT agents ...
You didn't say in the post that the other party was "perfectly rational". If we knew that and knew what it meant, of course the answer would be obvious.
I'm interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD.
[...] What if neither party to the IPD thinks there's a realistic chance that the other party is stupid - if they're both superintelligences, say?
It's never worthwhile to cooperate in the one shot case, unless the two players' actions are linked in some Newcomb-esque way.
In the iterated case, if there's even a fairly small chance that the other player will try to establish...
If "rational" actors always defect and only "irrational" actors can establish cooperation and increase their returns, this makes me question the definition of "rational".
However, it seems like the priors of a true prisoner's dilemma are hard to come by (absolutely zero knowledge of the other player and zero communication). Don't we already know more about the paperclip maximizer than the scenario allows? Any superintelligence would understand tit-for-tat playing, and know that other intelligences should understand it as well. ...
Maybe I'm an aberration, but my Introductory Microeconomics professor actually went over this the same way you did regarding the flaw of tit for tat. It confuses me that anyone would teach it differently.
I'm almost seeing shades of Self-PA here, except it's Self-PA that co-operates.
If I assume that the other agent is perfectly rational, and if I further assume that whatever I ultimately choose to do will be perfectly rational (hence Self-PA), then I know that my choice will match that of the paperclip maximizer. Thus, I am now choosing between (D,D) and (C,C), and I of course choose to co-operate.
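A minimal sketch of that argument (the payoff numbers below are hypothetical placeholders, not values from the post): if the two choices are guaranteed to match, the only options on the table are the symmetric outcomes.

```python
# If my move and the paperclipper's move must be identical, I am choosing
# between (C, C) and (D, D) only. Placeholder payoffs: 3 for (C,C), 1 for (D,D).
my_payoff = {("C", "C"): 3, ("D", "D"): 1}
best_outcome = max(my_payoff, key=my_payoff.get)
print(best_outcome)  # ('C', 'C'): mutual cooperation beats mutual defection
```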
V.Nesov: There is nothing symmetrical about the choices of the two players. One is playing for paperclips, the other for a number of lives. One selects P2.Decision, the other selects P1.Decision. How do you recognize the "symmetry" of decisions, if they are not called by the same name?
The decision processes can be isomorphic. We can think about the paperclipper being absolutely the same as we are, except valuing paperclips instead of our values. This of course assumes we can separate the thinking into a "values part" and an "algorithmic part"...
I'm interested in the inconsistency of those who accept defection as the rational equilibrium in the one-shot PD, but find excuses to reject it in the finitely iterated known-horizon PD.
I don't see the inconsistency.
Defect is rational in the one-shot game provided my choice gives me no information about the other player's choice.
In contrast, the backwards induction result also relies on common knowledge of rationality (which, incidentally, seems oddly circular: if I cooperate in the first round, then I demonstrate that I'm not "rational" in the t...
There's a dilemma or a paradox here only if both agents are perfectly rational intelligences. In the case of humans vs aliens, the logical choice would be "cooperate on the first round, and on succeeding rounds do whatever its opponent did last time". The risk of losing the first round (1 million people lost) is worth taking because of the extra 98-99 million people you can potentially save if the other side also cooperates.
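For concreteness, the strategy described above is classic Tit for Tat; here is a minimal sketch (the function name and the history format are mine, not from the comment):

```python
def tit_for_tat(my_history, opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    if not opponent_history:
        return "C"
    return opponent_history[-1]
```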
Decision theory is enough to advise actions - so why do we need game theory? A game theory is really just a theory about the distribution over how other agents think. Given such a distribution, decision theory is enough to tell you what to do. So any simple game theory, one that claimed with certainty that all other agents always think a particular way, must be wrong. Of course sometimes a simple game theory can be good enough - if slight variations from some standard way of thinking doesn't make much difference. But when small variations can make large differences, the only safe game theory is a wide distribution over the many ways other agents might think.
What do you think would happen if Prisoner's Dilemma is framed differently?
Do you think this framing would affect your initial reaction? The general population's?
(The wording of the choices is not very elegant, and I am not sure whether the presentation is sufficiently symmetrical, but you get the basic idea.)
It could be that words such as "prisoner", "prison sentence", "guard" or even "game" and "defect" frame more people to intuitively avoid co-operation.
Does this imply that YOU would one-box Newcomb's offer with Clippy as Omega? And that you think at least some Clippies would take just one box with you as Omega?
For the problem as stated, what probability would you assign to Clippy's cooperation (in both the one-shot and the fixed-iteration case, if they're different)?
What is the point in talking about 'bias' and 'rationality' when you cannot even agree what those words mean?
What would a rational entity do in an Iterated Prisoner's Dilemma? Do any of you have something substantive to say about that question, or is it all just speculation and assertion?
Mike Blume: I'm almost seeing shades of Self-PA here, except it's Self-PA that co-operates.
+1 Perceptive to Blume!
Mikko, your poll is not the Prisoner's Dilemma - part of the payoff matrix is reversed.
Eliezer: I cooperate with the Paperclipper if I think it will one-box on Newcomb's Problem with myself as Omega.
Isn't that tantamount to Clippit believing you to be omnipotent though? If I thought my co-player was omnipotent I'm pretty certain I'd be cooperating.
Or are you just looking for a co-player who shuts up and calculates/chooses straight? In which case, good heuristic I suppose.
But Eliezer, you can't assume that Clippy uses the same decision-making process that you do unless you know that you both unfold from the same program with different utility functions, or something like that. If you had the code that unfolds into Clippy, and Clippy had the code that unfolds into you, you might be able to look at Clippy's code and see that Clippy defects if his model of you defects regardless of what he does, and cooperates if his model of you cooperates if your model of him cooperates. But you don't have his code. You can't say much about all possible minds, or about all possible paperclip-maximizing minds.
And are you really "exploiting" an "irrational" opponent, if the party "exploited" ends up better off? Wouldn't you end up wishing you were stupider, so you could be exploited - wishing to be unilaterally stupider, regardless of the other party's intelligence? Hence the phrase "regret of rationality"...
Eliezer, you are putting words in your opponents' mouths, then criticizing their terminology.
"Rationality" is I think a well-defined term in game theory, it doesn't mean the same thing as "smart". I...
Mike:
We don't need to assume that Clippy uses the same decision process as us. I might suggest we treat Clippy as a causal decision theorist who has an accurate model of us. Then we ask which (self, outside_model_of_self) pair we should choose to maximize our utility, constrained by outside_model_of_self = self. In this scenario TFT looks pretty good.
Regret of rationality in games isn't a mysterious phenomenon. Let's suppose that after the one round of PD we're going to play I have the power to destroy a billion paperclips at the cost of one human life, and Clippy knows that. If Clippy thinks I'm a rational outcome-maximizer, then he knows that whatever threats I make I'm not going to carry out, because they won't have any payoffs when the time comes. But if it thinks I'm prone to irrational emotional reactions, it might conclude I'll carry out my billion-paperclip threat if it defects, and so cooperate.
Actually, that's (nearly) equivalent to asking if you would defect in the non-iterated game, and you've said you would not given a one-boxing Clippy.
Pete, if you do that then being a causal decision theorist won't, you know, actually Win in the one shot case. Note that evolution doesn't produce organisms that cooperate in one shot prisoners dilemmas.
I propose the following solution as optimal. It is based on two assumptions.
We'll call the two sides Agent 1 (Humanity) and Agent 2 (Clippy).
Assumption 1: Agent 1 knows that Agent 2 is logical and will use logic to decide how to act, and vice versa.
This assumption simply means that we do not expect Clippy to be extremely stupid or randomly pick a choice every time. If that were the case, a better strategy would be to "outsmart" him or find a statistical solution.
Assumption 2: Both agents know each other's ultimate goal/optimization target...
George: It is trivial to construct scenarios in which being known to be "rational" in the game theory sense is harmful, but in all such cases it is being known to be rational which is harmful, not rationality itself.
Yes, but if you can affect what others know about you by actually ceasing to be "rational", and it will be profitable, persisting in being "rational" is harmful.
Yes, but if you can affect what others know about you by actually ceasing to be "rational", and it will be profitable, persisting in being "rational" is harmful.
So it can be irrational to be rational, and rational to be irrational? Hmm. I think you might want to say, rather, that an element of unpredictability (ceasing to be predictable) would be called for in this situation, rather than "irrationality". Of course, that leads to suboptimality in some formal sense, but it wins.
Change the problem and you change the solution.
If we assume that Eli and Clippy are both essentially self-modifying programs capable of verifiably publishing their own source codes, then indeed they can cooperate:
Eli modifies his own source code in such a way that he assures Clippy that his cooperation is contingent on Clippy's revealing his own source code and on that source code fulfilling certain criteria; Clippy then modifies his source code appropriately and publishes it.
Now each knows the other will cooperate.
But I think that although we in some ways resem...
Mike:
Ah, I guess I wasn't looking at what you were replying to. I was thinking of a fixed number of iterations, but more than one.
I think you guys are calculating too much and talking too much.
Regardless of the "intelligence" of a PM, in my world that is a pretty stupid thing to do. I would expect such a "stupid" agent to do chaotic things indeed evil things. Things I could not predict and things I could not understand.
In an interaction with a PM I would not expect to win, regardless of how clever and intelligent I am. Maybe they only want to make paperclips (and play with puppies), but such an agent will destroy my world.
I have worked with such PM's.
I would never voluntarily choose to interact with them.
Marshall, I think that's a bit of a cop-out. People's lives are at stake here and you have to do something. If nothing else, you can simply choose to play defect; worst case, the PM does the same, and you save a billion lives (in the first scenario). Are you going to phone up a billion mothers and tell them you let their children die so as not to deal with a character you found unsavory? The problem's phrased the way it is to take that option entirely off the table.
Yes, it will do evil things, if you want to put it that way. Your car will do evil things...
Marshall, I think that's a bit of a cop-out.
Why wouldn't a PM cheat? Why would it ever remain inside the frame of the game?
Would two so radically different agents even recognize the same pay-off frame?
"The different one" will have different pay-offs - and I will never know them and am unlikely to benefit fra any of them.
In my world a PM is chaotic, just as I am chaotic in his. Thus we are each other's enemy and must hide from the other.
No interaction because otherwise the number of crying mothers and car dealerships will always be higher.
Hi all,
(First comment here. Please tell me if I do something stupid.)
So, I've been trying to follow along at home and figure out how to formulate a theory that would allow us to formalize and justify the intuition that we should cooperate with Clippy "if that is the only way to get Clippy to cooperate with us" (even in a non-iterated PD). I've run into problems with both the formalizing and the justifying part (sigh), but at least I've learned some lessons along the way that were not obvious to me from the posts I've read here so far. (How's that...
Asking how a "rational" agent reasons about the actions of another "rational" agent is analogous to asking whether a formal logic can prove statements about that logic. I suggest you look into the extensive literature on completeness, incompleteness, and hierarchies of logics. It may be that there are situations such that it is impossible for a "rational" agent to prove what another, equally-rational agent will conclude in that situation.
I'm sure most people here are aware of Axelrod's classic "experiment" with an Iterated Prisoner's Dilemma tournament, in which experts from around the world were invited to submit any strategy they liked, with the strategy which scored the highest over several rounds with each of the other strategies winning, and in which Tit for Tat came out top (Tit for Two Tats winning a later rerun). Axelrod's original experiment was fixed-horizon, and every single "nice" strategy (never defect first) that was entered finished above every single "...
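For readers who want to poke at this themselves, here is a toy round-robin in the spirit of Axelrod's tournament. It is only a sketch: the two strategies, the 100-round horizon, and the textbook payoff values (T=5, R=3, P=1, S=0) are my assumptions, not Axelrod's exact setup.

```python
# Toy round-robin: every strategy plays every strategy (including itself)
# for a fixed number of rounds, and we total each strategy's own score.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(my_history, opponent_history):
    return "D"

def tit_for_tat(my_history, opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def play_match(strategy_a, strategy_b, rounds=100):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        payoff_a, payoff_b = PAYOFF[(move_a, move_b)]
        score_a += payoff_a
        score_b += payoff_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

strategies = {"AllD": always_defect, "TFT": tit_for_tat}
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        score_a, _ = play_match(strat_a, strat_b)
        totals[name_a] += score_a
print(totals)  # TFT's total comes out well ahead of AllD's once mutual cooperation is on the table
```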
Michael Vassar: Note that evolution doesn't produce organisms that cooperate in one shot prisoners dilemmas.
I put myself forwards as counter-evidence.
I put myself forwards as counter-evidence.
I put forward all organisms that have evolved to thrive in multiply-iterated prisoner's dilemma scenarios, but not to distinguish single iterations from multiple iterations.
Which is pretty much every organism with a capacity for altruism.
Benja: This breaks the implicit decision theoretic premise that your payoff depends only on the action you choose, not on the process you use to arrive at that choice.
Correct! The next step in the argument, if you were going to formulate my timeless decision theory, is to describe a new class of games in which your payoff depends only on the type of decision that you make or on the types of decision that you make in different situations, being the person that you are. The former class includes Newcomb's Problem; the latter class further includes the co...
Hi. Found the site about a week ago. I read the TDT paper and was intrigued enough to start poring through Eliezer's old posts. I've been working my way through the sequences and following backlinks. The material on rationality has helped me reconstruct my brain after a Halt, Melt and Catch Fire event. Good stuff.
I observe that comments on old posts are welcome, and I notice no one has yet come back to this post with the full formal solution for this dilemma since the publication of TDT. So here it is.
Whatever our opponent's decision algorithm may be...
Why is this different in scenarios where you don't know how many rounds will occur?
So long as it's a finite number, then defection would appear rational to the type of person who would defect in a non-iterated instance.
In a 100-round game, one could precommit to play tit for tat no matter what (including cooperating on the 100th round if the opponent cooperated on the 99th). The opponent will do slightly better than oneself by cooperating for 99 rounds and defecting on the 100th, but this is still better than if I had chosen to defect on the 100th round, as my opponent would have seen my precommitment to be non-genuine and defected on the 99th round (and maybe even earlier). If I could have the paperclip maximizer use this strategy while I get to cooperate 99 times and defect once, that would be even better... but it won't happen. Oh well, I'll take 99 (C, C)s and 1 (C, D).
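A quick arithmetic illustration of that comparison (using placeholder per-round payoffs R=3, S=0, P=1 rather than the lives-per-round values in the post, and assuming the unraveling only reaches the last two rounds):

```python
R, S, P = 3, 0, 1  # reward, sucker, punishment payoffs (placeholder values)

genuine_precommit = 99 * R + S   # 99 (C,C) rounds, then I cooperate into a final defection
non_genuine = 98 * R + 2 * P     # opponent anticipates my last-round defection,
                                 # so the final two rounds become mutual defection
print(genuine_precommit, non_genuine)  # 297 vs 296: the genuine precommitment wins, barely
```

Each further round that unravels widens the gap by another R minus P, which is the sense in which earlier defection would cost even more.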
I am a dedicated Paperclipper. Ask anyone who knows me well enough to have seen me in a Staples!
As such, I use my lack of human arrogance and postulate that at least some of the entities playing the IPD have intelligence on the order of my own. I do not understand what they are playing for, "1 million human lives" means virtually nothing to me, especially in comparison to a precious precious paperclip, but I assume by hypothesis that the other parties are playing a game similar enough to my own that we can communicate and come to an arrangement.
N...
Got me to register, this one. I was curious about my own reaction, here.
See, I took in the problem, thought for a moment about game theory and such, but I am not proficient in game theory. I haven't read much of it. I barely know the very basics. And many other people can do that sort of thinking much better than I can.
I took a different angle, because it should all add up to normality. I want to save human lives here. For me, the first instinct on what to do would be to cooperate on the first iteration, then cooperate on the second regardless of whether ...
[I realize that I missed the train and probably very few people will read this, but here goes]
So in the non-iterated prisoner's dilemma, defect is a dominant strategy. No matter what the opponent is doing, defecting will always give you the best possible outcome. In the iterated prisoner's dilemma, there is no longer a dominant strategy. If my opponent is playing Tit-for-Tat, I get the best outcome by cooperating in all rounds but the last. If my opponent ignores what I do, I get the best outcome by always defecting. It is true that always-defect is the unique Nash ...
Followup to: The True Prisoner's Dilemma
For everyone who thought that the rational choice in yesterday's True Prisoner's Dilemma was to defect, a follow-up dilemma:
Suppose that the dilemma was not one-shot, but was rather to be repeated exactly 100 times, where for each round, the payoff matrix looks like this:
As most of you probably know, the king of the classical iterated Prisoner's Dilemma is Tit for Tat, which cooperates on the first round, and on succeeding rounds does whatever its opponent did last time. But what most of you may not realize is that, if you know when the iteration will stop, Tit for Tat is - according to classical game theory - irrational.
Why? Consider the 100th round. On the 100th round, there will be no future iterations, no chance to retaliate against the other player for defection. Both of you know this, so the game reduces to the one-shot Prisoner's Dilemma. Since you are both classical game theorists, you both defect.
Now consider the 99th round. Both of you know that you will both defect in the 100th round, regardless of what either of you do in the 99th round. So you both know that your future payoff doesn't depend on your current action, only your current payoff. You are both classical game theorists. So you both defect.
Now consider the 98th round...
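As a sketch of that induction in code (the payoff values T=5, R=3, P=1, S=0 are standard textbook numbers, since the matrix above isn't reproduced here): defection strictly dominates in the stage game, and once future play is pinned at mutual defection, each earlier round reduces to the one-shot game.

```python
# Row player's stage-game payoff under standard PD values (an assumption here).
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def stage_best_move():
    # D strictly dominates C: it pays more whatever the opponent does.
    assert all(PAYOFF[("D", theirs)] > PAYOFF[("C", theirs)] for theirs in ("C", "D"))
    return "D"

# Backward induction: with round 100's play fixed, round 99 is effectively
# one-shot, and so on back to round 1, so the dominant move propagates backward.
plan = {round_number: stage_best_move() for round_number in range(100, 0, -1)}
print(set(plan.values()))  # {'D'}: the classical prescription is to defect every round
```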
With humanity and the Paperclipper facing 100 rounds of the iterated Prisoner's Dilemma, do you really truly think that the rational thing for both parties to do, is steadily defect against each other for the next 100 rounds?