I've had a thought that I don't recall having encountered described quite this way before; but, given my past experiences with such thoughts, and the fact that it involves evo-psych, I currently peg my confidence in this idea at around 10%. But just in case this particular idea rose to my attention, out of all the other possible ideas that didn't, for a reason, I'll post it here.

 

One of the simpler analyses of the Prisoner's Dilemma points out that if you know that the round you're facing is the last round, then there's no reason not to defect; your choice no longer has any influence over future rounds, and whatever your opponent does, you gain a higher score by defecting on this particular round than by cooperating. Thus, any rational algorithm which is attempting to maximize its score, and can identify which round is the last round, will gain a higher score by adding a codicil to defect on the last round.

Expanding that idea implies that if such a "rational" algorithm is facing other seemingly rational algorithms, it will assume that they will also defect on the last round; and thus, such an algorithm faced with the /second/-last round will be able to assume that its actions will have no influence on the actions of the last round; and, by a similar logic, will choose to defect on the second-last round; and the third-last; and so forth. In fact, if the whole game has a maximum length, then this chain of logic applies, leading to programs that are, in effect, always-defect. Cooperative strategies such as tit-for-tat thus tend to arise when the competing algorithms lack a particular piece of information: the length of the game they are playing.
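For concreteness, here's a minimal sketch (in Python) of that unraveling in the smallest interesting case: a game known to last exactly two rounds. The 3/2/1/0 payoff values are just one illustrative matrix of my own choosing. It enumerates every pure strategy pair and confirms that every equilibrium already defects on the very first round:

```python
from itertools import product

# Illustrative stage-game payoffs to the row player: T=3 > R=2 > P=1 > S=0.
PAY = {('C', 'C'): 2, ('C', 'D'): 0, ('D', 'C'): 3, ('D', 'D'): 1}

# A pure strategy for a known two-round game: (first move,
# second move if opponent cooperated, second move if opponent defected).
STRATS = list(product('CD', repeat=3))

def score(s, t):
    """Total payoff to a player using s against a player using t."""
    m1, n1 = s[0], t[0]
    m2 = s[1] if n1 == 'C' else s[2]
    n2 = t[1] if m1 == 'C' else t[2]
    return PAY[(m1, n1)] + PAY[(m2, n2)]

def is_nash(s, t):
    """Neither player can gain by unilaterally switching strategies."""
    return (all(score(s, t) >= score(s2, t) for s2 in STRATS) and
            all(score(t, s) >= score(t2, s) for t2 in STRATS))

# Every pure-strategy equilibrium plays D on the first round: the
# last-round logic has unraveled all the way back to the start.
for s, t in product(STRATS, repeat=2):
    if is_nash(s, t):
        assert s[0] == 'D' and t[0] == 'D'
```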

 

Depending on where a person is born and lives (and various other details), they have roughly a fifty percent chance of living to 80 years of age, a one-in-a-million chance of making it to 100 years, and, using Laplace's sunrise formula, somewhere under one-in-a-hundred-billion odds of making it to 130 years. If a person assumes that their death is the end of them, then they have a very good idea of what their maximum lifespan will be; and, depending on how rational they are, they could follow a similar line of reasoning to the above and plan their actions around an "always defect" style of morality. (E.g., stealing whenever the profit outweighs the risk times the punishment.)
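(To unpack that last figure, on my reading that the sunrise formula means Laplace's rule of succession, and taking the roughly 10^11 humans who have ever lived as the trials:)

```latex
% Laplace's rule of succession: after s successes in n trials, the
% estimated probability of success on the next trial is (s+1)/(n+2).
% With s = 0 verified 130-year-olds among n ~ 10^11 humans ever born:
\[
  P(\text{live to } 130) \approx \frac{0 + 1}{10^{11} + 2} \approx 10^{-11}
\]
```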

However, introducing even an extremely vague concept of an afterlife, even if it's only that some form of individuality survives and can continue to interact with someone, means that there is no surety about when the 'game' will end - and, thus, can nudge people to act cooperatively, even when there is no physical chance of getting caught at defecting. Should this general approach spread widely enough, then further refinements could be made which increase cooperative behaviour further, such as reports on what the scoring system of the afterlife portion of the 'game' is; thus increasing in-group cooperative behaviour yet further.

Interestingly, this seems to apply whether the post-mortal afterlife is supernatural in nature, takes the form of a near-term technological singularity, or is just a cryonicist's estimated 5% chance of revival within a millennium.

 

What I would like to try to find out is which shapes of lifespan estimation lead to which forms of PD algorithm predominating. For example, a game with a 50% chance of continuing on any turn after turn 100, versus one with a 95% chance every turn, versus one with a straight 5% chance of being effectively infinite. If anyone reading this already has a set of software allowing for customized PD tournaments, I'd like to get in touch. Anyone else, I'd like whatever constructive criticism you can offer, from any previous descriptions of this - preferably with hard figures and numbers backing them up - to improvements that bring the general concept more into line with reality.
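In case it helps to make the question concrete, here's the rough skeleton of the kind of tournament I have in mind - a hypothetical sketch, with stand-in payoffs and only two sample strategies, where the match length is governed by a per-turn continuation probability:

```python
import random

# Illustrative payoffs: (my score, their score) for each pair of moves.
PAYOFF = {('C', 'C'): (2, 2), ('C', 'D'): (0, 3),
          ('D', 'C'): (3, 0), ('D', 'D'): (1, 1)}

def play_match(strat_a, strat_b, p_continue, rng):
    """Play one match; after every turn, continue with probability p_continue.
    A strategy is a function from (my_history, their_history) to 'C' or 'D'."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    while True:
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
        if rng.random() >= p_continue:
            return score_a, score_b

def tit_for_tat(mine, theirs):
    return theirs[-1] if theirs else 'C'

def always_defect(mine, theirs):
    return 'D'

# Compare average scores under different lifespan shapes:
for p in (0.05, 0.5, 0.95):
    results = [play_match(tit_for_tat, always_defect, p, random.Random(seed))
               for seed in range(1000)]
    avg_tft = sum(a for a, _ in results) / len(results)
    avg_alld = sum(b for _, b in results) / len(results)
    print(p, avg_tft, avg_alld)
```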

69 comments

Interestingly, this seems to apply whether the post-mortal afterlife is supernatural in nature, takes the form of a near-term technological singularity, or is just a cryonicist's estimated 5% chance of revival within a millennium.

It also applies when extending reputation systems across multigenerational lineages, even in the absence of any expectation of continuing individual identity.

That is, if brains are constructed so as to value the future expected success of family members, and also constructed so as to judge one another in part by the past acts of family members, we should anticipate greater last-turn cooperation in cases where family members exist.

Ah, now that's a lovely wrinkle I hadn't thought of, which could easily overwhelm the effect described in the root post here, and which I would at least need to think seriously about.

(My not having thought of it may be from the same sort of blind spot that arises from psychological studies tending to be performed on Amero/European college students...)

If you have any insights as to why my comment elicited this positive/accepting response, when ChristianKl's comment (which seems to me substantially overlapping) didn't, I would be interested in them. (To a lesser extent, I have the same question about shminux's comment, but it's clearer to me how someone could fail to connect the concept of "honor" with the concept of intergenerational reputation, despite those two ideas being intimately connected in my head).

If you have any insights as to why my comment elicited this positive/accepting response, when ChristianKl's comment (which seems to me substantially overlapping) didn't, I would be interested in them.

When I read the post, I was planning to respond with:

Of course, there is an obvious, physically real afterlife: one's descendants.

Then I saw that you had already pointed that out. ChristianKl's comment doesn't point that out, which is why I would have responded positively to yours and not to his. (Why do some individuals care about the state of the future after they die?)

theirs

Just for the record, I'm male. Christian is my first name and Kl the first two letters of my last name.

Edited.

Reputations that last inter-generationally can apply and modify behaviours (even including kids trying to get their parents to act better) even if no individual cares about what happens after they-in-particular happen to die.

When shminux mentioned 'honor', my thoughts were more along the lines of an internally-generated code of conduct (eg, "What you are in the dark") than an externally-enforced one; perhaps describable as honne rather than tatemae and giri.

Shmi

internally-generated code of conduct

Yes, more of a self-respect thing, like not shoplifting even when there is no danger of getting caught. I suppose the word "honor" is too ambiguous.

I took a game theory course in undergrad. It wasn't even designed for mathematicians; it was an econ class. We showed how to calculate, given a prisoner's dilemma matrix, exactly what probability of playing again each round changes the equilibrium strategy from defect to tit-for-tat. I could explain this if you would like, but my point is that this has been done, even in an undergrad course.

I am not sure what you mean by "effectively", but the 5% chance of being infinite does not make any sense. You can't have an infinitely repeated prisoner's dilemma.

I would love to learn what your undergrad course taught you; if you'd rather point me at a resource to read than explain it, that would be fine.

The 5% thing could be closely approximated by taking a million-turn game, averaging the score to a typical per-round value, and comparing it to other 'infinite' games, to see which has the best long-term life enjoyment.

The million-turn game encourages taking as long as you need to figure out what code the opponent is likely running, then figuring out how to exploit it, thus gaining the maximum benefit for the supermajority of the rounds. There will not be a "typical" per-round value.

I think walking you through an example would be easier for me than finding a source. Imagine the matrix is 2,2 when both cooperate, 1,1 when both defect, and 3,0 when one defects (3 to the defector, 0 to the cooperator).

Let's say you repeat the game with probability p each round. We want to determine if both players playing tit for tat is an equilibrium strategy. So, we will assume that both players play tit for tat, and see if they have any incentive for changing.

If we both play tit for tat, our expected outcome is 2+2p+2p^2+... = 2/(1-p). If we were to want to change our strategy, we would have to do so by defecting at some round. Without loss of generality, assume we do so on the first round. If we defect on only the first round, our payoff would be 3+0p+2p^2+2p^3+..., which loses us 2p-1 points. As long as p>1/2, this is a bad idea. The same is true for every round: every time you add a defect, you have a probability-p chance that in the next round the opponent punishes you for twice as much as you gained. So if p>1/2, there is no incentive for defecting.

If p<1/2, both players playing tit for tat is not an equilibrium. However, both players playing grim trigger still might be (cooperate until your opponent defects once, then always defect).
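(If it helps to see the 2p-1 gap numerically, here's a quick sketch truncating the two series far enough out that the tails cancel exactly:)

```python
# Expected totals from the example above (payoffs 3, 2, 1, 0), truncated
# after enough terms that the geometric tails are negligible.
def tft_vs_tft(p, terms=200):
    return sum(2 * p**k for k in range(terms))              # 2/(1-p)

def defect_first_round(p, terms=200):
    return 3 + 0*p + sum(2 * p**k for k in range(2, terms))

for p in (0.4, 0.5, 0.6):
    print(p, round(tft_vs_tft(p) - defect_first_round(p), 6))  # prints 2p - 1
```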

satt

both players playing grim trigger

Reminds me of the folk theorem in game theory, which looks like it may apply to games that repeat with probability p if p's high enough. (I'm no game theorist; it's just a hunch. My thinking here is that p is like a discount factor, and the theorem still works even with a discount factor, if the discount factor's high enough.) If so, a strong & pervasive enough belief in an afterlife might enable all sorts of equilibria in more complicated games.

p is exactly like a discount factor.

Yes, if everyone believes that they get huge payoff in the afterlife for using strategy X, then everyone using strategy X is an equilibrium. This is exactly how many religions work.

To be sure that I understand it - by having p set to 1/2, you're referring to there being at least a 50% chance that there'll be at least one more round of the game?

If so, I'm somewhat surprised that the odds which make tit-for-tat a winning strategy are such a simple number, which implies that I didn't understand the underlying aspects of PD strategy as well as I should. I'm going to have to think a bit more about them.

Yes. Every round you stop playing with probability 1/2 and continue playing with probability 1/2.

The answer 1/2 is a function of the PD matrix I chose. If you choose a different matrix, you will get a different number.
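(To make the dependence explicit: writing the matrix as temptation T, reward R, punishment P, and sucker's payoff S, the same one-round-deviation argument gives:)

```latex
% Deviating against tit-for-tat for one round gains (T - R) now and
% loses (R - S) on the next round with probability p, so deviation is
% unprofitable exactly when
\[
  (T - R) + p\,(S - R) \le 0
  \quad\Longleftrightarrow\quad
  p \ge \frac{T - R}{R - S}.
\]
% With T = 3, R = 2, S = 0 this is (3-2)/(2-0) = 1/2.
```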

After a night of thought: if I'm reading this right, then your described method of discounting only considers a single future round's 50% probability. But if we extend it to a 25% chance of two future rounds, and 12.5% for three rounds, and so forth, doesn't that converge on a total of 100% for all future rounds summed up?

I think you are confused.

All you are saying is that if each round you have a 1/2 chance of playing the next round, then the game will last exactly 1 round with probability 1/2, exactly 2 rounds with probability 1/4, exactly 3 rounds with probability 1/8 and so on. Of course this adds up to one, since it is a probability distribution on all possible lengths of the game, and probability distributions always sum to one. The fact that it sums to 1 has nothing to do with game theory.

It's the more-than-one-round calculation that I'm currently trying to wrap my brain around, rather than the sum of a series of halves adding to one. If there's a 1/3 chance of each round continuing, then that also adds up, with 1/9 of the second round's value, and 1/27 of the third's, and so on - it doesn't add up to one, but it does add up to more than 1/3. Ditto if there's a 3/4 chance of a next round, or a 99% chance.

In the p=1/3 case, there is a 2/3 chance of lasting exactly 1 round, 2/9 of lasting exactly 2 rounds, 2/27 three rounds. This does add up to 1. It will always add up to 1.

We seem to be talking past each other. Yes, the total odds add up to 100%; but the sum of how important each individual round is, differs.

Let's say that the factor is 2/3. Then the first round contributes 2/3 of its nominal score to the expected value; the second round contributes (2/3)^2 = 4/9 of its score; and already that adds up to more than 1 - meaning that the effects of future rounds are more likely to outweigh the benefits of a defection-based strategy.

Okay, I understand the issue now, I think. So, summing up the effect of all the future rounds in exactly the way you are describing is something you would do to determine if grim trigger is an equilibrium strategy. (If you defect now, you get punished in ALL future rounds.) However, in tit for tat, your punishment for defecting only lasts for 1 round, so you don't have to add all that up.
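(Symbolically, with T, R, P, S as the temptation, reward, punishment, and sucker's payoffs: conforming to grim trigger earns R forever, while a single defection earns T once and P in every surviving round afterwards - which is exactly the all-future-rounds sum you were computing:)

```latex
% Grim trigger is an equilibrium when conforming beats a one-shot defection:
\[
  \underbrace{\frac{R}{1-p}}_{\text{conform}}
  \;\ge\;
  \underbrace{T + \frac{p\,P}{1-p}}_{\text{defect once}}
  \quad\Longleftrightarrow\quad
  p \ge \frac{T - R}{T - P}.
\]
% With the 3, 2, 1, 0 matrix this also happens to come out to 1/2;
% for other matrices the tit-for-tat and grim-trigger thresholds differ.
```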

Hm. Seems nobody has pointed out yet that afterlife memes have negative effects as much as positive ones. The afterlife has been used to justify sacrifice for all manner of earthly causes - the unavoidable example here being suicide bombers, who might at least partly be motivated by heavenly rewards. Thus afterlife memes might just increase the spread of outcomes (which decreases stability).

Shmi

It seems that your use of afterlife is to encourage the precommitment to cooperate or tit-for-tat (i.e. to "behave morally", depending on your moral system). Another non-consequentialist way to do so is the concept of honor as a virtue. I'm sure there are other ways, too.

What you called "behave morally", I tend to think of in PD terms as 'being nice': not being the first to defect.

As a first thought, using honor as a virtue seems to be a way of replacing the ordinary set of rewards with a new scoring system - that is, valuing the honor of not being a thief over stealing a bunch of gold coins from an unlocked chest. I'm not entirely sure how to look at that in the evo-psych manner - how such an idea would arise, spread, and develop over time - but it seems like a workable alternative, for whatever portion of the population can be convinced that being honorable is more important than the rewards from dishonorable behaviour.

Shmi

I suspect that honor-like traits evolve in pack animals, like a wolf who won a dominance fight being unable to attack the loser in a submissive position [citation needed].

A lot of people care about how the world looks after they die even if they don't believe in an afterlife.

A lot, yes; but a lot don't, either.

Also, present-day society is the result of millennia of previous societies which all included, at least implicitly, a supernatural afterlife-belief; likely going all the way back to Neanderthals burying their dead with red ochre. This leads to a somewhat different environment in which people develop their preferences than the environment in which an afterlife-belief originally formed, or developed its complications.

I do not think that there are a lot of people who do not care about the state of the world after they die. I think there are people who think they don't care because they believe that they are more selfish than they are.

I think the average Neanderthal did care about the fate of their children even after the point at which they themselves died.

If you were right, most Muslim suicide bombers would be seniors.

I think people become less rational and more predictable as their fluid intelligence goes down with age.

Suicide attacks are an extreme end of a spectrum, which also includes 'brave young warriors posturing against the tribal enemy to show how fierce they are, and make them back down' and 'going off to war'. Young men are, in many respects and to a certain degree, a disposable resource - you don't /need/ that many men to keep your in-group's population high (especially if your culture explicitly allows for polygyny), and your overall in-group can gain certain benefits from spending that resource. Should those young men believe in an afterlife, they would be more willing to be spent, thus potentially increasing their in-group's overall success.

You're assuming that actual people's behavior and rational playing of the Prisoner's Dilemma correlate in a meaningful fashion. A popular assumption around here.

Your assumption may well not have a problem with young fanatical Muslims becoming suicide attackers. It does have a problem with old fanatical Muslims not doing so.

Gwern has a few interesting posts on this topic on his website: Terrorism is Not About Terror, Terrorism is Not Effective

You're assuming that actual people's behavior and rational playing of the Prisoner's Dilemma correlate in a meaningful fashion. A popular assumption around here.

I don't actually see that very often - mostly what I see is "if your 'rational' explanation doesn't correlate with what you'd actually do/think is right, then your 'rational' explanation is flawed."

In fact, if the whole game has a maximum length, then this chain of logic applies, leading to programs that are, in effect, always-defect.

You're describing the Unexpected Hanging paradox.

Also, the idea that the concept of an afterlife leads to more responsible behavior in this life is very well-trodden ground.

Re well-trodden ground: Well, yes, but as far as I can find, mainly in a vague, empirical, qualitative way. I'm looking for the /numbers/.

If you want numbers you need to define the problem more precisely.

The ground is well-trodden for humans, whose lives do not usually consist solely of playing PD with each other.

For PD tournaments there is no agreement on the optimal strategy in general (or, rather, it depends on what your opponents do). If you want to see how changing the expected length of the game affects a particular strategy, you need to specify that particular strategy first.

What's the hypothesis here? That people have evolved an instinct to dream up afterlives because that allows them to cooperate? Or that afterlife memes are more fit because they allow people to cooperate? Or something else?

The second: that afterlife-memes allowed better cooperation; then, after becoming sufficiently widespread to be taken for granted, variants that depended on the basic idea but allowed even greater cooperation (e.g., that Hell punishes defectors) would similarly take root, spread, and become a new foundation.

Keep in mind that one of the central ideas of memetics is that memes do not necessarily benefit their hosts. Just as genes do not propagate for the sake of the survival of the species, neither do memes; the most successful memes are the ones that are best at propagating, not the ones that are best for their carriers.

Ishaan

Sure, but at the same time, benefiting the host is certainly an effective method for a meme to propagate - possibly the most effective method. So if an idea tends to be beneficial, we ought to expect that it will spread, and when an idea spreads, we ought to suspect that it is beneficial.

I would say benefiting the host is not only not the most effective method for a meme to propagate, it's not even among the more effective methods. Memes that do well are ones which maximize their own representation in the population, and how a meme affects factors such as its host's health and productivity is going to be relatively minor compared to how it affects reproduction (observe fundamentalist religious memeplexes which increase their representation due to their carriers' high rate of childbirth), how well it encourages transmission between carriers, and how effectively it achieves fixation in those it's transmitted to (which has much more to do with the quirks of our psychology than with any sort of analysis of how the meme benefits us).

Being beneficial to the host is about as important to the propagation of memes as it is to the propagation of bacterial and viral infections, which are in fact mostly benign, and sometimes beneficial (we rely on some of the bacteria in our bodies for digestion, for instance, but not most of them). In the case of bacteria or viruses, we have enough familiarity to dispense with the intuition that benefiting the host might be the most powerful factor in propagation.

Being beneficial to the host is a much less important survival mechanism for memes than for genes, because memes can travel freely between hosts and genes cannot, and also because the fitness of memes depends heavily on their ability to achieve fixation in the hosts they're transmitted to, while transmitted genes are not simply extinguished in their hosts.

(I'll add a caveat: usefulness to the host could dominate if the selection effects are strong enough; in a situation where most carriers survive and most non-carriers die without reproducing, a meme could become heavily-to-universally represented in a population even if it were not very effective at transmission. But such strong selection effects on memes are unlikely to occur in real life.)

I think that afterlife memes survived because people are benefited by being in a community, and religions used such beliefs to signal membership in their community. The reason those beliefs became the ones associated with the faith is because they made people feel better.

It's also possible that afterlife memes arose and spread due to one of the two benefits (increased cooperation, signalling), and then survived due to the other.

Toby Ord has some elderly C code (see Appendix II) that he used in his societal iterated prisoner's dilemma tournaments. You'd have to modify it for your purposes, but it's a small codebase.

http://intelligence.org/files/RobustCooperation.pdf

Especially now that this is published, I no longer feel much of a need to engage with the hypothesis that rational agents mutually defect in the one-shot or iterated PD. Perhaps you meant to analyze causal-decision-theory agents? But this would be of only academic interest.

Funny, when talking to Patrick at the workshop I made pretty much the opposite point. Maybe worth spelling it out here, since I came up with Lobian cooperation in the first place:

The PD over modal agents is just another game-theoretic problem. As the zoo of proposed modal agents grows, our failure to find a unique "rational" modal agent is a reflection of our inability to find a unique "rational" strategy in an arbitrary game. Waging war on an established result is typically a bad idea; we probably won't roll back the clock on game theory and reduce n-player to 1-player. This particular game is still worth investigating, but I don't hope to find any unique notion of rationality in there.

Without a unique notion of rationality, it seems premature to say that rational agents won't play a game in a certain way. Who knows what limitations they might have? For example, PrudentBot based on PA will defect against PrudentBot based on PA+1.

I've only had time to read the introduction so far; but if it's not mentioned in the paper itself, it seems that PrudentBot should be "correct" not only in defecting against CooperateBot, but also in defecting against DefectBot. In fact, in a one-shot PD, it seems as if it should defect against any Bot which is unable to analyze its own source code to see how it will react.

It seems as if there's an important parallel between the Iterated Prisoner's Dilemma and the One-Shot Prisoner's Dilemma With Access To Source Code: both versions of the PD provide a set of evidence which each side can use to attempt to predict the other's behaviour. And since the PD-with-source is, according to the paper, equivalent to Newcomb's Problem, this suggests that the Iterated-PD is equivalent to a variant of Newcomb's based on reasonably-available historical evidence rather than Omega-level omniscience about the other player.

This also suggests that an important dividing line between algorithms one should defect against, and algorithms one should cooperate with, is somewhere around "complicated enough to be able to take my own actions into account when deciding its own actions". For PD-with-source, that means being complicated enough to analyze source code; Iterated-PD's structure puts that line at tit-for-tat.

This also suggests a certain intuitive leap to me, involving species with complicated enough social interactions to need to think about others' minds (parrots, dolphins, apes): that the runaway evolutionary process that led to our own species perhaps has to do with such mind-modeling finally becoming complicated enough to model one's own mind for higher-level social plots... but that's more likely than not just some college-freshman-level "say, what if..." musing. It could be just as likely that the big step forward was minds becoming complicated enough to become less-predictable black boxes, rather than simple predictable "if I cheat on him and he catches me, he'll peck me painfully" call-and-response machines.

V_V

Actually, Prisoner's Dilemma between programs with access to each other's source code (aka the program equilibrium setting) is a very different problem from both one-shot and iterated Prisoner's Dilemma.

The main attractiveness of both one-shot and iterated PD is that, despite their extreme simplicity, they provide a surprisingly good model of many common problems humans face in real-world social interactions.
On the other hand, Program PD is a much more artificial scenario, even in a speculative world of robots, or a world of software agents. It is theoretically interesting to analyze that scenario, but even if an unambiguously satisfactory solution for it were found (and I think your paper doesn't provide one, but I'll leave that for another post * ), it would be far-fetched to claim that it would essentially solve all practical instances of PD.

( * ) Short story: PrudentBot cooperates too much. (PrudentBot, PrudentBot) is an unstable payoff-dominant Nash equilibrium. It's not even a Nash equilibrium under a small modification of the game that penalizes program complexity.
CliqueBots and generalized CliqueBots (bots that recognize each other with a criterion less strict than perfect textual equivalence but strict enough to guarantee functional equivalence) are better since they are stable payoff-dominant Nash equilibria and they never fail to exploit an exploitable opponent.

A related phenomenon, which I have encountered in life but not in systematic research, is that an exceptionally valuable turn is treated as a last turn, and someone will defect. This was evident in at least two states during the tobacco lawsuits. In Texas, the attorney general went to jail for cheating. In Mississippi, where some relatives of mine were on the legal team, one of the lawyers tried to claim all the credit, to the extent that they got involved in a separate lawsuit against each other, and felt more animosity than against the tobacco company lawyers (for whom it was not a last turn; they were planning to survive). (The claimant later went to jail, but not for that - for unrelated bribery.)

I was doing a bit of web research on the last turn dilemma when I found your post. The last turn dilemma is that it is rational to defect on the first turn in finite games, but human behavior is not consistent with that (exceptions when game theory is explained, or when players have high enough IQ to figure it out). The puzzle is to explain why.

I believe the components of an answer may be present in existing research.  But not tied together.  A rather complex experiment would be required for verification which I'm not equipped to perform.  If any of you would like to discuss this, get in touch with me.  I don't really want to discuss a possible paper in a forum.  https://shulerresearch.wordpress.com/ (contact form)

I have a dream that one day, people will stop bringing up the (Iterated) Prisoner's Dilemma whenever decisions involve consequences. IPD is a symmetrical two-player game with known payouts, rational agents, and no persistent memory (in tournaments). Real life is something completely different, and equating TFT with superficially similar real life strategies is just plain wrong.

The possibility of the existence of immortality/afterlife/reincarnation certainly affects how people behave in certain situations, this is hardly a revelation. Running PD-like simulations with the intent to gain insight into real life behaviour of humans in society is a bad idea usually proposed by people who don't know much about game theory but like some of the terms commonly associated with PD.

Please stop using the words "cooperate" and "defect" as if they would in any way refer to comparable things in real life and PD. It will make you much less confused.

I don't have a problem with the proposition of adding uncertainty about the match length to IPD, and it is hardly a new idea. Just please don't talk about PD/IPD when you're talking about real life and vice versa, and don't make inferences about one based on the other.

Hm...

(Actually, what does TDT/UDT say about this? I imagine it's something like "always cooperate with other TDT users; for everyone else, tit-for-tat.")

That's the same as just tit-for-tat.

Maybe? Does tit-for-tat have the utility-wasting "defect on the last move" injunction?

Not by default. From what I've seen, though, few PD tourneys are arranged in such a way that the competing programs can know which move is the last, so there's less evidence available.

I just compared "always cooperate with other TDT users; for everyone else, tit-for-tat" to plain 'tit-for-tat', for the case where that's what TDT says.

It depends on what you know about the person you are playing with. You do know something, even if it is just a probability distribution. Different distributions give you different strategies.

It occurs to me that, if this post is true, it may allow for non-theists to improve how they are regarded by a significant fraction of theists. Specifically, those theists who have adopted afterlife-ism sufficiently to say "But if you don't believe in [ $DEITY | $HELL | $HEAVEN ], then how can I trust you?", and who are willing to accept members of other faiths who insert a different entry into the appropriate space rather than none at all. A transhumanist, singulatarian, or cryonicist might be able to say that they do have /something/ which is close enough to fit into the appropriate slot, thus raising their standing from 'untrustworthy barbarian' to 'semi-trustworthy member of not-my-religion'.

Given that there's a certain inferential gap between 'typically-believed afterlife' and 'plan on living forever in a non-supernatural way', it may be worthwhile to figure out a few useful catchphrases or short quotes, which express as much as possible as succinctly as possible, to be able to present the idea before the listener stops listening. Would it be worthwhile to start a new post in Discussion to solicit ideas for such?

A transhumanist, singulatarian, or cryonicist might be able to say that they do have /something/ which is close enough to fit into the appropriate slot, thus raising their standing from 'untrustworthy barbarian' to 'semi-trustworthy member of not-my-religion'.

But aren't cryonicists engaging in precisely the sort of defection that we are proposing that belief in the afterlife combats? At the end of their natural lives, they're using their power to pool resources unto themselves, rather than passing it on to the unambiguously living. Most people donate everything to children or charity after death. Cryonics is hardly effecting altruism.

Note - The above statement is not to be interpreted as opposition to cryonics.

It isn't so much the resources that are the point here (and even if they were, cryo is a lot cheaper than commonly believed), as the expectation that the game-of-life (or -of-afterlife) will continue to the indefinite future; the acknowledgement by the non-theist that any actions they take before their death will have consequences and resonances into the far-flung (possibly infinite) future; that they expect to face some sort of judgement for any wrongdoings they do, even if it's applied by advanced forensics and an earthly judiciary rather than a divine one.

In the real world, the effect might be moderated because people who are healthy enough to do damage (this doesn't need to be very healthy) aren't likely to know when they're going to die.

Perhaps; but few people seriously believe that they're going to live to be 150 years old; and as long as there is /some/ maximum length to a PD tourney, the reasoning from the last-possible-move would still seem to apply.

(Mostly unrelated personal prediction - this post gets voted down to -3 or so before being voted up to around +1. Reasoning: a lot of post-votes here seem to follow a curve resembling the Nike swoosh.)

I think you've probably influenced the experiment by making that comment.

To the extent that this is an experiment rather than a prediction, the effect of the statement /is/ the experiment. :)

As it is, we're down to -1 already; so the usual pattern seems to be forming up.

I seem to have underestimated the popularity of this post. There was, in fact, a sharp initial downbeat, which has been followed by a slower rise; but it didn't go down as far as I predicted, and it's gone up higher than I predicted. Same swoosh, different altitude.

Thomas

In real life, you can do something even better than to defect every time.

You can cooperate, thus signaling to others that you are not a defector. Cooperating with them, you can do more than you could alone - more than a lone wolf, always defecting against everybody.

This is the reasoning behind tit-for-tat, and which allows it to succeed - when it does succeed.

However, in a crowd of always-defectors (such as a set created by reasoning from a known length-of-game), tit-for-tat's initial cooperation means it will have a slightly lower score than the crowd which always defects; tit-for-tat needs the cooperation of one or more other not-always-defectors to build up enough of an advantage amongst themselves to overcome that.
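(To put sample numbers on that: here's a small round-robin sketch, using the 3/2/1/0 matrix from elsewhere in this thread and an illustrative fixed match length of twenty rounds; a lone tit-for-tat trails the defecting crowd, while a pair of them already pulls ahead:)

```python
def match_score(a_is_tft, b_is_tft, n):
    """Player A's total over an n-round match, payoffs T,R,P,S = 3,2,1,0.
    Only tit-for-tat and always-defect appear, so closed forms suffice."""
    if a_is_tft and b_is_tft:
        return 2 * n              # mutual cooperation throughout
    if not a_is_tft and not b_is_tft:
        return 1 * n              # mutual defection throughout
    if a_is_tft:
        return 0 + 1 * (n - 1)    # suckered once, then mutual defection
    return 3 + 1 * (n - 1)        # exploits once, then mutual defection

def tournament(num_tft, num_alld, n):
    """Average per-player round-robin scores (no self-play)."""
    players = ['TFT'] * num_tft + ['ALLD'] * num_alld
    totals = [0] * len(players)
    for i in range(len(players)):
        for j in range(i + 1, len(players)):
            a, b = players[i] == 'TFT', players[j] == 'TFT'
            totals[i] += match_score(a, b, n)
            totals[j] += match_score(b, a, n)
    tft = [t for t, p in zip(totals, players) if p == 'TFT']
    alld = [t for t, p in zip(totals, players) if p == 'ALLD']
    return sum(tft) / len(tft), sum(alld) / len(alld)

print(tournament(1, 9, 20))   # (171.0, 182.0): lone TFT trails the crowd
print(tournament(2, 8, 20))   # (192.0, 184.0): two TFTs pull ahead
```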

For such an ideal set of defectors, that's true.

But even there, you soon find two or more defectors from this always-defect code. They start to cooperate with each other and outmaneuver the initial crowd.

Sidetrack: Villains by Necessity, a D&D-influenced novel in which the world has become so good that it's about to disappear in a burst of white light. A group of Evil characters gather to prevent the end of the world.

Unfortunately, the book isn't sophisticated about such questions as whether the Evil characters' increasing ability to cooperate with each other threatens to make them too Good to do their job.