If you have an imperfect memory and you think they don't, wouldn't you want to pre-commit to attempting cooperation with any immortal entities you face, given that they are very likely to remember you even if you don't remember them? This of course assumes that most or all of the other immortal entities you're likely to face in the Dilemma do in fact have perfect memories.
If you can't remember them, and they can work that out, then they can defect against you every time and collect more points, at no cost other than making you less and less optimistic about cooperating with rarely-encountered entities.
That could eventually cut into their profits, but it becomes a tragedy of the commons, with you being the commons.
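To put rough numbers on that asymmetry, here's a toy sketch in Python using the standard payoffs (T=5 temptation, R=3 reward, P=1 punishment, S=0 sucker). The "forgetful" opponent is my own illustrative invention: a tit-for-tat player that fails to recognize you and so resets to cooperation every round, while the "remembering" one retaliates forever after a single defection.

```python
# Toy comparison: how much a defector gains against a player who
# remembers past defections vs. one who forgets them every round.
# Standard payoffs: T=5 (temptation), R=3 (reward), P=1 (punishment), S=0 (sucker).

T, R, P, S = 5, 3, 1, 0
rounds = 100

# Against a forgetful tit-for-tat (always resets to cooperation),
# a defector collects the temptation payoff every single round.
exploit_forgetful = T * rounds

# Against a tit-for-tat with perfect memory, defection pays T once,
# then mutual defection (P) forever after.
exploit_remembering = T + P * (rounds - 1)

# Mutual cooperation for the whole run, for comparison.
mutual_cooperation = R * rounds

print("defect vs. forgetful opponent: ", exploit_forgetful)     # 500
print("defect vs. remembering opponent:", exploit_remembering)  # 104
print("mutual cooperation:             ", mutual_cooperation)   # 300
```

Exploiting the forgetful player beats cooperating outright, whereas exploiting the remembering player is the worst of the three, which is exactly the incentive gap described above.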
I’m sure many others have put much more thought into this sort of thing -- at the moment, I’m too lazy to look for it, but if anyone has a link, I’d love to check it out.
Anyway, I ran into some musings on game theory for immortal agents, and I thought they were interesting enough to talk about.
Cooperation in games like the iterated Prisoner's Dilemma depends in part on the probability of encountering the other player again. Axelrod (1981) gives the payoff for an unbroken run of mutual cooperation as R/(1-p), where R is the per-round reward for mutual cooperation and p is a discount parameter, which he interprets as the probability of the players meeting again (and recognizing each other, etc.). If you assume that both players continue playing for eternity in a randomly mixing, finite group of other players, then the probability of encountering the other player again approaches 1, and the payoff for an extended run of cooperation grows toward infinity.
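To make that limit concrete, here's a minimal sketch in Python. It just evaluates the geometric-series payoff R + Rp + Rp² + ... = R/(1-p), using the conventional R = 3 for mutual cooperation as an illustrative value, and shows it blowing up as p approaches 1.

```python
# Discounted payoff of an unbroken run of mutual cooperation:
#   R + R*p + R*p^2 + ... = R / (1 - p)

R = 3.0  # per-round reward for mutual cooperation (illustrative value)

def cooperation_payoff(p):
    """Total discounted payoff for cooperating forever, with discount p."""
    return R / (1.0 - p)

for p in (0.5, 0.9, 0.99, 0.999, 0.9999):
    print(f"p = {p:<7} payoff = {cooperation_payoff(p):.1f}")
# As p -> 1 the total payoff grows without bound: 6, 30, 300, 3000, 30000, ...
```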
So, take a group of rational, immortal agents playing an iterated Prisoner's Dilemma. Should we expect them to cooperate?
I realize there is no optimal strategy without reference to the other players’ strategies, and that the universe is not actually infinite in time, so this is not a perfect model on at least two counts, but I wanted to look at the simple case before adding complexities.