I've had a thought that I don't recall having encountered described quite this way before; given my past experiences with such thoughts, and the fact that it involves evo-psych, I currently peg my confidence in this idea at around 10%. But just in case this particular idea rose to my attention, out of all the other possible ideas that didn't, for a reason, I'll post it here.
One of the simpler analyses of the Prisoner's Dilemma points out that if you know that the round you're facing is the last round, then there's no reason not to defect; your choice no longer has any influence over future rounds, and whatever your opponent does, you gain a higher score by defecting on this particular round than by cooperating. Thus, any rational algorithm which is attempting to maximize its score, and can identify which round is the last round, will gain a higher score by adding a codicil to defect on the last round.
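To make that concrete, here's a minimal sketch (using the standard payoff ordering T > R > P > S, with the illustrative values 5, 3, 1, 0) of why defection dominates on a known final round:

```python
# Standard one-shot Prisoner's Dilemma payoffs for the row player,
# with T (temptation) > R (reward) > P (punishment) > S (sucker's payoff).
PAYOFF = {("defect", "cooperate"): 5,     # T
          ("cooperate", "cooperate"): 3,  # R
          ("defect", "defect"): 1,        # P
          ("cooperate", "defect"): 0}     # S

# On a known last round there are no future rounds to influence, so only
# this round's payoff matters; defecting scores higher whatever the
# opponent does.
for opponent_move in ("cooperate", "defect"):
    assert PAYOFF[("defect", opponent_move)] > PAYOFF[("cooperate", opponent_move)]
print("Defection strictly dominates on the final round.")
```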
Expanding that idea implies that if such a "rational" algorithm is facing other seemingly rational algorithms, it will assume that they will also defect on the last round; thus, such an algorithm faced with the /second/-last round can assume that its actions will have no influence on the actions of the last round and, by the same logic, will choose to defect on the second-last round; and the third-last; and so forth. In fact, if the whole game has a known maximum length, this chain of logic applies all the way back, leading to programs that are, in effect, always-defect. Cooperative strategies such as tit-for-tat thus tend to arise when the competing algorithms lack a particular piece of information: the length of the game they are playing.
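Here's a rough sketch of the unravelling itself (the strategies, payoffs, and game length are just illustrative): against an opponent who plays tit-for-tat but defects over the last k rounds of a game of known length, defecting one round earlier never scores worse, so end-game defection creeps backwards until the strategy is effectively always-defect.

```python
PAYOFF = {("defect", "cooperate"): 5, ("cooperate", "cooperate"): 3,
          ("defect", "defect"): 1, ("cooperate", "defect"): 0}

def tft_defect_last(k, length):
    """Tit-for-tat, except defect unconditionally in the last k rounds
    of a game whose total length is known in advance."""
    def strategy(my_history, their_history):
        if len(my_history) >= length - k:
            return "defect"
        return their_history[-1] if their_history else "cooperate"
    return strategy

def play(strat_a, strat_b, length):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(length):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

LENGTH = 20
for k in range(LENGTH):
    opponent = tft_defect_last(k, LENGTH)
    earlier_defector, _ = play(tft_defect_last(k + 1, LENGTH), opponent, LENGTH)
    same_as_opponent, _ = play(tft_defect_last(k, LENGTH), opponent, LENGTH)
    # Defecting one round earlier than the opponent never scores worse.
    assert earlier_defector >= same_as_opponent
print("End-game defection always creeps earlier: cooperation unravels.")
```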
Depending on where a person is born and lives (and various other details), they have roughly a fifty percent chance of living to 80 years of age, a one-in-a-million chance of making it to 100 years, and, using Laplace's sunrise formula, somewhere under one-in-a-hundred-billion odds of making it to 130 years. If a person assumes that their death is the end of them, then they have a very good idea of what their maximum lifespan will be; and, depending on how rational they are, they could follow a line of reasoning similar to the above and plan their actions around an "always defect" style of morality. (E.g., stealing whenever the profit outweighs the probability of getting caught times the punishment.)
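A rough sketch of that sunrise-formula estimate, assuming on the order of a hundred billion humans have ever lived and none has verifiably reached 130 (both of those counts are my own ballpark assumptions):

```python
# Laplace's rule of succession: after n trials with s successes, estimate
# the probability of success on the next trial as (s + 1) / (n + 2).
humans_ever_lived = 100e9      # assumption: roughly a hundred billion people so far
verified_130_year_olds = 0     # assumption: no one verified to have reached 130

p_reach_130 = (verified_130_year_olds + 1) / (humans_ever_lived + 2)
print(f"Estimated chance of reaching 130: about 1 in {1 / p_reach_130:,.0f}")
# -> just under one-in-a-hundred-billion
```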
However, introducing even an extremely vague concept of an afterlife, even if it's only that some form of individuality survives and can continue to interact with others, means that there is no surety about when the 'game' will end - and, thus, can nudge people to act cooperatively even when there is no physical chance of getting caught defecting. Should this general approach spread widely enough, then further refinements could be made, such as reports on what the scoring system of the afterlife portion of the 'game' is, increasing in-group cooperative behaviour yet further.
Interestingly, this seems to apply whether the post-mortal afterlife is supernatural in nature, takes the form of a near-term technological singularity, or is a cryonic revival that the person estimates at a 5% chance within a millennium.
What I would like to find out is which shapes of lifespan estimation lead to which forms of PD strategy predominating: for example, a game with a 50% chance of continuing on any turn after turn 100, versus one with a 95% chance of continuing every turn, versus one with a straight 5% chance of being effectively infinite. If anyone reading this already has software allowing for customized PD tournaments, I'd like to get in touch. From anyone else, I'd welcome whatever constructive criticism you can offer, from pointers to any previous descriptions of this idea - preferably with hard figures and numbers backing them up - to improvements that bring the general concept more into line with reality.
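To make the sort of variation I mean concrete, here is a rough sketch of such a tournament (not an existing package; the strategies, payoff values, and the length of the non-infinite branch in the third rule are just placeholders):

```python
import random

PAYOFF = {("defect", "cooperate"): 5, ("cooperate", "cooperate"): 3,
          ("defect", "defect"): 1, ("cooperate", "defect"): 0}

# --- Strategies: functions of (my_history, their_history) -> move. ---
def always_defect(my_history, their_history):
    return "defect"

def tit_for_tat(my_history, their_history):
    return their_history[-1] if their_history else "cooperate"

# --- Game-length rules, one per "shape of lifespan estimation". ---
def fifty_percent_after_turn_100():
    # Certain to reach turn 100, then a 50% chance of continuing each turn.
    length = 100
    while random.random() < 0.5:
        length += 1
    return length

def ninety_five_percent_each_turn():
    # A 95% chance of continuing after every turn.
    length = 1
    while random.random() < 0.95:
        length += 1
    return length

def five_percent_effectively_infinite(cap=10_000):
    # A single up-front 5% chance the game is effectively endless (capped
    # at `cap` turns so the simulation halts); the short branch's length
    # of 10 is an arbitrary placeholder.
    return cap if random.random() < 0.05 else 10

def play(strat_a, strat_b, length):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(length):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def tournament(length_rule, trials=200):
    strategies = {"always_defect": always_defect, "tit_for_tat": tit_for_tat}
    totals = {name: 0 for name in strategies}
    for _ in range(trials):
        length = length_rule()
        for name_a, strat_a in strategies.items():
            for name_b, strat_b in strategies.items():
                score_a, score_b = play(strat_a, strat_b, length)
                totals[name_a] += score_a
                totals[name_b] += score_b
    return totals

for rule in (fifty_percent_after_turn_100, ninety_five_percent_each_turn,
             five_percent_effectively_infinite):
    print(rule.__name__, tournament(rule))
```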
It seems that your use of the afterlife is to encourage a precommitment to cooperate or to play tit-for-tat (i.e. to "behave morally", depending on your moral system). Another non-consequentialist way to do so is the concept of honor as a virtue. I'm sure there are other ways, too.
What you called "behave morally", I tend to think of in PD terms as 'being nice': not being the first to defect.
As a first thought, using honor as a virtue seems to be a way of replacing the ordinary set of rewards with a new scoring system - that is, valuing the honor of not being a thief over the gain from stealing a bunch of gold coins out of an unlocked chest. I'm not entirely sure how to look at that in evo-psych terms - how such an idea would arise, spread, and develop over time - but it seems like a workable alternative, for whatever portion of the population can be convinced that being honorable is more important than the rewards of dishonorable behaviour.
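To put some toy numbers on that "new scoring system" (the values are arbitrary; the only thing that matters is that the internal honor penalty outweighs the material gain):

```python
# One-shot "unlocked chest" decision: steal the coins or walk away.
material_payoff = {"steal": 100, "walk_away": 0}   # gold-coin value, illustrative
honor_penalty   = {"steal": 150, "walk_away": 0}   # internal cost of dishonor, illustrative

def perceived_value(action):
    # The honorable agent scores actions on material gain minus honor cost.
    return material_payoff[action] - honor_penalty[action]

best_action = max(material_payoff, key=perceived_value)
print(best_action)  # -> "walk_away": the honor cost outweighs the stolen coins
```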