Followup to: The True Prisoner's Dilemma
For everyone who thought that the rational choice in yesterday's True Prisoner's Dilemma was to defect, a follow-up dilemma:
Suppose that the dilemma was not one-shot, but was rather to be repeated exactly 100 times, where each round is scored by a payoff matrix with the same structure as before: both sides do better under mutual cooperation than under mutual defection, but each side does best of all by defecting while the other cooperates, and worst of all by cooperating while the other defects.
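To keep the later arithmetic concrete, here is one way to encode such a matrix in Python. The numbers are illustrative stand-ins chosen to satisfy the standard Prisoner's Dilemma ordering, not the post's exact figures; read the first coordinate as millions of human lives saved and the second as paperclips gained.

```python
# Per-round payoff table. Values are illustrative (standard PD ordering:
# temptation 3 > reward 2 > punishment 1 > sucker 0), not the post's
# exact figures. First coordinate: million human lives saved (humanity's
# payoff); second: paperclips gained (the Paperclipper's payoff).
PAYOFFS = {
    ("C", "C"): (2, 2),  # both cooperate
    ("C", "D"): (0, 3),  # humans cooperate, Clippy defects
    ("D", "C"): (3, 0),  # humans defect, Clippy cooperates
    ("D", "D"): (1, 1),  # both defect
}
```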
As most of you probably know, the king of the classical iterated Prisoner's Dilemma is Tit for Tat, which cooperates on the first round and, on each succeeding round, does whatever its opponent did on the previous round. But what most of you may not realize is that, if you know when the iteration will stop, Tit for Tat is - according to classical game theory - irrational.
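Tit for Tat is simple enough to state in a couple of lines. Here is a minimal sketch, together with a loop for running any two strategies against each other; the function names, and the convention that a strategy sees only its opponent's move history, are my own choices, and the loop scores each round with the illustrative PAYOFFS table above.

```python
def tit_for_tat(opponent_history):
    """Cooperate on round 1; afterwards, copy the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def play(strategy_a, strategy_b, rounds=100):
    """Run an iterated PD for a fixed number of rounds; return total payoffs."""
    history_a, history_b = [], []  # each side's own past moves
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each strategy sees the opponent's history
        move_b = strategy_b(history_a)
        pay_a, pay_b = PAYOFFS[(move_a, move_b)]
        score_a += pay_a
        score_b += pay_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b
```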
Why? Consider the 100th round. On the 100th round, there will be no future iterations, no chance to retaliate against the other player for defection. Both of you know this, so the game reduces to the one-shot Prisoner's Dilemma. Since you are both classical game theorists, you both defect.
Now consider the 99th round. Both of you know that you will both defect in the 100th round, regardless of what either of you does in the 99th round. So you both know that your action in the 99th round affects only your payoff in the 99th round, not anything that comes after. You are both classical game theorists. So you both defect.
Now consider the 98th round...
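The unraveling can be traced mechanically. In the sketch below (my own framing of the textbook argument), the recursion mirrors the induction: the last round is a one-shot dilemma, and once all later play is pinned to defection, each earlier round is effectively one-shot too. The payoff comparison at the end uses the illustrative table and play loop from above.

```python
def classical_move(round_no, total_rounds=100):
    """Move a classical game theorist makes in the given round."""
    if round_no == total_rounds:
        return "D"  # last round: a one-shot PD, so defection dominates
    # Every later round is already pinned to "D", so nothing done now can
    # be rewarded or punished later; this round is effectively one-shot.
    assert classical_move(round_no + 1, total_rounds) == "D"
    return "D"

always_defect = lambda opponent_history: "D"
print(play(always_defect, always_defect))  # (100, 100): 100 rounds of (1, 1)
print(play(tit_for_tat, tit_for_tat))      # (200, 200): 100 rounds of (2, 2)
```

Under the illustrative numbers, the "rational" pair ends up with half of what two Tit for Tat players would have earned.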
With humanity and the Paperclipper facing 100 rounds of the iterated Prisoner's Dilemma, do you really truly think that the rational thing for both parties to do is to steadily defect against each other for the next 100 rounds?
Change the problem and you change the solution.

If we assume that Eli and Clippy are both essentially self-modifying programs capable of verifiably publishing their own source code, then indeed they can cooperate: Eli modifies his own source code so as to assure Clippy that his cooperation is contingent on Clippy revealing source code that fulfills certain criteria; Clippy modifies his own source code appropriately and publishes it. Now each knows the other will cooperate.

But I think that although we in some ways resemble self-modifying computers, we cannot arbitrarily modify our own source code, nor verifiably publish it. It's not at all clear to me that it would be a good thing if we could. Eliezer has constructed a scenario in which it would be favorable to be able to do so, but I don't think it would be difficult to construct a scenario in which it would be preferable to lack this ability.
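The source-swap protocol described above can be made concrete. What follows is a minimal sketch of my own, in the spirit of what is now called a program equilibrium; the agent cooperates exactly when the opponent's published source passes its criterion, and the criterion here is the simplest imaginable one, exact textual identity.

```python
import inspect

def clique_bot(opponent_source):
    """Cooperate iff the opponent's published source is textually identical to ours."""
    my_source = inspect.getsource(clique_bot)
    return "C" if opponent_source == my_source else "D"

# If both sides verifiably run this same program, each can inspect the
# other's published source, see that its cooperation is conditional, and
# therefore safely cooperate.
published = inspect.getsource(clique_bot)
assert clique_bot(published) == "C"                      # mutual cooperation
assert clique_bot("def bot(_):\n    return 'D'") == "D"  # defect on defectors
```

Exact-text matching is of course brittle: two programs with different whitespace but identical behavior would defect against each other, so the "certain criteria" in the protocol would need to be semantic rather than syntactic. And as the passage above notes, the whole construction depends on being able to verifiably publish one's source, which humans cannot do.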