In the not-too-distant past, people thought that our universe might be capable of supporting an unlimited amount of computation. Today our best guess at the cosmology of our universe is that it stops being able to support any kind of life or deliberate computation after a finite amount of time, during which only a finite amount of computation can be done (on the order of something like 10^120 operations).
Consider two hypothetical people: Tom, a total utilitarian with a near-zero discount rate, and Eve, an egoist with a relatively high discount rate. A few years ago, both thought there was a 0.5 probability that the universe could support doing at least 3^^^3 ops and a 0.5 probability that it could support only 10^120 ops. (These numbers are obviously made up for convenience and illustration.) It would have been mutually beneficial for these two people to make a deal: if it turns out that the universe can only support 10^120 ops, then Tom will give everything he owns to Eve, which happens to be $1 million, but if it turns out the universe can support 3^^^3 ops, then Eve will give $100,000 to Tom. (This may seem like a lopsided deal, but Tom is happy to take it, since the potential utility of a universe that can do 3^^^3 ops is so great for him that he really wants any additional resources he can get in order to help increase the probability of a positive Singularity in that universe.)
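The deal's logic can be sketched as a pair of expected-value calculations. The utility-per-dollar figures below are assumptions invented purely for illustration (the post gives no such numbers, and a large finite factor stands in for Tom's "astronomical" valuation):

```python
# Sketch of why the Tom/Eve deal is mutually beneficial in expectation.
# All utility figures are illustrative assumptions, not from the post.

P_BIG = 0.5    # probability the universe supports at least 3^^^3 ops
P_SMALL = 0.5  # probability it supports only 10^120 ops

# Tom (total utilitarian): marginal dollars are vastly more valuable to him
# in the big universe, where they can help increase the probability of a
# positive Singularity. A large finite factor stands in for "vastly more".
TOM_UTILITY_PER_DOLLAR_BIG = 1e9    # assumed
TOM_UTILITY_PER_DOLLAR_SMALL = 1.0  # assumed

def tom_deal_value():
    # Tom gains $100,000 in the big universe and pays $1,000,000 in the small one.
    gain = P_BIG * 100_000 * TOM_UTILITY_PER_DOLLAR_BIG
    loss = P_SMALL * 1_000_000 * TOM_UTILITY_PER_DOLLAR_SMALL
    return gain - loss

def eve_deal_value():
    # Eve (egoist, high discount rate) values dollars about equally in either
    # universe: she gains $1,000,000 in one branch, pays $100,000 in the other.
    return P_SMALL * 1_000_000 - P_BIG * 100_000

print(tom_deal_value() > 0)  # True under the assumed utilities
print(eve_deal_value() > 0)  # True
```

Under these assumptions both parties expect to come out ahead, which is what makes the trade possible despite its lopsided dollar amounts.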
You and I are not total utilitarians or egoists, but instead are people with moral uncertainty. Nick Bostrom and Toby Ord proposed the Parliamentary Model for dealing with moral uncertainty, which works as follows:
Suppose that you have a set of mutually exclusive moral theories, and that you assign each of these some probability. Now imagine that each of these theories gets to send some number of delegates to The Parliament. The number of delegates each theory gets to send is proportional to the probability of the theory. Then the delegates bargain with one another for support on various issues; and the Parliament reaches a decision by the delegates voting. What you should do is act according to the decisions of this imaginary Parliament.
It occurred to me recently that in such a Parliament, the delegates would make deals similar to the one between Tom and Eve above, where they would trade their votes/support in one kind of universe for votes/support in another kind of universe. If I had a Moral Parliament active back when I thought there was a good chance the universe could support unlimited computation, all the delegates that really care about astronomical waste would have traded away their votes in the kind of universe where we actually seem to live for votes in universes with a lot more potential astronomical waste. So today my Moral Parliament would be effectively controlled by delegates that care little about astronomical waste.
Wei, insofar as you are making the deal with yourself, consider that in the world in which it turns out that the universe can support doing at least 3^^^3 ops, you may not be physically capable of changing yourself to work harder toward longtermist goals than you otherwise would. (That is, human nature is such that making huge sacrifices to your standard of living and quality of life negatively affects your ability to work productively on longtermist goals for years.) If this is the case, then the deal won't work, since one part of you can't uphold the bargain. So in the world in which it turns out that the universe can support only 10^120 ops, you should not devote less effort to longtermism than you otherwise would, despite being physically capable of devoting less effort.
In a related kind of deal, both parts of you may be capable of upholding the bargain, in which case I think such deals may be valid. But it seems to me that you don't need UDT-like reasoning and the hypothetical deal to believe that your future self, with better knowledge of the size of the cosmic endowment, ought to change his behavior in the same way as implied by the deal argument. Example: suppose you're a philanthropist who, while initially uncertain about the size of the cosmic endowment, plans to spend $X of your wealth on short-termist philanthropy and $X on longtermist philanthropy, because you think this split is optimal given your current beliefs and uncertainty. If you later find out that the universe can support 3^^^3 ops, I think this should cause you to shift how you spend your $2X toward longtermist philanthropy, simply because the longtermist philanthropic opportunities now seem more valuable. Similarly, if you find out that the universe can support only 10^120 ops, then you ought to update toward giving more to short-termist philanthropy.
So is there really a case for UDT-like reasoning, plus hypothetical deals our past selves could have made with themselves, suggesting that we ought to behave differently than more common reasoning suggests when we learn new things about the world? I don't see it.