A common background assumption on LW seems to be that it's rational to act in accordance with the dispositions one would wish to have. (Rationalists must WIN, and all that.)
E.g., Eliezer:
It is, I would say, a general principle of rationality - indeed, part of how I define rationality - that you never end up envying someone else's mere choices. You might envy someone their genes, if Omega rewards genes, or if the genes give you a generally happier disposition. But [two-boxing] Rachel, above, envies [one-boxing] Irene her choice, and only her choice, irrespective of what algorithm Irene used to make it. Rachel wishes just that she had a disposition to choose differently.
And more recently, from AdamBell:
I [previously] saw Newcomb’s Problem
...
It's not unusual to count "thwarted aims" as a positive bad of death (as I've argued myself in my paper "Value Receptacles"). That at least counts against replacing people with only slightly happier people, though it leaves open that replacing people with much happier people may still be worthwhile, if the extra happiness is sufficient to outweigh the harm of the first person's thwarted ends.