That's why I threw in the disclaimer about needing some theory of self/identity. Possible future Phils must bear a special relationship to the current Phil, one not shared by all other future people--or else you lose egoism altogether when speaking about the future.
There are certainly some well-thought-out arguments that when thinking about your possible future, you're thinking about an entirely different person, or a variety of different possible people. But the further you go down that road, the less clear it is that classical decision theory has an...
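To make the worry concrete (a minimal sketch in my own notation, not anything from the original posts): classical decision theory scores an act $a$ as

$$V(a) \;=\; \sum_{o} P(o \mid a)\, U_{\mathrm{self}}(o),$$

and the egoistic reading of $U_{\mathrm{self}}$ presupposes some identity relation $R(\mathrm{Phil}_{\mathrm{now}}, p)$ that picks out which future people $p$ count as "self." If every candidate future person is a distinct person, $R$ picks out nothing, and the egoist's $V(a)$ has no defined value to maximize.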
This goes back to the original argument, not to the definition of expected utility functions or the status of utilitarianism in general.
PhilGoetz's argument appears to contain a contradiction similar to the one Moore discusses in Principia Ethica, where he argues that the principle of egoism does not entail utilitarianism.
Egoism: X ought to do what maximizes X's happiness.
Utilitarianism: X ought to do what maximizes EVERYONE's happiness.
(Or put X_o for X, and X_x for Everyone.)
X's happiness is not logically equivalent to Everyone's happiness. The im...
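To spell out the non-equivalence in symbols (my paraphrase, using the X_o / X_x notation above rather than Moore's own, where $H(y, a)$ is the happiness $y$ gets if $a$ is done):

$$\text{Egoism:}\quad X_o \text{ ought to do } \arg\max_{a}\, H(X_o, a)$$

$$\text{Utilitarianism:}\quad X_o \text{ ought to do } \arg\max_{a}\, \sum_{x} H(X_x, a)$$

Since $H(X_o, a)$ and $\sum_x H(X_x, a)$ can rank acts differently, neither principle entails the other without an added premise identifying the two maxima.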
You're right that insofar as the utility function of my future self is the same as my current utility function, I should want to maximize the utility of my future self. But my point with that statement is precisely that one's future self can have very different interests than one's current self, as you said (hence the heroin-addict example. EDIT: Just realized I deleted that from the prior post! Put back in at the bottom of this one!).
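As a minimal illustration of that divergence (my notation, assuming a heroin-style case): let $U_{\mathrm{now}}$ and $U_{\mathrm{future}}$ be the current and future selves' utility functions. The point is that

$$\arg\max_{a}\, \mathbb{E}\big[U_{\mathrm{future}}(a)\big] \;\ne\; \arg\max_{a}\, \mathbb{E}\big[U_{\mathrm{now}}(a)\big]$$

is perfectly possible: the addicted future self ranks "get heroin" at the top while the current self ranks it at the bottom, so "maximize my future self's utility" and "act on my current utility" give opposite prescriptions.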
Many (or arguably most) actions we perform can be explained (rationally) only in terms of future benefits. Insofar a...