All of cja's Comments + Replies

cja00

You're right that insofar as the utility function of my future self is the same as my current utility function, I should want to maximize the utility of my future self. But my point with that statement is precisely that one's future self can have very different interests than one's current self, as you said (hence the heroin addict example. EDIT: Just realized I deleted that from the prior post! Put back in at the bottom of this one!).

Many (or arguably most) actions we perform can be explained (rationally) only in terms of future benefits. Insofar a...

3loqi
Mostly true, but Newcomb-like problems can muddy this distinction.

No, it can't. If the same utility function can "evolve over time", it's got type (Time -> Outcome -> Utilons), but a utility function just has type (Outcome -> Utilons).

Agreed. The same principle applies to the utility of future selves.

No, it really doesn't. John age 18 has a utility function that involves John age 18 + 1 second, who probably has a similar utility function. Flipping the light grants both of them utility.

I don't see how this follows. The importance of the heroin addict in my expected utility calculation reflects my values. Identity is (possibly) just another factor to consider, but it has no intrinsic special privilege.

That may be, but your use of the word "utility" here is confusing the issue. The statement "I would rather" is your utility function. When you speak of "making the utility of (b) slightly higher", then I think you can only be doing so because "he agrees with me on most everything, so I'm actually just directly increasing my own utility" or because "I'm arbitrarily dedicating X% of my utility function to his values, whatever they are".
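To make the type distinction concrete, here is a minimal Haskell sketch; the Outcome constructors and the John example are placeholder names introduced for illustration, not anything defined in the exchange above:

```haskell
-- Placeholder types for the sketch.
type Utilons = Double
type Time    = Int

-- Hypothetical outcomes, purely for illustration.
data Outcome = FlipLight | LeaveDark

-- A utility function as decision theory defines it: a fixed map from
-- outcomes to utilons.
type UtilityFn = Outcome -> Utilons

-- A utility function that "evolves over time" is a different kind of object:
-- a family of utility functions indexed by time. Each member of the family
-- is still an ordinary, fixed (Outcome -> Utilons).
type TimeIndexedUtilityFn = Time -> Outcome -> Utilons

-- John at age 18 and John one second later get (similar) members of the
-- family; flipping the light grants utility to both.
john :: TimeIndexedUtilityFn
john _t FlipLight = 1
john _t LeaveDark = 0

main :: IO ()
main = print (john 0 FlipLight, john 1 FlipLight)
```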
cja20

That's why I threw in the disclaimer about needing some theory of self/identity. Possible future Phils must bear a special relationship to the current Phil, which is not shared by all other future people--or else you lose egoism altogether when speaking about the future.

There are certainly some well-thought-out arguments that when thinking about your possible future, you're thinking about an entirely different person, or a variety of different possible people. But the more you go down that road, the less clear it is that classical decision theory has an...

1loqi
Sure, and when you actually do the expected utility calculation, you hold the utility function constant, regardless of who specifically is theoretically acting. For example, I can maximize my expected utility by sabotaging a future evil self. To do this, I have to make an expected utility calculation involving a future self, but my speculative calculation does not incorporate his utility function (except possibly as useful information).

This maxim isn't at all clear to me to begin with. Maximizing your future self's utility is not the same as maximizing your current self's utility. The only time these are necessarily the same is when there is no difference in utility function between current and future self, but at that point you might as well just speak of your utility, period. If you and all your future selves possess the same utility function, you all by definition want exactly the same thing, so it makes no sense to talk about providing "more utility" to one future self than another. The decision you make carries exactly the same utility for all of you.
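A minimal, deterministic sketch of that calculation (made-up outcomes and numbers, probabilities omitted): the future self's utility function is consulted only to predict his behavior, while the maximization runs over the current utility function.

```haskell
import Data.List (maximumBy)
import Data.Ord  (comparing)

-- Placeholder types and numbers for the sketch.
type Utilons = Double

data Outcome  = EvilPlanSucceeds | EvilPlanBlocked deriving (Show, Eq)
data MyAction = Sabotage | DoNothing deriving (Show, Eq)

-- My current utility function: the only one that enters the maximization.
myUtility :: Outcome -> Utilons
myUtility EvilPlanSucceeds = -10
myUtility EvilPlanBlocked  = 5

-- My future evil self's utility function, used only as information to
-- predict what he would do if left free to act.
hisUtility :: Outcome -> Utilons
hisUtility EvilPlanSucceeds = 10
hisUtility EvilPlanBlocked  = -5

-- Prediction: given the chance, he picks whatever maximizes *his* utility.
hisChoice :: Outcome
hisChoice = maximumBy (comparing hisUtility) [EvilPlanSucceeds, EvilPlanBlocked]

-- The outcome of each of my present actions.
outcomeOf :: MyAction -> Outcome
outcomeOf Sabotage  = EvilPlanBlocked
outcomeOf DoNothing = hisChoice

-- I choose by maximizing my own, current utility over the resulting outcomes.
bestAction :: MyAction
bestAction = maximumBy (comparing (myUtility . outcomeOf)) [Sabotage, DoNothing]

main :: IO ()
main = print (bestAction, outcomeOf bestAction)  -- (Sabotage, EvilPlanBlocked)
```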
cja30

This is back to the original argument, not about the definition of expected utility functions or the status of utilitarianism in general.

PhilGoetz's argument appears to contain a contradiction similar to the one Moore discusses in Principia Ethica, where he argues that the principle of egoism does not entail utilitarianism.

Egoism: X ought to do what maximizes X's happiness.
Utilitarianism: X ought to do what maximizes EVERYONE's happiness.

(or put X_0 for X, and X_x for Everyone).
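Written out as maximization claims, a sketch of the contrast (where a ranges over X_0's available acts and U_{X_x}(a) stands for X_x's happiness given act a; the notation is assumed here, not taken from the comment):

```latex
\[
  \textbf{Egoism:} \qquad X_0 \text{ ought to choose } \arg\max_{a} \; U_{X_0}(a)
\]
\[
  \textbf{Utilitarianism:} \qquad X_0 \text{ ought to choose } \arg\max_{a} \; \sum_{x} U_{X_x}(a)
\]
% Maximizing U_{X_0}(a) and maximizing \sum_x U_{X_x}(a) coincide only in
% special cases, so neither principle entails the other.
```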

X's happiness is not logically equivalent to Everyone's happiness. The im...

1PhilGoetz
How is it different? Aren't all of the different possible future yous different people? In both cases you are averaging utility over many different individuals. It's just that in one case, all of them are copies of you.
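The averaging being described, written out as a sketch (p_i and the index i over outcomes are assumed notation):

```latex
\[
  \mathrm{E}[U(a)] \;=\; \sum_{i} p_i \, U(o_i)
\]
% where the o_i are the possible outcomes of act a and p_i their probabilities.
```

On PhilGoetz's reading, the individuals appearing in the o_i are in one case possible future copies of you and in the other case distinct people, but the averaging itself is the same either way.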