loqi comments on Average utilitarianism must be correct? - Less Wrong

2 Post author: PhilGoetz 06 April 2009 05:10PM




Comment author: loqi 07 April 2009 04:16:18PM 3 points [-]

Many (or arguably most) actions we perform can be explained (rationally) only in terms of future benefits.

Mostly true, but Newcomb-like problems can muddy this distinction.

There is only one utility function, though it might evolve over time

No, it can't. If a utility function "evolves over time", it's got type (Time -> Outcome -> Utilons), but a utility function proper has type (Outcome -> Utilons). A time-indexed family of utility functions is a different kind of object from a single utility function.
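The type distinction can be sketched directly. This is a minimal illustration in Python's type-hint notation, with stand-in types (`Outcome` as a string, `Utilons` as a float) chosen purely for the example:

```python
from typing import Callable

# Stand-in types for illustration only.
Outcome = str
Utilons = float
Time = int

# A utility function proper: Outcome -> Utilons.
UtilityFn = Callable[[Outcome], Utilons]

# An "evolving" utility: Time -> (Outcome -> Utilons).
# This is not a utility function; it is a family of them,
# one per instant.
EvolvingUtility = Callable[[Time], UtilityFn]

u: UtilityFn = lambda outcome: 1.0 if outcome == "light on" else 0.0

u_t: EvolvingUtility = lambda t: (
    lambda outcome: 1.0 if outcome == "light on" else 0.0
)

# u is a single utility function; u_t(0), u_t(1), ... are each
# utility functions, but u_t itself is not one.
```

The point is purely about types: `u` and `u_t(5)` are comparable objects, while `u_t` itself lives one level up.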

Unless you establish a utilitarian or altruistic rational norm, etc., the principles of reason do not straightforwardly tell us to maximize other people's utilities.

Agreed. The same principle applies to the utility of future selves.

It really breaks down if John age 18 + 1 second is not the same as John age 18.

No, it really doesn't. John at age 18 has a utility function whose outcomes involve John at age 18 + 1 second, who probably has a similar utility function. Flipping the light grants both of them utility.

Insofar as you question whether or not the heroin addict in (a) counts as yourself, you should minimize the importance of his fate in your expected utility calculation.

I don't see how this follows. The importance of the heroin addict in my expected utility calculation reflects my values. Identity is (possibly) just another factor to consider, but it has no intrinsic special privilege.

I would rather make the utility of myself in (b) slightly higher, even at the risk of making the utility of the person in (a) significantly lower.

That may be, but your use of the word "utility" here confuses the issue. The statement "I would rather" *is* your utility function. When you speak of "making the utility of (b) slightly higher", I think you can only mean one of two things: either "he agrees with me on most everything, so I'm actually just directly increasing my own utility", or "I'm arbitrarily dedicating X% of my utility function to his values, whatever they are".