Unofficial Followup to: Fake Selfishness, Post Your Utility Function
A perception-determined utility function is one which is determined only by the perceptual signals your mind receives from the world; for instance, pleasure minus pain. A noninstance would be the number of living humans. There's an argument in favor of perception-determined utility functions which goes like this: clearly, the state of your mind screens off the state of the outside world from your decisions. Therefore, the argument to your utility function is not a world-state, but a mind-state, and so, when choosing between outcomes, you can only judge between anticipated experiences, and not external consequences. If one says, "I would willingly die to save the lives of others," the other replies, "that is only because you anticipate great satisfaction in the moments before death - enough satisfaction to outweigh the rest of your life put together."
Let's call this dogma perceptually determined utility. PDU can be criticized on both descriptive and prescriptive grounds. On descriptive grounds, we may observe that it is psychologically unrealistic for a human to experience a lifetime's worth of satisfaction in a few moments. (I don't have a good reference for this, but) I suspect that our brains count pain and joy in something like unary, rather than using a place-value system, so it is not possible to count very high.
The argument I've outlined for PDU is prescriptive, however, so I'd like to refute it on such grounds. To see what's wrong with the argument, let's look at some diagrams. Here's a picture of you doing an expected utility calculation - using a perception-determined utility function such as pleasure minus pain.
Here's what's happening: you extrapolate several (preferably all) possible futures that can result from a given plan. In each possible future, you extrapolate what would happen to you personally, and calculate the pleasure minus pain you would experience. You call this the utility of that future. Then you take a weighted average of the utilities of each future — the weights are probabilities. In this way you calculate the expected utility of your plan.
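The calculation described above can be sketched in a few lines. This is a minimal illustration with made-up numbers, not anything from the original post: each extrapolated future is scored by pleasure minus pain, and the scores are averaged with probability weights.

```python
# Sketch of the perception-determined expected-utility calculation:
# score each extrapolated future by (pleasure - pain), then take the
# probability-weighted average. All numbers here are invented.

def expected_utility(futures):
    """futures: list of (probability, pleasure, pain) tuples."""
    return sum(p * (pleasure - pain) for p, pleasure, pain in futures)

futures = [
    (0.7, 10.0, 2.0),  # likely future: net utility 8
    (0.3, 3.0, 8.0),   # unlikely future: net utility -5
]

print(expected_utility(futures))  # approximately 4.1
```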
But this isn't the most general possible way to calculate utilities.
Instead, we could calculate utilities based on any properties of the extrapolated futures — anything at all, such as how many people there are, how many of those people have ice cream cones, etc. Our preferences over lotteries will be consistent with the von Neumann-Morgenstern axioms. The basic error of PDU is to confuse the big box (labeled "your mind") with the tiny boxes labeled "Extrapolated Mind A," and so on. The inputs to your utility calculation exist inside your mind, but that does not mean they have to come from your extrapolated future mind.
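Concretely, the same expected-utility machinery works unchanged when the utility function looks at external features of each future rather than at anticipated experience. A hedged sketch, using invented fields and weights for the ice-cream example in the text:

```python
# The utility function below scores a future by arbitrary external
# properties (number of people, ice cream cones) instead of by the
# agent's own anticipated pleasure and pain. Fields and weights are
# made up for illustration.

def world_utility(future):
    return 1.0 * future["people"] + 0.5 * future["ice_cream_cones"]

def expected_utility(lottery):
    """lottery: list of (probability, future-dict) pairs."""
    return sum(p * world_utility(f) for p, f in lottery)

lottery = [
    (0.6, {"people": 100, "ice_cream_cones": 40}),  # utility 120
    (0.4, {"people": 90,  "ice_cream_cones": 80}),  # utility 130
]

print(expected_utility(lottery))  # approximately 124
```

Nothing in the weighted-average step cares whether the utility came from an extrapolated mind-state or from a count of people; only `world_utility` changed.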
So that's it! You're free to care about family, friends, humanity, fluffy animals, and all the wonderful things in the universe, and decision theory won't try to stop you — in fact, it will help.
Edit: Changed "PD" to "PDU."
I agree with the substance of everything you have just said, and maintain that the only real point on which we disagree is whether the standard technical usage of "utility function" allows the choice set to be considered as part of the state description.
Anything else you want to include, go for it. But I maintain that, while it is clearly formally possible to include the choice set in the state description, this is not part of standard usage, and therefore, your objection to Cyan's original comment (which is a well-established result based on the standard usage) was misplaced.
I have no substantive problem in principle with including choice sets in the state description; maybe the broader definition of "utility function" that encompasses this is even a "better" definition.
ETA: The last sentence of this comment previously said something like "but I'm not sure what you gain by doing so". I thought I had managed to edit it before anyone would have seen it, but it looks like Tim's response below was to that earlier version.
ETA2: On further reflection, I think it's the standard definition of transitive in this context that excludes the choice set from the state description, not the definition of utility function. Which I think basically gets me to where Cyan was some time ago.
You get to model humans with a utility function, for one thing. Modelling human behaviour is a big part of the point of utilitarian models - and human decisions really do depend on the range of choices they are given, in a weird way that can't be captured without this information.
Also, the formulation is neater. You get to write u(state) instead of u(state minus a bunch of things which are to be ignored).
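The menu-dependence being discussed can be made concrete with a toy sketch. This is a hypothetical illustration of the idea, not anything from the thread: a utility function whose state description includes the choice set can capture an option looking better merely because a strictly worse "decoy" version of it is also on the menu.

```python
# Toy example of a utility function that takes the choice set as part
# of its argument, so preferences can depend on the menu. The decoy
# rule and all values are invented for illustration.

def u(option, choice_set):
    base = {"A": 5.0, "B": 5.5, "A_decoy": 1.0}[option]
    # A gets a boost when a strictly worse variant of it is on the menu.
    bonus = 1.0 if option == "A" and "A_decoy" in choice_set else 0.0
    return base + bonus

def choose(choice_set):
    return max(choice_set, key=lambda o: u(o, choice_set))

print(choose({"A", "B"}))             # B wins on the small menu
print(choose({"A", "B", "A_decoy"}))  # adding the decoy makes A win
```

A utility function of the narrower form u(option) alone could not reproduce this reversal, which is the expressiveness being traded against the standard usage.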