Unofficial Followup to: Fake Selfishness, Post Your Utility Function
A perception-determined utility function is one which is determined only by the perceptual signals your mind receives from the world; for instance, pleasure minus pain. A non-example would be the number of living humans. There's an argument in favor of perception-determined utility functions which goes like this: clearly, the state of your mind screens off the state of the outside world from your decisions. Therefore, the argument to your utility function is not a world-state but a mind-state, and so, when choosing between outcomes, you can only judge between anticipated experiences, not external consequences. If one person says, "I would willingly die to save the lives of others," a defender of this view replies, "That is only because you anticipate great satisfaction in the moments before death - enough satisfaction to outweigh the rest of your life put together."
Let's call this dogma perception-determined utility, or PDU. PDU can be criticized on both descriptive and prescriptive grounds. On descriptive grounds, we may observe that it is psychologically unrealistic for a human to experience a lifetime's worth of satisfaction in a few moments. I don't have a good reference for this, but I suspect that our brains count pain and joy in something like unary, rather than in a place-value system, so it is not possible to count very high.
The argument I've outlined for PDU is prescriptive, however, so I'd like to refute it on such grounds. To see what's wrong with the argument, let's look at some diagrams. Here's a picture of you doing an expected utility calculation - using a perception-determined utility function such as pleasure minus pain.
Here's what's happening: you extrapolate several (preferably all) possible futures that can result from a given plan. In each possible future, you extrapolate what would happen to you personally, and calculate the pleasure minus pain you would experience. You call this the utility of that future. Then you take a weighted average of the utilities of the futures, where each weight is the probability of that future. In this way you calculate the expected utility of your plan.
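For concreteness, here is a minimal sketch of that calculation in Python. The futures, probabilities, and pleasure/pain numbers are invented purely for illustration and are not part of the original argument.

```python
# A toy version of the expected utility calculation described above, using a
# perception-determined utility function (pleasure minus pain).
# All numbers are made up for illustration.

def pdu_utility(future):
    """Perception-determined utility: pleasure minus pain in my extrapolated experience."""
    return future["my_pleasure"] - future["my_pain"]

def expected_utility(futures, utility):
    """Probability-weighted average of utility over the extrapolated futures."""
    return sum(prob * utility(future) for future, prob in futures)

# Two extrapolated futures for some plan, each paired with its probability.
plan_futures = [
    ({"my_pleasure": 10, "my_pain": 3}, 0.75),
    ({"my_pleasure": 2, "my_pain": 8}, 0.25),
]

print(expected_utility(plan_futures, pdu_utility))  # 0.75*(10-3) + 0.25*(2-8) = 3.75
```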
But this isn't the most general possible way to calculate utilities.
Instead, we could calculate utilities based on any properties of the extrapolated futures — anything at all, such as how many people there are, how many of those people have ice cream cones, etc. Our preferences over lotteries will still be consistent with the von Neumann-Morgenstern axioms. The basic error of PDU is to confuse the big box (labeled "your mind") with the tiny boxes labeled "Extrapolated Mind A," and so on. The inputs to your utility calculation exist inside your mind, but that does not mean they have to come from your extrapolated future mind.
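By way of contrast with the earlier sketch, here is the same expected-utility machinery with a utility function defined over features of the extrapolated world rather than over my extrapolated experiences. The feature names and numbers are again purely illustrative.

```python
# Same expected-utility machinery as before, but the utility function now reads
# arbitrary properties of each extrapolated future (how many people are alive,
# how many of them have ice cream cones), not my own future experiences.
# All numbers are made up for illustration.

def world_utility(future):
    """Utility over world-features, not over my anticipated experiences."""
    return future["people_alive"] + 0.1 * future["ice_cream_cones"]

def expected_utility(futures, utility):
    """Probability-weighted average of utility over the extrapolated futures."""
    return sum(prob * utility(future) for future, prob in futures)

plan_futures = [
    ({"people_alive": 100, "ice_cream_cones": 40}, 0.5),
    ({"people_alive": 120, "ice_cream_cones": 0}, 0.5),
]

print(expected_utility(plan_futures, world_utility))  # 0.5*(100 + 0.1*40) + 0.5*120 = 112.0
```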
So that's it! You're free to care about family, friends, humanity, fluffy animals, and all the wonderful things in the universe, and decision theory won't try to stop you — in fact, it will help.
Edit: Changed "PD" to "PDU."
A counterexample to the claim "psychologically normal humans (implicitly) have a utility function that looks something like a PDU function":
Your best friend is deathly ill. I give you a choice between Pill A and Pill B.
If you choose Pill A and have your friend swallow it, he will heal - but he will release a pheromone that will leave you convinced for the rest of your life that he died (and you won't interact with him ever again).
If you choose Pill B and swallow it, your friend will die - but you will be convinced for the rest of your life that he has fully healed, and is just on a different planet or something. From time to time you will hallucinate pleasant conversations with him, and will never be the wiser.
No, you can't have both pills. Presumably you will choose Pill A. You do not (only) desire to be in a state of mind where you believe your friend is healthy. You desire that your friend be healthy. You seek the object of your desire, not the state of mind produced by the object of your desire.
My brain has this example tagged as "similar to but not the same as something I've read", but tell me if this is stolen.
If I can't distinguish my hallucinations from the real person, then, per the Generalized Anti-Zombie Principle, the hallucinations are just as sapient as he is.