A mild defense of PDU:
If one says, "I would willingly die to save the lives of others," the other replies, "that is only because you anticipate great satisfaction in the moments before death - enough satisfaction to outweigh the rest of your life put together."
The other could also reply: "You say now that you would die because it gives you pleasure now to think of yourself as the sort of person who would die to save others. Moreover, if you someday actually do sacrifice yourself for others, it will be because the disutility of shattering your self-perception seems, in that moment, to outweigh the disutility of dying."
(And now we have come back yet again to Newcomb, it seems.)
A counterexample to the claim "psychologically normal humans (implicitly) have a utility function that looks something like a PDU function":
Your best friend is deathly ill. I give you a choice between Pill A and Pill B.
If you choose Pill A and have your friend swallow it, he will heal - but he will release a pheromone that will leave you convinced for the rest of your life that he died (and you won't interact with him ever again).
If you choose Pill B and swallow it, your friend will die - but you will be convinced for the rest of your life that...
I don't think this post adequately distinguishes between two concepts: how does the human utility function actually work, and how should it work.
The answer to the first question is (I thought people here agreed) that humans aren't actually utility maximizers; this makes things like your descriptive argument against perceptual determinism unnecessary and a lot of your wording misleading.
The second question is: if we're making some artificial utility function for an AI or just to prove a philosophical point, how should that work - and I think your answer is...
When I read "PD" here I automatically think "prisoner's dilemma", no matter how many times I go back and reread "perceptual determinism".
ETA: thanks
But this isn't the most general possible way to calculate utilities.
The first diagram doesn't actually lack generality - since extrapolating the future could just be moved into the utility function.
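A rough sketch of this point in Python (extrapolate and score_future are hypothetical stand-ins, not anything from the post): the "extrapolate, then evaluate" pipeline of the first diagram can be wrapped inside a single utility function whose argument is the plan itself.

```python
from typing import Callable, List, Tuple

Future = dict  # stand-in for whatever representation of an extrapolated future you use

def fold_extrapolation(
    extrapolate: Callable[[object], List[Tuple[float, Future]]],  # plan -> [(prob, future), ...]
    score_future: Callable[[Future], float],                      # utility of a single future
) -> Callable[[object], float]:
    """Return a utility function over plans that hides extrapolation inside itself."""
    def utility_of_plan(plan: object) -> float:
        return sum(p * score_future(f) for p, f in extrapolate(plan))
    return utility_of_plan
```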
Edit: Probably skip to the *, I suspect my original writing was unclear.
This seems to use two different definitions of utility. If utility is defined as direct perceptual experience, the argument fails. If utility is defined more broadly, it does not. If my current utility is determined entirely perceptually, it does not follow that I should try to assess my future utility more holistically.
The real question seems to be whether the broader definition of utility actually accounts for how we feel, how we live our lives, or what we actually maximize.
*Edit: I m...
I would summarize this post as, "Some people claim that the argument to a utility function must be a state of mind. However, a state of the universe is more general than a state of mind [for a certain meaning of 'general' that reminds me of Haskell's monads]. Therefore, the argument to a utility function need not be a state of mind." Unfortunately, this is a non sequitur, and the post doesn't seem to have any redeeming qualities other than this argument.
Unofficial Followup to: Fake Selfishness, Post Your Utility Function
A perception-determined utility function is one which is determined only by the perceptual signals your mind receives from the world; for instance, pleasure minus pain. A noninstance would be number of living humans. There's an argument in favor of perception-determined utility functions which goes like this: clearly, the state of your mind screens off the state of the outside world from your decisions. Therefore, the argument to your utility function is not a world-state, but a mind-state, and so, when choosing between outcomes, you can only judge between anticipated experiences, and not external consequences. If one says, "I would willingly die to save the lives of others," the other replies, "that is only because you anticipate great satisfaction in the moments before death - enough satisfaction to outweigh the rest of your life put together."
Let's call this dogma perceptually determined utility. PDU can be criticized on both descriptive and prescriptive grounds. On descriptive grounds, we may observe that it is psychologically unrealistic for a human to experience a lifetime's worth of satisfaction in a few moments. (I don't have a good reference for this, but) I suspect that our brains count pain and joy in something like unary, rather than using a place-value system, so it is not possible to count very high.
The argument I've outlined for PDU is prescriptive, however, so I'd like to refute it on such grounds. To see what's wrong with the argument, let's look at some diagrams. Here's a picture of you doing an expected utility calculation - using a perception-determined utility function such as pleasure minus pain.
Here's what's happening: you extrapolate several (preferably all) possible futures that can result from a given plan. In each possible future, you extrapolate what would happen to you personally, and calculate the pleasure minus pain you would experience. You call this the utility of that future. Then you take a weighted average of the utilities of each future — the weights are probabilities. In this way you calculate the expected utility of your plan.
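As a minimal sketch of that calculation (the ExtrapolatedFuture fields and the example numbers are hypothetical, not taken from the post), here it is in Python:

```python
from dataclasses import dataclass

@dataclass
class ExtrapolatedFuture:
    probability: float  # how likely this future is, given the plan
    pleasure: float     # pleasure you would experience in this future
    pain: float         # pain you would experience in this future

def perceptual_utility(future: ExtrapolatedFuture) -> float:
    """PDU: utility depends only on your anticipated experiences."""
    return future.pleasure - future.pain

def expected_utility(futures: list[ExtrapolatedFuture]) -> float:
    """Probability-weighted average of the per-future utilities."""
    return sum(f.probability * perceptual_utility(f) for f in futures)

# Example plan with two possible futures:
plan = [
    ExtrapolatedFuture(probability=0.7, pleasure=10.0, pain=2.0),
    ExtrapolatedFuture(probability=0.3, pleasure=1.0, pain=5.0),
]
print(expected_utility(plan))  # 0.7 * 8 + 0.3 * (-4), roughly 4.4
```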
But this isn't the most general possible way to calculate utilities.
Instead, we could calculate utilities based on any properties of the extrapolated futures — anything at all, such as how many people there are, how many of those people have ice cream cones, etc. Our preferences over lotteries will still be consistent with the von Neumann-Morgenstern axioms. The basic error of PDU is to confuse the big box (labeled "your mind") with the tiny boxes labeled "Extrapolated Mind A," and so on. The inputs to your utility calculation exist inside your mind, but that does not mean they have to come from your extrapolated future mind.
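To make the contrast concrete, here is a sketch of the more general scheme (the WorldState fields and the weights are hypothetical illustrations): the utility function's argument is the whole extrapolated world-state, and the expected-utility machinery is unchanged.

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    living_people: int
    people_with_ice_cream: int

def world_utility(w: WorldState) -> float:
    # Cares directly about features of the world, not about anyone's perceptions;
    # the weights here are arbitrary for illustration.
    return 1.0 * w.living_people + 0.1 * w.people_with_ice_cream

def expected_utility(lottery: list[tuple[float, WorldState]]) -> float:
    # lottery: list of (probability, extrapolated world-state) pairs
    return sum(p * world_utility(w) for p, w in lottery)

# Example: a plan that saves lives scores higher even if the agent never perceives the outcome.
print(expected_utility([(0.9, WorldState(100, 10)), (0.1, WorldState(5, 5))]))
```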
So that's it! You're free to care about family, friends, humanity, fluffy animals, and all the wonderful things in the universe, and decision theory won't try to stop you — in fact, it will help.
Edit: Changed "PD" to "PDU."