Regarding preference utilitarianism, why can't the negative utility of not having a preference fulfilled be modelled with average or total utilitarianism? That is, aren't there some actions that create so much utility that they could overcome the negative utility of one's preference not being honored? I don't see why preference fulfillment should be treated as first-class alongside pleasure and pain.
Sorry if this is off-topic, that was just my first reaction to reading this.
See here for some standard criticisms of hedonic (pleasure/pain-based) utilitarianism.
Also see the discussions of wireheading on LW.
Incidentally, I should point out that in the economics and decision theory literature, "utility" is not a synonym for pleasure or some other psychological variable. It's merely a mathematical representation of revealed preferences (preferences which may be motivated by an ultimate desire for pleasure, but that's an additional substantive hypothesis). I tend to use "utility" in this sense, so just a terminological heads-up.
Thought of this after reading the discussion following abcd_z's post on utilitarianism, but it seemed sufficiently different that I figured I'd post it as a separate topic. It feels like the sort of thing that must have been discussed on this site before, but I haven't seen anything like it (I don't really follow the ethical philosophy discussions here), so pointers to relevant discussion would be appreciated.
Let's say I start off with some arbitrary utility function and I have the ability to arbitrarily modify my own utility function. I then become convinced of the truth of preference utilitarianism. Now, presumably my new moral theory prescribes certain terminal values that differ from the ones I currently hold. To be specific, my moral theory tells me to construct a new utility function using some sort of aggregating procedure that takes as input the current utility functions of all moral agents (including my own). This is just a way of capturing the notion that if preference utilitarianism is true, then my behavior shouldn't be directed towards the fulfilment of my own (prior) goals, but towards the maximization of preference satisfaction. Effectively, I should self-modify to have new goals.
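Concretely, if there are $N$ agents with utility functions $U_1, \dots, U_N$, and the aggregation procedure is simple averaging (an assumption I'll also make in the toy model below), the prescribed self-modification replaces my utility function with

$$U_{\text{me}}^{\text{new}}(x) = \frac{1}{N} \sum_{i=1}^{N} U_i(x)$$

for every outcome $x$.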
But once I've done this, my own utility function has changed, so as a good preference utilitarian, I should run the entire process over again, this time using my new utility function as one of the inputs. And then again, and again...

Let's look at a toy model. In this universe, there are two people: me (a preference utilitarian) and Alice (not a preference utilitarian). Let's suppose Alice does not alter her utility function in response to changes in mine. There are two exclusive states of affairs that can be brought about in this universe: A and B. Alice assigns a utility of 10 to A and 5 to B; I initially assign a utility of 3 to A and 6 to B. Assuming the correct way to aggregate utility is by averaging, I should modify my utilities to 6.5 for A and 5.5 for B. Once I have done this, I should again modify to 8.25 for A and 5.25 for B. Evidently, my utility function will converge towards Alice's.
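For concreteness, here's a minimal sketch of the toy model in Python (the variable names are just illustrative labels; the update rule is the averaging self-modification described above):

```python
# Two-person toy model: I self-modify by averaging; Alice never updates.
my_utils = {"A": 3.0, "B": 6.0}      # my initial utility function
alice_utils = {"A": 10.0, "B": 5.0}  # Alice's fixed utility function

for step in range(1, 31):
    # Self-modification step: my new utility for each outcome is the
    # average of all agents' current utilities for that outcome.
    my_utils = {o: (my_utils[o] + alice_utils[o]) / 2 for o in my_utils}
    if step in (1, 2, 30):
        print(step, my_utils)

# Output:
# 1 {'A': 6.5, 'B': 5.5}
# 2 {'A': 8.25, 'B': 5.25}
# 30 {'A': 9.999999993..., 'B': 5.000000000...}  <- essentially Alice's utilities
```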
I haven't worked this out carefully, but I think the same convergence will occur if we add more utilitarians to the universe; if we instead add more Alice-type non-utilitarians, there is no guarantee of convergence. So anyway, this seems to me a pretty strong argument against preference utilitarianism. If we have a society of perfect utilitarians, a single defector who refuses to change her utility function in response to changes in others' can essentially bend the society to her will, forcing (through the power of moral obligation!) everybody else to modify their utility functions to match hers, no matter what her preferences actually are. Even if there are no defectors, all the utilitarians will self-modify until they arrive at some bland (value judgment alert) middle ground.
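Here's a quick simulation of the defector scenario, under the same averaging assumption (the agent count and starting utilities are arbitrary):

```python
# n utilitarians who self-modify by averaging, plus one defector who
# never updates. For simplicity, track each agent's utility for a
# single outcome.
import random

n = 5
utilitarians = [random.uniform(0, 10) for _ in range(n)]
defector = 42.0  # the defector's fixed utility, never revised

for _ in range(200):
    # Simultaneous self-modification: each utilitarian adopts the
    # average of every agent's current utility, defector included.
    avg = (sum(utilitarians) + defector) / (n + 1)
    utilitarians = [avg] * n

print(utilitarians)  # all ~42.0: the society has converged to the defector
```

After the first round the utilitarians all agree, and each further round moves their shared value a factor of n/(n+1) closer to the defector's, so convergence to her utilities is guaranteed; it's just slower in larger societies.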
Now that I think about it, I suspect this is basically just a half-baked corollary to Bernard Williams' famous "integrity" objection to utilitarianism: roughly, that utilitarianism alienates agents from their own projects and commitments, since it requires them to abandon those commitments whenever the aggregate calculus demands it.
Anyway, I'm sure ideas of this sort have been developed much more carefully and seriously by philosophers, or even by other posters here at LW. As I said, any references would be greatly appreciated.