timtyler comments on The Preference Utilitarian’s Time Inconsistency Problem - Less Wrong

25 Post author: Wei_Dai 15 January 2010 12:26AM




Comment author: timtyler 15 January 2010 10:41:10PM 1 point

You could be right. I can't see any mention of "averaging" or "summing" in the definitions (which matters!) - and if a sum is to be performed, it is vague about what class of entities is being summed over. However - as you say - Singer is a "sum" enthusiast. How one can measure "satisfaction" in a way that can be added up across multiple people is left as a mystery for readers.

I wouldn't assert the second paragraph, though. Satisfying preferences is still a moral philosophy - regardless of whether those preferences belong to an individual agent, or whether preference satisfaction is summed over a group.

Both concepts equally allow for agents with arbitrary preferences.

Comment author: mattnewport 15 January 2010 11:08:32PM 0 points

The main Wikipedia entry for Utilitarianism says:

Utilitarianism is the idea that the moral worth of an action is determined solely by its utility in providing happiness or pleasure as summed among all people. It is thus a form of consequentialism, meaning that the moral worth of an action is determined by its outcome.

Utilitarianism is often described by the phrase "the greatest good for the greatest number of people", and is also known as "the greatest happiness principle". Utility, the good to be maximized, has been defined by various thinkers as happiness or pleasure (versus suffering or pain), although preference utilitarians define it as the satisfaction of preferences.

Here 'preference utilitarians' links back to the short page on preference utilitarianism you referenced. That, combined with the description of Peter Singer as the most prominent advocate of preference utilitarianism, suggests weighted summing or averaging, though I'm not clear whether there is some specific aggregation procedure associated with 'preference utilitarianism'.

Merely satisfying your own preferences is a moral philosophy, but it's not utilitarianism - ethical egoism, maybe, or just hedonism. What appears to distinguish utilitarian ethics is that they propose a single utility function that globally defines what is moral/ethical for all agents.