Viliam_Bur comments on Open thread, September 8-14, 2014 - Less Wrong

Post author: polymathwannabe 08 September 2014 12:31PM

Comment author: mare-of-night 10 September 2014 05:41:13AM 1 point

I think it depends? People around here use utilitarianism to mean a few different things. I imagine that's the version talked about the most because the people involved in EA tend to be those types (since it's easier to get extra value via hacking if your most important values are something very specific and somewhat measurable). I think that might also be the usual philosopher's definition. But then Eliezer (in the metaethics sequence) used "utilitarianism" to mean a general approach to ethics where you add up all the values involved and pick the best outcome, regardless of what your values are and how you weight them. So it's sometimes a little confusing to know what utilitarianism means around here.

(Edited for spelling.)

Comment author: Viliam_Bur 12 September 2014 08:18:03AM 2 points

"People around here use utilitarianism to mean a few different things."

I don't understand. One of those things is "compare the options, and choose the one with the best consequences". What are the other things?

Comment author: Lumifer 12 September 2014 03:04:19PM 4 points

"One of those things is 'compare the options, and choose the one with the best consequences'."

You are illustrating the issue :-) That is consequentialism, not utilitarianism.

Comment author: pragmatist 12 September 2014 06:47:42PM 2 points

Differences arise when you try to flesh out what "best consequences" means. A lot of people on this site seem to think utilitarianism interprets "best consequences" as "best consequences according to your own utility function". This is actually not what ethicists mean when they talk about utilitarianism. They might mean something like "best consequences according to some aggregation of the utility functions of all agents" (where there is disagreement about what the right aggregation mechanism is or what counts as an agent). Or they might interpret "best consequences" as "consequences that maximize the aggregate pleasure experienced by agents" (usually treating suffering as negative pleasure). Other interpretations also exist.

Comment author: Nornagest 12 September 2014 07:35:21PM 1 point

As far as I've read, preference utilitarianism and its variants are about the only well-known systems of utilitarianism in philosophy that try to aggregate the utility functions of agents. Trying to come up with a universally applicable utility function seems to be more common; that's what gets you hedonistic utilitarianism, prioritarianism, negative utilitarianism, and so forth. Other variants, like rule or motive utilitarianism, might take one of the above as a basis but be more concerned with implementation difficulties.

I agree that the term tends to be used too broadly around here -- probably because it sounds like it points to something along the lines of "an ethic based on evaluating a utility function against options", which is actually closer to a working definition of consequentialism. It's not a word that's especially well defined, though, even in philosophy.

Comment author: mare-of-night 12 September 2014 04:52:44PM 2 points

"Compare the options, and choose the one that results in the greatest (pleasure - suffering)."