
ArisKatsaris comments on Skirting the mere addition paradox - Less Wrong Discussion

Post author: Stuart_Armstrong 18 November 2013 05:50PM (3 points)


Comment author: ArisKatsaris 19 November 2013 11:08:17PM 0 points

It is also worth noting that average utilitarianism has its own share of problems: under it, killing off anyone with below-average utility counts as an improvement, and iterating that rule leaves only the people at maximum utility.
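A minimal numeric sketch of that dynamic, with made-up utility values (none of these numbers come from the discussion):

```python
# Hypothetical per-person utilities; the values are invented.
population = [10, 40, 60, 90]

def average(pop):
    return sum(pop) / len(pop)

print(average(population))  # 50.0

# Under average utilitarianism, removing anyone below the current
# average raises the average; repeat until no one qualifies.
while True:
    avg = average(population)
    survivors = [u for u in population if u >= avg]
    if survivors == population:
        break
    population = survivors
    print(population, average(population))
# [60, 90] 75.0
# [90] 90.0  -- only the maximum-utility person remains
```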

If one averages across all time, people's strong preference not to be killed would more than cancel the benefits of their non-participation in the future averages.

Comment author: V_V 20 November 2013 04:36:07PM 0 points

I don't know what you mean by "average across time". You typically discount across time.
Anyway, utilitarianism is a form of consequentialism in that it assigns moral preferences to world states rather than transitions. Being killed is a transition in any description at any meaningful level of abstraction, hence you can't assign a utility to it. If you do, then you have an essentially deontological ethics, not utilitarianism.

Comment author: ArisKatsaris 20 November 2013 07:34:25PM 0 points

I don't know what you mean by "average across time"

I mean calculating the average utility of the whole timeline, not of particular discrete moments in time.

An example. Let's say we're in the year 2020 and considering whether it's cool to murder 7 billion people in order to let a person-of-maximum-utility lead an optimal life from 2021 onwards. By utility in this case I mean "satisfaction of preferences" (preference utilitarianism) rather than "happiness".

If we do so, a calculation that treats 2020 and 2021 as separate "worlds" might say: "If 7 billion people are killed, 2021 will have a much higher average utility than 2020, so we should do it in order to transition to the world of 2021."

But I'd calculate it differently: if 7 billion people are killed between 2020 and 2021, the people of 2020 have far less utility, because they very strongly prefer not to be killed and their killing would therefore grossly reduce the satisfaction of their preferences. The average utility of the timeline as a whole would thus be vastly reduced by their murders.
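A sketch of the two calculations side by side, in Python. The 7 billion figure is from the example above; the per-person utility values and the size of the preference-violation penalty are assumptions of mine, chosen only to make the contrast visible:

```python
# Contrast per-moment averaging with whole-timeline averaging for the
# 2020/2021 thought experiment. All utility numbers are invented.

N = 7_000_000_000   # people alive in 2020
BASELINE = 50       # ordinary per-person utility for one year
OPTIMAL = 100       # the lone survivor's utility from 2021 onwards
KILLED = -1000      # assumed disutility of a violated strong
                    # preference not to be killed

# Per-moment accounting: treat 2020 and 2021 as separate "worlds".
avg_2020 = BASELINE          # everyone alive, at baseline
avg_2021 = OPTIMAL           # only the maximum-utility person remains
print(avg_2021 > avg_2020)   # True -- this accounting endorses the murders

# Timeline accounting: average each person's utility over the whole
# timeline (here, just the two years), counting each victim's violated
# preference as a large loss of utility.
with_murders = ((N - 1) * (BASELINE + KILLED) + (BASELINE + OPTIMAL)) / N
without_murders = N * (BASELINE + BASELINE) / N

print(round(with_murders))              # about -950
print(without_murders)                  # 100.0
print(with_murders < without_murders)   # True -- the murders make it worse
```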

Anyway, utilitarianism is a form of consequentialism in that it assigns moral preferences to world states rather than transitions

One just needs to treat 'world-states' four-dimensionally, as 'timeline-states'...

Comment author: Stuart_Armstrong 21 November 2013 11:27:04AM 1 point

If you could genetically modify future humans to make them indifferent to being killed, would you do that, since it would facilitate the mass murder?

Comment author: Stuart_Armstrong 20 November 2013 10:46:57AM 0 points

Rather than counting on other factors (people's preferences) to avoid outcomes we feel are bad, I think it would be better to encode the badness of these outcomes directly.