roystgnr comments on Circular Altruism - Less Wrong

40 Post author: Eliezer_Yudkowsky 22 January 2008 06:00PM

Comment author: ata 07 January 2011 06:18:04AM *  7 points

I don't understand how adding up utility is obviously a legitimate thing to do

To start, there's the Von Neumann–Morgenstern theorem, which shows that given some basic and fairly uncontroversial assumptions, any agent with consistent preferences can have those preferences expressed as a utility function. That does not require, of course, that the utility function be simple or even humanly plausible, so it is perfectly possible for a utility function to specify that SPECKS is preferred over TORTURE. But the idea that doing an undesirable thing to n distinct people should be around n times as bad as doing it to one person seems plausible and defensible, in human terms. There's some discussion of this in The "Intuitions" Behind "Utilitarianism".
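The additivity assumption mentioned above can be made concrete with a toy calculation. This is only an illustrative sketch: the specific disutility numbers and the population size are made up (3^^^3 itself is far too large to represent), but the structure shows why additive aggregation makes SPECKS lose.

```python
# Toy additive model. All numbers are illustrative, not from the post.
TORTURE_DISUTILITY = -1e7   # one person tortured for fifty years
SPECK_DISUTILITY = -1e-9    # one person gets a dust speck in the eye

def total(disutility_per_person, n_people):
    # Additivity: harming n distinct people is n times as bad as harming one.
    return disutility_per_person * n_people

n = 10**30  # stand-in for 3^^^3, which dwarfs this by unimaginable margins
# Under additive aggregation, enough specks are worse than one torture:
assert total(SPECK_DISUTILITY, n) < total(TORTURE_DISUTILITY, 1)
```

Any assignment of a finite negative real number to a speck produces the same result once n is large enough, which is exactly the point of using 3^^^3.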

(The water scenario isn't comparable to torture vs. specks mainly because, compared to 3^^^3, 100,000 is approximately zero. If we changed the water scenario to use 3^^^3 also, and if we assume that having one fewer milliliter of water each day is a negatively terminally-valued thing for at least a tiny fraction of those people, and if we assume that the one person who might die of dehydration wouldn't otherwise live for an extremely long time, then it seems that the latter option would indeed be preferable.)

Comment author: roystgnr 13 March 2012 06:24:13AM *  2 points

If you look at the assumptions behind VNM, I'm not at all sure that the "torture is worse than any amount of dust specks" crowd would agree that they're all uncontroversial.

In particular, the axioms that Wikipedia labels (3) and (3') — continuity and the Archimedean property — come close to begging the question.

Imagine a utility function that maps events, not onto R, but onto (R x R) with a lexicographic ordering. This satisfies completeness, transitivity, and independence; it just doesn't satisfy continuity or the Archimedean property.

But is that the end of the world? Look at continuity: if L is torture plus a dust speck (utility (-1,-1)), M is just torture (utility (-1,0)), and N is just a dust speck (utility (0,-1)), then must there really be a probability p such that pL + (1-p)N = M? Or would it instead be permissible to say that for p=1, torture plus dust speck is still strictly worse than torture, whereas for any p<1, any tiny probability of reducing the torture is worth a huge probability of adding that dust speck to it?
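The lexicographic construction above is easy to sketch in code, since Python compares tuples lexicographically. This is an illustration of the (R x R) idea with the L, M, N values from the comment, not anything prescribed by VNM itself; expected values of the pair-valued utilities are taken component-wise, which is one natural (assumed) way to extend the mixture pL + (1-p)N.

```python
# Outcomes as pairs (torture_term, speck_term), ordered lexicographically:
# the torture coordinate always dominates, no matter how many specks.
L = (-1, -1)  # torture plus a dust speck
M = (-1,  0)  # torture alone
N = ( 0, -1)  # dust speck alone

def better(a, b):
    """True if outcome a is strictly preferred to (less bad than) b."""
    return a > b  # Python tuple comparison is lexicographic

assert better(M, L)                    # removing the speck helps...
assert better(N, M)                    # ...and no pile of specks
assert better((0, -10**100), (-1, 0))  # ever reaches torture.

def mixture(p, a, b):
    """Component-wise expected value of the lottery pa + (1-p)b (assumed)."""
    return (p * a[0] + (1 - p) * b[0], p * a[1] + (1 - p) * b[1])

# Continuity fails: the mixture's first coordinate is -p, which matches M's
# first coordinate only at p = 1, where the second coordinate is -1 < 0.
assert mixture(1.0, L, N) != M   # p = 1 just gives L, strictly worse than M
assert better(mixture(0.999, L, N), M)  # any p < 1 beats M outright
```

So no p in [0,1] makes pL + (1-p)N indifferent to M, which is exactly the failure of the continuity axiom that the lexicographic ordering is built to exhibit.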

(edited to fix typos)