Less Wrong is a community blog devoted to refining the art of human rationality.

bgaesop comments on Circular Altruism - Less Wrong

Post author: Eliezer_Yudkowsky, 22 January 2008 06:00PM


Comment author: bgaesop 02 January 2011 01:27:28AM 4 points

I really don't see how his comparison is wrong. Could you explain in more depth, please?

Comment author: ata 02 January 2011 01:52:52AM * 9 points

The comparison is invalid because the torture and dust specks are being compared as negatively-valued ends in themselves. We're comparing U(torture one person for 50 years) and U(dust speck one person) * 3^^^3. But you can't determine whether to take 1 ml of water per day from 100,000 people or 10 liters of water per day from 1 person by adding up the total amount of water in each case, because water isn't utility.
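ata's point that "water isn't utility" can be made concrete with a toy model (my illustration, not part of the comment): if the harm from losing water is convex rather than linear, the option that removes more total water can still cause far less total harm.

```python
# Hypothetical sketch: raw quantities aren't utilities. harm(x) is an
# assumed convex function of daily water lost, so many tiny losses are
# nearly harmless while one large loss is catastrophic.

def harm(ml_lost):
    """Hypothetical harm from losing ml_lost milliliters of water per day.
    Convex: harm grows much faster than linearly in the amount lost."""
    return (ml_lost / 1000.0) ** 3  # cubic, purely for illustration

total_water_a = 1 * 100_000   # option A: 100,000 people lose 1 ml each
total_water_b = 10_000 * 1    # option B: 1 person loses 10 liters (10,000 ml)

total_harm_a = harm(1) * 100_000
total_harm_b = harm(10_000) * 1

# Option A removes more water in total, yet causes far less total harm.
print(total_water_a > total_water_b)  # True: more total water taken in A
print(total_harm_a < total_harm_b)    # True: but far less total harm in A
```

Summing milliliters gives the wrong answer precisely because harm here is not linear in water lost; summing utility would not have that problem.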

Comment author: bgaesop 04 January 2011 01:19:59AM * 9 points

Perhaps this is just my misunderstanding of utility, but I think his point was this: I don't understand how adding up utility is obviously a legitimate thing to do, just like how you claim that adding up water denial is obviously not a legitimate thing to do. In fact, it seems to me as though the negative utility of getting a dust speck in the eye is comparable to the negative utility of being denied a milliliter of water, while the negative utility of being tortured for a lifetime is more or less equivalent to the negative utility of dying of thirst. I don't see why it is that the one addition is valid while the other isn't.

If this is just me misunderstanding utility, could you please point me to some readings so that I can better understand it?

Comment author: ata 07 January 2011 06:18:04AM * 7 points

I don't understand how adding up utility is obviously a legitimate thing to do

To start, there's the Von Neumann–Morgenstern theorem, which shows that given some basic and fairly uncontroversial assumptions, any agent with consistent preferences can have those preferences expressed as a utility function. That does not require, of course, that the utility function be simple or even humanly plausible, so it is perfectly possible for a utility function to specify that SPECKS is preferred over TORTURE. But the idea that doing an undesirable thing to n distinct people should be around n times as bad as doing it to one person seems plausible and defensible, in human terms. There's some discussion of this in The "Intuitions" Behind "Utilitarianism".
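The additive aggregation ata describes ("doing an undesirable thing to n distinct people should be around n times as bad") can be sketched numerically. All utility values below are hypothetical stand-ins, and N is vanishingly small compared with 3^^^3, which no computer can represent.

```python
# Hedged sketch of additive aggregation: if a dust speck has any fixed
# negative utility, a large enough population makes the summed speck
# disutility exceed that of torture. Values are illustrative only.

U_TORTURE = -1_000_000.0   # assumed utility of 50 years of torture
U_SPECK = -1e-9            # assumed utility of one dust speck

def total_speck_utility(n_people):
    # n distinct people, n times as bad (linear aggregation)
    return U_SPECK * n_people

N = 10**18  # tiny stand-in for 3^^^3
print(total_speck_utility(N) < U_TORTURE)  # True: specks dominate at this N
print(total_speck_utility(1) > U_TORTURE)  # True: one speck is far milder
```

Under these (assumed) numbers, linear aggregation forces TORTURE over SPECKS once the population is large enough; a utility function that refuses this must reject the linearity, not the arithmetic.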

(The water scenario isn't comparable to torture vs. specks mainly because, compared to 3^^^3, 100,000 is approximately zero. If we changed the water scenario to use 3^^^3 also, and if we assume that having one fewer milliliter of water each day is a negatively terminally-valued thing for at least a tiny fraction of those people, and if we assume that the one person who might die of dehydration wouldn't otherwise live for an extremely long time, then it seems that the latter option would indeed be preferable.)

Comment author: Will_Sawin 09 January 2011 11:46:59PM 0 points

In particular, VNM connects utility with probability, so we can use an argument based on probability.

One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.

One person gaining N utility should be just as good as one randomly selected person out of N people gaining N utility.

Now we analyze it from each person's perspective. They each have a 1/N chance of gaining N utility. This is 1 unit of expected utility, so they find it as good as surely gaining one unit of utility.

If they're all indifferent between one person gaining N and everyone gaining 1, who's to disagree?
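Will_Sawin's argument can be checked with a small simulation (the population size, trial count, and seed are arbitrary choices of mine, not from the comment): each of N people has a 1/N chance of gaining N utility, so each person's expected utility is exactly 1.

```python
# Simulate one randomly selected person out of N gaining N utility,
# viewed from a single fixed person's perspective.

import random

N = 1000
TRIALS = 200_000
rng = random.Random(0)  # fixed seed for reproducibility

total = 0.0
for _ in range(TRIALS):
    # From any fixed person's perspective: a 1/N chance of gaining N.
    total += N if rng.randrange(N) == 0 else 0

mean = total / TRIALS
print(mean)  # close to 1.0, matching a sure gain of 1 unit of utility
```

The exact expectation is (1/N) * N = 1, so by VNM each person is indifferent between this lottery and surely gaining 1 unit.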

Comment author: roystgnr 13 March 2012 06:24:13AM * 2 points

If you look at the assumptions behind VNM, I'm not at all sure that the "torture is worse than any amount of dust specks" crowd would agree that they're all uncontroversial.

In particular the axioms that Wikipedia labels (3) and (3') are almost begging the question.

Imagine a utility function that maps events, not onto R, but onto (R x R) with a lexicographic ordering. This satisfies completeness, transitivity, and independence; it just doesn't satisfy continuity or the Archimedean property.

But is that the end of the world? Look at continuity: if L is torture plus a dust speck (utility (-1,-1)), M is just torture (utility (-1,0)), and N is just a dust speck (utility (0,-1)), then must there really be a probability p such that pL + (1-p)N = M? Or would it instead be permissible to say that for p=1, torture plus dust speck is still strictly worse than torture, whereas for any p<1, any tiny probability of reducing the torture is worth a huge probability of adding that dust speck to it?

(edited to fix typos)
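roystgnr's lexicographic utility can be sketched directly, since Python tuples already compare lexicographically; the pair values are the ones from the comment, and the `mix` helper is my own illustration of the failed continuity axiom.

```python
# Lexicographic utility on R x R: first coordinate (torture) always
# dominates the second (dust specks). Python tuple comparison is
# lexicographic, so it models this ordering for free.

L = (-1, -1)  # torture plus a dust speck
M = (-1, 0)   # torture alone
N = (0, -1)   # a dust speck alone

def mix(p, a, b):
    """Expected utility of the lottery p*a + (1-p)*b, coordinate-wise."""
    return (p * a[0] + (1 - p) * b[0], p * a[1] + (1 - p) * b[1])

print(L < M < N)             # True: torture+speck < torture < speck
print(mix(1.0, L, N) < M)    # True: at p = 1 the lottery is strictly worse
print(mix(0.999, L, N) > M)  # True: any p < 1 makes it strictly better
```

No p makes the mixture indifferent to M: at p=1 the speck coordinate makes it strictly worse, and for any p<1 the torture coordinate (-p > -1) makes it strictly better. That is exactly the continuity failure the comment describes.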