Comment author:Kingreaper
22 July 2010 12:16:18AM
9 points

These are simply false comparisons.

Had Eliezer talked about torturing someone through the use of a googolplex of dust specks, your comparison might have merit, but as it is, it seems to be deliberately missing the point.

Certainly, speaking for someone else is often inappropriate, and in this case is simple strawmanning.

Comment author:ata
02 January 2011 01:52:52AM
9 points

The comparison is invalid because the torture and dust specks are being compared as negatively-valued ends in themselves. We're comparing U(torture one person for 50 years) and U(dust speck one person) * 3^^^3. But you can't determine whether to take 1 ml of water per day from 100,000 people or 10 liters of water per day from 1 person by adding up the total amount of water in each case, because water isn't utility.
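One way to see why water isn't utility is a toy model (my own sketch, not from the thread) in which the utility of daily water is concave above an assumed survival threshold; every number below is invented purely for illustration.

```python
import math

def water_utility(ml_per_day):
    """Toy utility of daily water: concave above an assumed survival
    threshold, catastrophic below it (dying of thirst)."""
    if ml_per_day < 500:            # hypothetical survival threshold
        return -1_000_000.0
    return math.sqrt(ml_per_day)

baseline = 2000.0  # ml/day, an assumed comfortable intake

# Option A: take 1 ml/day from each of 100,000 people.
loss_a = 100_000 * (water_utility(baseline) - water_utility(baseline - 1))

# Option B: take 10 liters/day from 1 person, leaving them with nothing.
loss_b = water_utility(baseline) - water_utility(0)

# Totals of *water* point one way; totals of *utility* point the other.
total_water_a = 100_000 * 1   # 100 liters removed in all (in ml)
total_water_b = 10_000        # 10 liters removed from one person (in ml)

assert total_water_a > total_water_b   # A removes far more water...
assert loss_b > loss_a                 # ...but B destroys far more utility
```

More water is taken in option A, yet option B is far worse in this model, because utility is not linear in water.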

Comment author:bgaesop
04 January 2011 01:19:59AM
9 points

Perhaps this is just my misunderstanding of utility, but I think his point was this: I don't understand how adding up utility is obviously a legitimate thing to do, just like how you claim that adding up water denial is obviously not a legitimate thing to do. In fact, it seems to me as though the negative utility of getting a dust speck in the eye is comparable to the negative utility of being denied a milliliter of water, while the negative utility of being tortured for a lifetime is more or less equivalent to the negative utility of dying of thirst. I don't see why it is that the one addition is valid while the other isn't.

If this is just me misunderstanding utility, could you please point me to some readings so that I can better understand it?

Comment author:ata
07 January 2011 06:18:04AM
7 points

I don't understand how adding up utility is obviously a legitimate thing to do

To start, there's the Von Neumann–Morgenstern theorem, which shows that given some basic and fairly uncontroversial assumptions, any agent with consistent preferences can have those preferences expressed as a utility function. That does not require, of course, that the utility function be simple or even humanly plausible, so it is perfectly possible for a utility function to specify that SPECKS is preferred over TORTURE. But the idea that doing an undesirable thing to n distinct people should be around n times as bad as doing it to one person seems plausible and defensible, in human terms. There's some discussion of this in The "Intuitions" Behind "Utilitarianism".
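The "n times as bad" aggregation can be sketched with invented numbers; the two disutility values below are placeholders, not claims about the real ratio between a speck and fifty years of torture.

```python
import math

speck_disutility = 1e-12     # assumed: a dust speck is almost nothing
torture_disutility = 1e9     # assumed: 50 years of torture is enormous

# Under linear aggregation, how many speck-victims match the torture?
breakeven = torture_disutility / speck_disutility   # 1e21 people

# 3^^^3 = 3^^(3^^3) is a power tower of 3s over 7 trillion levels high.
# Even 3^^4 = 3^(3^27), only the fourth level of that tower, exceeds the
# breakeven by trillions of orders of magnitude. The number itself is far
# too large to store, so we compare logarithms:
log10_level4 = 3**27 * math.log10(3)     # ~3.6e12 digits in 3^^4
assert math.log10(breakeven) < log10_level4
```

However the placeholder disutilities are chosen, the breakeven population is a finite number that 3^^^3 dwarfs beyond any intuition.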

(The water scenario isn't comparable to torture vs. specks mainly because, compared to 3^^^3, 100,000 is approximately zero. If we changed the water scenario to use 3^^^3 also, and if we assume that having one fewer milliliter of water each day is a negatively terminally-valued thing for at least a tiny fraction of those people, and if we assume that the one person who might die of dehydration wouldn't otherwise live for an extremely long time, then it seems that the latter option would indeed be preferable.)

Comment author:Will_Sawin
09 January 2011 11:46:59PM
0 points

In particular, VNM connects utility with probability, so we can use an argument based on probability.

One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.

One person gaining N utility should be as good as one randomly selected person out of N people gaining N utility.

Now we analyze it from each person's perspective. They each have a 1/N chance of gaining N utility. This is 1 unit of expected utility, so they find it as good as surely gaining one unit of utility.

If they're all indifferent between one person gaining N and everyone gaining 1, who's to disagree?
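The expected-utility step in the argument above can be written out with a concrete (arbitrary) N:

```python
N = 1000   # arbitrary; any N gives the same result

# Each person faces a 1/N chance of gaining N utility...
expected_gain_lottery = (1 / N) * N + ((N - 1) / N) * 0

# ...which is the same expected utility as surely gaining 1.
expected_gain_sure = 1.0

assert expected_gain_lottery == expected_gain_sure
```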

Comment author:bgaesop
15 January 2011 09:55:04PM
-4 points

One person gaining N utility should be equally good no matter who it is, if utility is properly calibrated person-to-person.

That... just seems kind of crazy. Why would it be equally Good for Hitler to gain a bunch of utility as for me, for example, to gain it? Or for a rich person who has basically everything they want to gain a modest amount of utility, versus a poor person who is close to starvation gaining the same? If this latter example isn't taking your person-to-person calibration into account, could you give an example of what could be given to Dick Cheney that would be of equivalent Good as giving a sandwich and a job to a very hungry homeless person?

If they're all indifferent between one person gaining N and everyone gaining 1, who's to disagree?

I for one would not prefer that, in most circumstances. This is why I would prefer definitely being given the price of a lottery ticket to playing the lottery (even assuming the lottery paid out 100% of its intake).
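The lottery preference described here is ordinary risk aversion over money, which is consistent with VNM; a minimal sketch, assuming (purely for illustration) logarithmic utility of wealth and a lottery that pays out 100% of its intake:

```python
import math

wealth = 1000.0          # assumed starting wealth
ticket = 1.0             # ticket price
n_players = 1_000_000
jackpot = ticket * n_players   # 100% of intake paid out

def u(w):
    # hypothetical concave utility of wealth
    return math.log(w)

# Option 1: be handed the ticket price outright.
eu_keep_ticket_price = u(wealth + ticket)

# Option 2: buy the ticket; expected *money* is the same as option 1.
eu_play = (1 / n_players) * u(wealth - ticket + jackpot) \
        + (1 - 1 / n_players) * u(wealth - ticket)

assert eu_keep_ticket_price > eu_play   # equal expected money, lower expected utility
```

With concave utility of wealth, the fair lottery has the same expected money but strictly lower expected utility, so the preference above is no violation of expected-utility reasoning.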

Comment author:Will_Sawin
15 January 2011 11:25:49PM
4 points

You can assume that people start equal. A rich person already got a lot of utility, while the poor person already lost some. You can still do the math that derives utilitarianism in the final utilities just fine.

Utility =/= Money. Under the VNM model I was using, utility is defined as the thing you are risk-neutral in. N units of utility is the thing which a 1/N chance of is worth the same as 1 unit of utility. So my statement is trivially true.

Let's say, in a certain scenario, each person i has utility u_i. We define U to be the sum of all the u_i; then by definition, each person is indifferent between having u_i for sure and having a u_i/U chance of U and a (1 - u_i/U) chance of 0. Since everyone is indifferent, this scenario is as good as the scenario in which one person, selected according to those probabilities, has U, and everyone else has 0. The goodness of such a scenario should be a function only of U.
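The construction can be checked numerically; the utility values below are arbitrary examples.

```python
utilities = [3.0, 5.0, 2.0]   # arbitrary u_i values
U = sum(utilities)            # total utility

for u_i in utilities:
    # Lottery: U with probability u_i/U, else 0.
    p_win = u_i / U
    expected = p_win * U + (1 - p_win) * 0
    # Each person's expected utility equals their sure utility u_i.
    assert abs(expected - u_i) < 1e-12

# The winning probabilities sum to 1, so exactly one person ends up
# with U and the rest with 0, and only the total U matters.
assert abs(sum(u / U for u in utilities) - 1.0) < 1e-12
```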

Politics is the mind-killer, don't bring controversial figures such as Dick Cheney up.

The reason it is just to harm the unjust is not because their happiness is less valuable. It is because harming the unjust causes some to choose justice over injustice.

Comment author:bgaesop
16 January 2011 07:21:14PM
-1 points

Let's say, in a certain scenario, each person i has utility u_i. We define U to be the sum of all the u_i; then by definition, each person is indifferent between having u_i for sure and having a u_i/U chance of U and a (1 - u_i/U) chance of 0.

I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?

You can assume that people start equal.

I'm not sure I know what you mean by this. Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they're born into, what their genetics ended up gifting them with, things like that?

In my conception of my utility function, I place value on increasing not merely the overall utility, but the most common level of utility, and decreasing the deviation in utility. That is, I would prefer a world with 100 people each with 10 utility to a world with 99 people with 1 utility and 1 person with 1000 utility, even though the latter has a higher sum of utility. Is there something inherently wrong about this?

Comment author:Will_Sawin
16 January 2011 08:31:06PM
1 point

I am having a lot of trouble coming up with a real world example of something working out this way. Could you give one, please?

One could construct an extremely contrived real-world example rather trivially. A FAI has a plan that will make one person Space Emperor, with the choice of person depending on some sort of complex calculation. It is considering whether doing so would be a good idea or not.

The point is that a moral theory must consider such odd special cases. I can reformulate the argument to use a different strange scenario if you like, but the point isn't the specific scenario - it's the mathematical regularity.

Are you saying that we should imagine people are conceived with 0 utility and then get or lose a bunch based on the circumstances they're born into, what their genetics ended up gifting them with, things like that?

My argument is based on a mathematical intuition and can take many different forms. That comment came from asking you to accept that giving one person N utility is as good as giving another N utility, which may be hard to swallow.

So what I'm really saying is that all you need to accept is that, if we permute the utilities, so that instead of me having 10 and you 5, you have 10 and I 5, things don't get better or worse.

Starting at 0 is a red herring for which I apologize.

Is there something inherently wrong about this?

<people all have utility 10>

"Greetings, humans! I am a superintelligence with strange values, who is perfectly honest. In five minutes, I will randomly choose one of you and increase his/her utility to 1000. The others, however, will receive a utility of 1."

"My expected utility just increased from 10 to 10.99. I am happy about this!"

"So did mine! So am I"

etc........

<5 minutes later>

"Let's check the random number generator ... Bob wins. Sucks for the rest of you."

The super-intelligence has just, apparently, done evil, after making two decisions:

The first, everyone affected approved of

The second, in carrying out the consequences of a pre-defined random process, was undoubtedly fair - while those who lost were unhappy, they have no cause for complaint.
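The arithmetic in the dialogue above checks out, assuming (as the "10 to 10.99" figure implies) a population of 100:

```python
n_people = 100   # implied by the jump from 10 to 10.99 in the dialogue

# Ex ante: 1/100 chance of utility 1000, 99/100 chance of utility 1.
expected_after = (1 / n_people) * 1000 + ((n_people - 1) / n_people) * 1
assert abs(expected_after - 10.99) < 1e-12   # up from 10: all approve ex ante

# Ex post: total utility rises, yet 99 of 100 people are worse off.
total_before = n_people * 10                  # 1000
total_after = 1000 + (n_people - 1) * 1       # 1099
assert total_after > total_before
```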

Comment author:roystgnr
13 March 2012 06:24:13AM
2 points

If you look at the assumptions behind VNM, I'm not at all sure that the "torture is worse than any amount of dust specks" crowd would agree that they're all uncontroversial.

In particular the axioms that Wikipedia labels (3) and (3') are almost begging the question.

Imagine a utility function that maps events, not onto R, but onto (R x R) with a lexicographical ordering. This satisfies completeness, transitivity, and independence; it just doesn't satisfy continuity or the Archimedean property.

But is that the end of the world? Look at continuity: if L is torture plus a dust speck (utility (-1,-1)), M is just torture (utility (-1,0)), and N is just a dust speck (utility (0,-1)), then must there really be a probability p such that pL + (1-p)N = M? Or would it instead be permissible to say that for p=1, torture plus dust speck is still strictly worse than torture, whereas for any p<1, any tiny probability of reducing the torture is worth a huge probability of adding that dust speck to it?
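Such a lexicographic utility can be sketched directly, since Python tuples already compare lexicographically; here the first coordinate counts torture and the second counts dust specks, both as negatives (the specific numbers are illustrative).

```python
# Lexicographic utilities as (torture coordinate, speck coordinate).
torture = (-1, 0)             # one person tortured, no specks
many_specks = (0, -10**100)   # a googol dust specks, no torture
both = (-1, -1)               # torture plus one speck

# Any amount of torture outweighs any number of specks...
assert many_specks > torture
# ...yet the extra speck still counts when torture is equal:
assert torture > both
assert many_specks > both
```

This ordering is complete and transitive, but no probability mixture of `both` and `many_specks` is ever exactly as good as `torture`, which is precisely the failure of the continuity/Archimedean axiom described above.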
