
Torture vs. Dust Specks

35 Post author: Eliezer_Yudkowsky 30 October 2007 02:50AM

"What's the worst that can happen?" goes the optimistic saying.  It's probably a bad question to ask anyone with a creative imagination.  Let's consider the problem on an individual level: it's not really the worst that can happen, but would nonetheless be fairly bad, if you were horribly tortured for a number of years.  This is one of the worse things that can realistically happen to one person in today's world.

What's the least bad, bad thing that can happen?  Well, suppose a dust speck floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck.

For our next ingredient, we need a large number.  Let's use 3^^^3, written in Knuth's up-arrow notation:

  • 3^3 = 27.
  • 3^^3 = (3^(3^3)) = 3^27 = 7625597484987.
  • 3^^^3 = (3^^(3^^3)) = 3^^7625597484987 = (3^(3^(3^(... 7625597484987 times ...)))).

3^^^3 is an exponential tower of 3s which is 7,625,597,484,987 layers tall.  You start with 1; raise 3 to the power of 1 to get 3; raise 3 to the power of 3 to get 27; raise 3 to the power of 27 to get 7625597484987; raise 3 to the power of 7625597484987 to get a number much larger than the number of atoms in the universe, but which could still be written down in base 10, on 100 square kilometers of paper; then raise 3 to that power; and continue until you've exponentiated 7625597484987 times.  That's 3^^^3.  It's the smallest simple inconceivably huge number I know.
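
For readers who want to check the small cases, here is a minimal Python sketch of the up-arrow recursion (the function name up_arrow is just an illustrative choice); it reproduces the first two bullets above, while 3^^^3 itself is hopelessly out of reach:

    def up_arrow(a, n, b):
        """Knuth's up-arrow: a followed by n arrows followed by b, defined recursively."""
        if n == 1:
            return a ** b          # one arrow is ordinary exponentiation
        if b == 0:
            return 1               # base case of the recursion
        return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

    print(up_arrow(3, 1, 3))   # 3^3  = 27
    print(up_arrow(3, 2, 3))   # 3^^3 = 7625597484987
    # up_arrow(3, 3, 3) is 3^^^3: a tower of 3s some 7.6 trillion layers tall --
    # utterly infeasible to compute or store, which is rather the point.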

Now here's the moral dilemma.  If neither event is going to happen to you personally, but you still had to choose one or the other:

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

I think the answer is obvious.  How about you?

Comments (485)

Comment author: Tom_McCabe2 30 October 2007 03:25:11AM 3 points [-]

Does this analysis focus on pure, monotone utility, or does it include the huge ripple effect putting dust specks into so many people's eyes would have? Are these people with normal lives, or created specifically for this one experience?

Comment author: g 30 October 2007 03:36:34AM 31 points [-]

The answer that's obvious to me is that my mental moral machinery -- both the bit that says "specks of dust in the eye can't outweigh torture, no matter how many there are" *and* the bit that says "however small the badness of a thing, enough repetition of it can make it arbitrarily awful" or "maximize expected sum of utilities" -- wasn't designed for questions with numbers like 3^^^3 in. In view of which, I profoundly mistrust any answer I might happen to find "obvious" to the question itself.

Comment author: Anon6 30 October 2007 03:48:25AM 4 points [-]

Since there was a post on this blog a few days ago about how what seems obvious to the speaker might not be obvious to the listener, I thought I would point out that it was NOT AT ALL obvious to me which should be preferred: torturing one man for 50 years, or a speck of dust in the eyes of 3^^^3 people. Can you please clarify/update what the point of the post was?

Comment author: Michael_G.R. 30 October 2007 04:12:35AM 4 points [-]

The dust speck is described as "barely enough to make you notice", so however many people it would happen to, it seems better than even something far less bad than 50 years of horrible torture. There are so many irritating things that a human barely notices in his/her life -- what's an extra dust speck?

I think I'd trade the dust specks for even a kick in the groin.

But hey, maybe I'm missing something here...

Comment author: Eliezer_Yudkowsky 30 October 2007 04:24:10AM 4 points [-]

Anon, I deliberately didn't say what I thought, because I guessed that other people would think a different answer was "obvious". I didn't want to prejudice the responses.

Comment author: Anon_prime 30 October 2007 04:35:03AM 10 points [-]

Even when applying the cold, cruel calculus of moral utilitarianism, I think that most people acknowledge that egalitarianism in a society has value in itself, and assign it positive utility. Would you rather be born into a country where 9 out of 10 people are destitute (<$1,000/yr) and the tenth is very wealthy ($100,000/yr)? Or be born into a country where almost all people subsist on a modest amount ($6,000-8,000/yr)?

Any system that allocates benefits (say, wealth) more fairly might be preferable to one that allocates more wealth in a more unequal fashion. And, the same goes for negative benefits. The dust specks may result in more total misery, but there is utility in distributing that misery equally.

Comment author: Kat 30 October 2007 04:56:38AM 0 points [-]

The dust specks seem like the "obvious" answer to me, but I couldn't easily say how large the tiny harm must be to cross the line where an unthinkably huge number of them outweighs a single tremendous one, since clearly I don't think simply calculating the total amount of harm caused is the right measure.

Comment author: Kyle2 30 October 2007 05:13:46AM 5 points [-]

It seems obvious to me to choose the dust specks, because the human species would have to exist for an awfully long time for the total number of people to equal that number, and that minimal amount of annoyance would be something they were used to anyway.

Comment author: Paul_Gowder 30 October 2007 05:31:12AM 3 points [-]

I too see the dust specks as obvious, but for the simpler reason that I reject utilitarian sorts of comparisons like that. Torture is wicked, period. If one must go further, it seems like the suffering from torture is *qualitatively* worse than the suffering from any number of dust specks.

Comment author: michael_vassar3 30 October 2007 05:34:47AM 12 points [-]

Anon prime: dollars are not utility. Economic egalitarianism is instrumentally desirable. We don't normally favor all types of equality, as Robin frequently points out.

Kyle: cute

Eliezer: My impulse is to choose the torture, even when I imagine very bad kinds of torture and very small annoyances (I think that one can go smaller than a dust mote -- possibly something like a letter on the spine of a book that your eye sweeps over being set in a shade less well-selected a font). Then, however, I think of how much longer the torture could last and still not outweigh the trivial annoyances if I am to take the utilitarian perspective, and my mind breaks. Condoning 50 years of torture, or even a day's worth, is pretty much the same as condoning universes of agonium lasting for eons in the face of numbers like these, and I don't think that I can condone that for any amount of a trivial benefit.

Comment author: Tiiba2 30 October 2007 05:50:39AM 4 points [-]

Personally, I choose C: torture 3^^^3 people for 3^^^3 years. Why? Because I can.

Ahem. My morality is based on maximizing average welfare, while also avoiding extreme individual suffering, rather than cumulative welfare.

So torturing one man for fifty years is not preferable to annoying any number of people.

This is different when the many are also suffering extremely, though - then it may be worthwhile to torture one even more to save the rest.

Comment author: Jonathan_El-Bizri 30 October 2007 06:00:51AM 4 points [-]

Trivial annoyances and torture cannot be compared in this quantifiable manner. Torture is not only suffering, but lost opportunity due to imprisonment, permanent mental hardship, activation of pain and suffering processes in the mind, and a myriad of other unconsidered things.

And even if the torture were 'to have flecks of dust dropped in your eyes', you still can't compare a 'torturous amount' applied to one person to a substantial number dropped into the eyes of many people: we aren't talking about CPU cycles here - we are trying to quantify qualifiables.

If you revised the question and specified exactly how the torture would affect the individual, and how they would react to it, and the same for each of the 'dust in the eyes' people (what if one goes blind? what of their mental capacity to deal with the hardship? what of the actual level of moisture in their eyes, and consequently the discomfort being felt?), then, maybe then, we could determine which was the worse outcome, and by how much.

There are simply too many assumptions that we have to make in this mortal world to determine the answer to such questions: you might as well ask how many angels dance on the head of a pin. Or you could start more simply and ask: if you were to torture two people in exactly the same way, which one would suffer more, and by how much?

And you notice, I haven't even started to think about the ethical side of the question...

Comment author: Psy-Kosh 30 October 2007 06:14:05AM 10 points [-]

I think this all revolves around one question: Is "disutility of dust speck for N people" = N*"disutility of dust speck for one person"?

This, of course, depends on the properties of one's utility function.

How about this... Consider one person getting, say, ten dust specks per second for an hour vs 10*60*60 = 36,000 people getting a single dust speck each.

This is probably a better way to probe the issue at its core. Which of those situations is preferable? I would probably consider the second. However, I suspect one person getting a billion dust specks in their eye per second for an hour would be preferable to 1000 people getting a million per second for an hour.

Suffering isn't linear in dust specks. Well, actually, I'm not sure subjective states in general can be viewed in a linear way. At least, if there is a potentially valid "linear qualia theory", I'd be surprised.

But as far as the dust specks vs torture thing in the original question? I think I'd go with dust specks for all.

But that's one person vs a bunch of people with dust specks.
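
One way to make the non-linearity concrete is a toy model. The curve below (the function name and the constant k are invented, and the shape was chosen only so that both preferences just described fall out of it; it is not a claim about the real shape of suffering) rises faster than linearly for small speck counts and saturates for astronomical ones:

    def per_person_disutility(n_specks, k=1e6):
        # Toy curve, bounded in [0, 1): concentrating specks on one person hurts
        # more than spreading them out, until the count is so vast it saturates.
        return (n_specks / (n_specks + k)) ** 2

    # 36,000 people with one speck each vs. one person with all 36,000 specks:
    print(36_000 * per_person_disutility(1))      # ~3.6e-08  (spread out)
    print(per_person_disutility(36_000))          # ~1.2e-03  (concentrated: worse)

    # 1,000 people at a million specks/sec for an hour vs. one person at a billion/sec:
    print(1_000 * per_person_disutility(3.6e9))   # ~999.4    (spread out: worse)
    print(per_person_disutility(3.6e12))          # ~1.0      (concentrated)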

Comment author: Psy-Kosh 30 October 2007 06:24:01AM 10 points [-]

Oh, just had a thought. A less extreme yet quite related real world situation/question would be this: What is appropriate punishment for spammers?

Yes, I understand there're a few additional issues here, that would make it more analogous to, say, if the potential torturee was planning on deliberately causing all those people a DSE (Dust Speck Event)

But still, the spammer issue gives us a more concrete version, involving quantities that don't make our brains explode, so considering that may help work out the principles by which these sorts of questions can be dealt with.

Comment author: Jonathan_El-Bizri 30 October 2007 06:51:19AM 1 point [-]

The problem with spammers isn't that they cause a single dust speck event: it's that they cause multiple dust speck events, repeatedly, to individuals in the population in question. It's also a 'tragedy of the commons' question, since there is more than one spammer.

To respond to your question: What is appropriate punishment for spammers? I am sad to conclude that until Aubrey DeGray manages to conquer human mortality, or the singularity occurs, there is no suitable punishment for spammers.

After either of those, however, I would propose unblocking everyone's toilets and/or triple shifts as a Fry's Electronics floor lackey until the universal heat death, unless you have even >less< interesting suggestions.

Comment author: Pete_Carlton 30 October 2007 06:52:08AM 11 points [-]

If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects - would you do it? I certainly would. In fact, I would probably make the trade even if it were 2 or 3 times longer-lasting and of the same intensity. But something doesn't make sense now... am I saying I would gladly double or triple the pain I feel over my whole life?

The upshot is that there are some very nonlinear phenomena involved with calculating amounts of suffering, as Psy-Kosh and others have pointed out. You may indeed move along one coordinate in "suffering-space" by 3^^^3 units, but it isn't just absolute magnitude that's relevant. That is, you cannot recapitulate the "effect" of fifty years of torturing with isolated dust specks. As the responses here make clear, we do not simply map magnitudes in suffering space to moral relevance, but instead we consider the actual locations and contours. (Compare: you decide to go for a 10-mile hike. But your enjoyment of the hike depends more on where you go, than the distance traveled.)

Comment author: JoeSchmoe 12 September 2009 07:44:13AM 8 points [-]

"If you could take all the pain and discomfort you will ever feel in your life, and compress it into a 12-hour interval, so you really feel ALL of it right then, and then after the 12 hours are up you have no ill effects - would you do it? I certainly would.""

Hubris. You don't know, can't know, how that pain would/could be instrumental in processing external stimuli in ways that enable you to make better decisions.

"The sort of pain that builds character, as they say".

The concept of processing 'pain' in all its forms is rooted very deep in humanity -- get rid of it entirely (as opposed to modulating it as we currently do), and you run a strong risk of throwing the baby out with the bathwater, especially if you then have an assurance that your life will have no pain going forward. There's a strong argument to be made for deference to traditional human experience in the face of the unknown.

Comment author: James_Bach 30 October 2007 07:30:57AM 7 points [-]

Yes the answer is obvious. The answer is that this question obviously does not yet have meaning. It's like an ink blot. Any meaning a person might think it has is completely inside his own mind. Is the inkblot a bunny? Is the inkblot a Grateful Dead concert? The right answer is not merely unknown, because there is no possible right answer.

A serious person -- one who takes moral dilemmas seriously, anyway -- must learn more before proceeding.

The question is an inkblot because too many crucial variables have been left unspecified. For instance, in order for this to be an interesting moral dilemma I need to know that it is a situation that is physically possible, or else analogous to something that is possible. Otherwise, I can't know what other laws of physics or logic apply or don't apply, and therefore can't make an assessment. I need to know what my position is in this universe. I need to know why this power has been invested in me. I need to know the nature of the torture and who the person is who will be tortured. I need to consider such factors as what the torture may mean to other people who are aware of it (such as the people doing the torture). I need to know something about the costs and benefits involved. Will the person being tortured *know* they are being tortured? Or can it be arranged that they are born into the torture and consider it a normal part of their life? Will the person being tortured have *volunteered* to have been tortured? Will the dust motes have peppered the eyes of all those people anyway? Will the torture have happened anyway? Will choosing torture save other people from being tortured?

It would seem that torture is bad. On the other hand, just being alive is a form of torture. Each of us has a Sword of Damocles hanging over us. It's called mortality. Some people consider it torture when I keep telling them they haven't finished asking their question...

Comment author: douglas 30 October 2007 07:45:40AM 0 points [-]

The non-linear nature of 'qualia' and the difficulty of assigning a utility function to such things as 'minor annoyance' have been noted before. It seems to some insoluble. One solution, presented by Dennett in 'Consciousness Explained', is to suggest that there is no such thing as qualia or subjective experience; there are only objective facts. As Searle calls it, 'consciousness denied'. With this approach it would (at least theoretically) be possible to objectively determine the answer to this question based on something like the number of ergs needed to fire the neurons that would represent the outcomes of the two different choices. The idea of which would be the more or less pleasant experience is therefore not relevant, as there is no subjective experience to be had in the first place. Of course I'm being sloppy here - the word 'choice' would have to be re-defined to reflect that each action is determined by the physical configuration of the brain, and that the chooser is in fact a fictional construct of that physical configuration. Otherwise, I admit that 3^^^3 people is not something I can easily contemplate, and that clouds my ability to think of an answer to this question.

Comment author: Psy-Kosh 30 October 2007 07:49:17AM 0 points [-]

Uh... If there's no such thing as qualia, there's no such thing as actual suffering, unless I misunderstand your description of Dennett's views.

But if my understanding is correct, and those views were correct, then wouldn't the answer be "nobody actually exists to care one way or another?" (Or am I sorely mistaken in interpreting that view?)

Comment author: James_Bach 30 October 2007 07:54:48AM 0 points [-]

Regarding your example of income disparity: I might rather be born into a system with very unequal incomes, if, as in America (in my personal and biased opinion), there is a reasonable chance of upping my income through persistence and pluck. I mean hey, that guy with all that money has to spend it somewhere-- perhaps he'll shop at my superstore!

But wait, what does wealth mean? In the case where everyone has the same income, where are they spending their money? Are they all buying the same things? Is this a totalitarian state? An economy without disparity is pretty disturbing to contemplate, because it means no one is making an effort to do better than other people, or else no one *can* do better. Money is not being concentrated or funnelled anywhere. Sounds like a pretty moribund economy.

If it's a situation where everyone always gets what they want and need, then wealth will have lost its conventional meaning, and no one will care whether one person is rich and another one isn't. What they will care about is the success of their God, their sports teams, and their children.

I guess what I'm saying is that there may be no interesting way to simplify interesting moral dilemmas without destroying the dilemma or rendering it irrelevant to natural dilemmas.

Comment author: J_Thomas 30 October 2007 08:21:01AM 7 points [-]

If even one in a hundred billion of the people is driving and has an accident because of the dust speck and gets killed, that's a tremendous number of deaths. If one in a hundred quadrillion of them survives the accident but is mangled and spends the next 50 years in pain, that's also a tremendous amount of torture.

If one in a hundred decillion of them is working in a nuclear power plant and the dust speck makes him have a nuclear accident....

We just aren't designed to think in terms of 3^^^3. It's too big. We don't habitually think much about one-in-a-million chances, much less one in a hundred decillion. But a hundred decillion is a very small number compared to 3^^^3.

Comment author: g 30 October 2007 10:06:46AM 1 point [-]

Douglas and Psy-Kosh: Dennett explicitly says that in denying that there are such things as qualia he is not denying the existence of conscious experience. Of course, Douglas may think that Dennett is lying or doesn't understand his own position as well as Douglas does.

James Bach and J Thomas: I think Eliezer is asking us to assume that there are no knock-on effects in either the torture or the dust-speck scenario, and the usual assumption in these "which economy would you rather have?" questions is that the numbers provided represent the situation *after* all parties concerned have exerted whatever effort they can. (So, e.g., if almost everyone is described as destitute, then it must be a society in which escaping destitution by hard work is very difficult.) Of course I agree with both of you that there's danger in this sort of simplification.

Comment author: Sebastian_Hagen2 30 October 2007 10:26:46AM 7 points [-]

J Thomas: You're neglecting that there might be some positive side-effects for a small fraction of the people affected by the dust specks; in fact, there is some precedent for this. The resulting average effect is hard to estimate, but (considering that dust specks seem mostly to add entropy to the thought processes of the affected persons) it would likely still be negative.

Copying g's assumption that higher-order effects should be neglected, I'd take the torture. For each of the 3^^^3 persons, the choice looks as follows:

1.) A 1/(3^^^3) chance of being tortured for 50 years.
2.) A 1 chance of getting a dust speck.

I'd definitely prefer the former. That probability is so close to zero that it vastly outweighs the differences in disutility.
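
That comparison can be sketched numerically, with two caveats: 3^^^3 cannot be represented at all, so the vastly smaller 3^^4 = 3^(3^27) stands in for it (which only weakens the case), and the two disutility figures are arbitrary placeholders:

    import math

    # log10 of 3^^4 = 3^(3^27); the exponent 3^27 is small enough to compute with.
    log10_stand_in = (3 ** 27) * math.log10(3)      # ~3.64e12

    # Placeholder disutilities, arbitrary units: 50 years of torture vs one speck.
    log10_torture_disutility = 9.0                  # i.e. 10^9 "units"
    log10_speck_disutility = -3.0                   # i.e. 10^-3 "units"

    # Per person: expected disutility of a 1/(3^^4) chance of torture...
    log10_expected_torture = log10_torture_disutility - log10_stand_in
    print(log10_expected_torture)                   # ~ -3.64e12
    # ...versus a certain speck.
    print(log10_speck_disutility)                   # -3.0
    # The certain speck carries more expected disutility by trillions of orders
    # of magnitude, and using the real 3^^^3 would only widen the gap.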

Comment author: Rick_Smith 30 October 2007 10:49:23AM 2 points [-]

Hmm, tricky one.

Do I get to pick the person who has to be tortured?

Comment author: Tomhs2 30 October 2007 11:03:53AM 1 point [-]

As I read this I knew my answer would be the dust specks. Since then I have been mentally evaluating various methods for deciding on the ethics of the situation and have chosen the one that makes me feel better about the answer I instinctively chose.

I can tell you this, though: I reckon I personally would accept at most five minutes of torture to stop the dust speck event from happening. So if the person threatened with 50 years of torture were me, I'd choose the dust specks.

Comment author: Benquo 30 October 2007 11:49:14AM 5 points [-]

What if it were a repeatable choice?

Suppose you choose dust specks, say, 1,000,000,000 times. That's a considerable amount of torture inflicted on 3^^^3 people. I suspect that you could find a number of repetitions equivalent to torturing each of those 3^^^3 people for 50 years, and that number would be smaller than 3^^^3. In other words, choose the dust specks enough times, and more people would effectively be tortured for longer than if you chose the 50-year torture an equivalent number of times.

If that math is correct, I'd have to go with the torture, not the dust specks.
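
Here is a minimal sketch of that aggregation argument under a purely additive model; both harm constants are invented for illustration:

    # Sketch of the repeated-choice argument, assuming harms simply add up.
    SPECK_HARM = 1.0            # harm of one dust speck, arbitrary units
    TORTURE_HARM = 1.0e24       # pretend 50 years of torture equals 10^24 specks' worth
    repetitions = 10 ** 24      # repeat the SPECKS choice this many times

    # Every one of the 3^^^3 people has now accumulated a torture's worth of specks...
    harm_per_person_if_specks = repetitions * SPECK_HARM
    print(harm_per_person_if_specks >= TORTURE_HARM)    # True
    # ...whereas choosing TORTURE every time would have tortured "only" 10^24
    # people, a count that is effectively zero next to 3^^^3.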

Comment author: Zubon2 30 October 2007 12:30:32PM 7 points [-]

Kyle wins.

Absent using this to guarantee the nigh-endless survival of the species, my math suggests that 3^^^3 beats anything. The problem is that the speck rounds down to 0 for me.

There is some minimum threshold below which it just does not count, like saying, "What if we exposed 3^^^3 people to radiation equivalent to standing in front of a microwave for 10 seconds? Would that be worse than nuking a few cities?" I suppose there must be someone in 3^^^3 who is marginally close enough to cancer for that to matter, but no, that rounds down to 0. For the speck, I am going to blink in the next few seconds anyway.

That in no way addresses the intent of the question, since we can just increase it to the minimum that does not round down. Being poked with a blunt stick? Still hard, since I think every human being would take one stick over some poor soul being tortured. Do I really get to be the moral agent for 3^^^3 people?

As others have said, our moral intuitions do not work with 3^^^3.

Comment author: Robin_Hanson2 30 October 2007 12:30:53PM 5 points [-]

Wow. The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too. But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet. What does that say about our abilities in moral reasoning?

Comment author: Caledonian2 30 October 2007 12:44:18PM -1 points [-]

Given that human brains are known not to be able to intuitively process even moderately large numbers, I'd say the question can't meaningfully be asked - our ethical modules simply can't process it. 3^^^3 is too large - WAY too large.

Comment author: jason_braswell 30 October 2007 01:37:10PM 3 points [-]

I'm unconvinced that the number is too large for us to think clearly. Though it takes some machinery, humans reason about infinite quantities all the time and arrive at meaningful conclusions.

My intuitions strongly favor the dust speck scenario. Even if we forget 3^^^3 and just say that an infinite number of people will experience the speck, I'd still favor it over the torture.

Comment author: cw 30 October 2007 01:38:23PM 8 points [-]

Robin is absolutely wrong, because different instances of human suffering cannot be added together in any meaningful way. The cumulative effect when placed on one person is far greater than the sum of many tiny nuisances experienced by many. Whereas small irritants such as a dust mote do not cause "suffering" in any standard sense of the word, the sum total of those motes concentrated at one time and placed into one person's eye could cause serious injury or even blindness. Dispersing the dust (either over time or across many people) mitigates the effect. If the dispersion is sufficient, there is actually no suffering at all. To extend the example, you could divide the dust mote into even smaller particles, until each individual would not even be aware of the impact.

So the question becomes, would you rather live in a world with little or no suffering (caused by this particular event) or a world where one person suffers badly, and those around him or her sit idly by, even though they reap very little or no benefit from the situation?

The notion of shifting human suffering onto one unlucky individual so that the rest of society can avoid minor inconveniences is morally reprehensible. That (I hope) is why no one has stood up and shouted yay for torture.

Comment author: Constant2 30 October 2007 01:42:00PM 4 points [-]

The obvious answer is TORTURE, all else equal, and I'm pretty sure this is obvious to Eliezer too.

That is the straightforward utilitarian answer, without any question. However, it is not the common intuition, and even if Eliezer agrees with you he is evidently aware that the common intuition disagrees, because otherwise he would not bother blogging it. It's the contradiction between intuition and philosophical conclusion that makes it an interesting topic.

Comment author: scott_clark 30 October 2007 02:13:05PM 5 points [-]

Robin's answer hinges on "all else being equal." That condition can tie up a lot of loose ends, it smooths over plenty of rough patches. But those ends unravel pretty quickly once you start to consider all the ways in which everything else is inherently unequal. I happen to think the dust speck is a 0 on the disutility meter, myself, and 3^^^3*0 disutilities = 0 disutility.
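
As a sketch of that position under simple additive aggregation (the zero value is the contested premise, not a fact):

    SPECK_DISUTILITY = 0.0      # the contested premise: one barely-noticed speck is zero harm

    def total_disutility(n_people, per_person=SPECK_DISUTILITY):
        # Additive aggregation: with a per-person harm of exactly zero,
        # no population size, however vast, produces any harm at all.
        return n_people * per_person

    print(total_disutility(10 ** 100))    # 0.0, and likewise for any stand-in for 3^^^3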

Comment author: Benoit_Essiambre 30 October 2007 02:26:33PM 2 points [-]

I believe that, ideally speaking, the best choice is the torture, but pragmatically, I think the dust speck answer can make more sense. Of course it is more intuitive morally, but I would go as far as saying that the utility can be higher for the dust specks situation (and thus our intuition is right). How? The problem is in this sentence: "If neither event is going to happen to you personally" - the truth is that in the real world, we can't rely on this statement. Even if it is promised to us or made into a law, this type of statement often won't hold up very long. Precedents have to be taken into account when we make a decision based on utility. If we let someone be tortured now, we are building a precedent, a tradition of letting people be tortured. This has a very low utility for people living in the affected society. This is well summarized in the saying "What goes around comes around".

If you take the strict idealistic situation described, the torture is the best choice. But if you instead deem the situation completely unrealistic and pick a similar one by simply not assigning 100% reliability to the sentence "If neither event is going to happen to you personally," the best choice can become the dust specks, depending on how likely you believe it is that a tradition of torture will be established. (And IMO traditions of torture and violence are the kind of thing that spreads easily, as they stimulate resentment and hatred in the groups that are more affected.) The torture situation has much risk of getting worse, but not the dust speck situation.

The scenario might have been different if torture was replaced by a kind of suffering that is not induced by humans. Say... an incredibly painful and long (but not contagious) illness.

Is it better to have the dust specks everywhere all the time or to have the existence of this illness once in history?

Comment author: Michael_G.R. 30 October 2007 02:42:08PM 6 points [-]

Robin, could you explain your reasoning. I'm curious.

Humans get barely noticeable "dust speck equivalent" events so often in their lives that the number of people in Eliezer's post is irrelevant; it's simply not going to change their lives, even if it's a gazillion lives, even with a number bigger than Eliezer's (even considering the "butterfly effect", you can't say if the dust speck is going to change them for the better or worse -- but with 50 years of torture, you know it's going to be for the worse).

Subjectively for these people, it's going to be lost in the static and probably won't even be remembered a few seconds after the event. Torture won't be lost in static, and it won't be forgotten (if survived).

The alternative to torture is so mild and inconsequential, even if applied to a mind-boggling number of people, that it's almost like asking: Would you rather torture that guy or not?

Comment author: Benquo 30 October 2007 02:52:12PM 0 points [-]

@Robin,

"But even though there are 26 comments here, and many of them probably know in their hearts torture is the right choice, no one but me has said so yet."

I thought that Sebastian Hagen and I had said it. Or do you think we gave weasel answers? Mine was only contingent on my math being correct, and I thought his was similarly clear.

Perhaps I was unclear in a different way. By asking if the choice was repeatable, I didn't mean to dodge the question; I meant to make it more vivid. Moral questions are asked in a situation where many people are making moral choices all the time. If dust-speck displeasure is additive, then we should evaluate our choices based on their potential aggregate effects.

Essentially, it's a same-ratio problem, like showing that 6:4::9:6, because 6x3=9x2 and 4x3=6x2. If the aggregate of dust-specking can ever be greater than the equivalent aggregate of torturing, then it is always greater.

Comment author: Michael_G.R. 30 October 2007 03:03:57PM 1 point [-]

Hmm, thinking some more about this, I can see another angle (not the suffering angle, but the "being prudent about unintended consequences" angle):

If you had the choice between very, very slightly changing the lives of a huge number of people or drastically changing the life of only one person, the prudent choice might be to change the life of only one person (as horrible as that change might be).

Still, with the dust speck we can't really know if the net final outcome will be negative or positive. It might distract people who are about to have genius ideas, but it might also change chains of events that would lead to bad things. Averaged over so many people, it's probably going to stay very close to neutral, positive or negative. The torture of one person might also look very close to neutral if averaged with the other 3^^^3 people, but we *know* that it's going to be negative. Hmm..

Comment author: Recovering_irrationalist 30 October 2007 03:23:11PM 2 points [-]

Would you prefer that one person be horribly tortured for fifty years without hope or rest, or that 3^^^3 people get dust specks in their eyes?

The square of the number of milliseconds in 50 years is about 10^21.

Would you rather that one person be tortured for a millisecond (with no ill effects afterward), or that 3^^^3/10^21 people get a dust speck per second for 50 centuries?

OK, so the utility/effect doesn't scale when you change the times. But even if each 1% of added dust/torture time made things ten times worse, when you reduce the dust-speckled population to reflect that, it's still countless universes' worth of people.

Comment author: Bob3 30 October 2007 03:27:29PM 0 points [-]

I'm with Tomhs. The question has less value as a moral dilemma than as an opportunity to recognize how we think when we "know" the answer. I intentionally did not read the comments last night so I could examine my own thought process, and tried very hard to hold an open mind (my instinct was dust). It's been a useful and interesting experience. Much better than the brain teasers, which I can generally get because I'm on heightened alert when reading El's posts. Here being on alert simply allowed me to try to avoid immediately giving in to my bias.

Comment author: Vladimir_Nesov 30 October 2007 03:36:52PM 3 points [-]

Averaging utility works only when the law of large numbers starts to play a role. It's a good general policy, as stuff subject to it happens all the time, enough to give sensible results over the human/civilization lifespan. So, if Eliezer's experiment is a singular event and similar events don't happen frequently enough, the answer is 3^^^3 specks. Otherwise, torture (as in this case, similar frequent-enough choices would lead to a tempest of specks in anyone's eye which is about 3^^^3 times worse than 50 years of torture, for each and every one of them).

Comment author: Robin_Hanson2 30 October 2007 03:39:17PM 0 points [-]

Benquo, your first answer seems equivocal, and so did Sebastian's on a first reading, but now I see that it was not.

Comment author: James_D._Miller 30 October 2007 03:39:26PM 13 points [-]

Torture,

Consider three possibilities:

(a) A dust speck hits you with probability one.
(b) You face an additional probability 1/(3^^^3) of being tortured for 50 years.
(c) You must blink your eyes for a fraction of a second, just long enough to prevent a dust speck from hitting you in the eye.

Most people would pick (c) over (a). Yet 1/(3^^^3) is such a small number that by blinking your eyes one more time than you normally would, you increase your chances of being captured by a sadist and tortured for 50 years by more than 1/(3^^^3). Thus, (b) must be better than (c). Consequently, most people should prefer (b) to (a).
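
A rough sketch of that arithmetic, working in logarithms because 3^^^3 cannot be represented directly; the blink-risk figure is a made-up placeholder, and 3^^4 serves as a hugely generous stand-in lower bound for 3^^^3:

    import math

    # Placeholder guess at the probability that one extra blink, over a lifetime,
    # leads via some chain of events to 50 years of torture. Absurdly small:
    log10_blink_risk = -40.0

    # Lower bound on log10(3^^^3), using the much smaller 3^^4 = 3^(3^27):
    log10_3_up4 = (3 ** 27) * math.log10(3)      # ~3.64e12

    # (b) adds 1/3^^^3 to your torture risk; (c) adds at least the blink risk.
    print(log10_blink_risk > -log10_3_up4)       # True: even a 10^-40 risk dwarfs 1/3^^^3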

Comment author: Mike_Kenny 30 October 2007 03:48:46PM 0 points [-]

There isn't any right answer. Answers to what is good or bad are a matter of taste, to borrow from Nietzsche.

To me the example has a messianic quality: one person suffers immensely to save others from suffering. Does the sense that there is a 'right' answer come from a Judeo-Christian sense of what is appropriate? Is this a sort of bias in line with biases towards expecting facts to conform to a story?

Also, this example suggests to me that the value pluralism of Cowen makes much more sense than some reductive approach that seeks to create one objective measure of good and bad. One person might seek to reduce instances of illness, another to maximize reported happiness, another to maximize a personal sense of beauty. IMO, there isn't a judge who will decide who is right and who is wrong, and the decisive factor is who can marshal the power to bring about his will, as unsavory as that might be (unless your side is winning).

Comment author: Tom_Crispin 30 October 2007 04:34:13PM 3 points [-]

Why is this a serious question? Given the physical unreality of the situation - the putative existence of 3^^^3 humans and the ability to actually create the option in the physical universe - why is this question taken seriously, while something like "is it better to kill Santa Claus or the Easter Bunny?" is considered silly?

Comment author: Jef_Allbright 30 October 2007 04:36:06PM 0 points [-]

Fascinating, and scary, the extent to which we adhere to established models of moral reasoning despite the obvious inconsistencies. Someone here pointed out that the problem wasn't sufficiently defined, but then proceeded to offer examples of objective factors that would appear necessary to the evaluation of a consequentialist solution. Robin seized upon the "obvious" answer that any significant amount of discomfort, over such a vast population, would easily dominate, with any conceivable scaling factor, the utilitarian value of the torture of a single individual. But I think he took the problem statement too literally; the discomfort of the dust mote was intended to be vanishingly small, over a vast population, thus keeping the problem interesting rather than "obvious."

But most interesting to me is that no one pointed out that fundamentally, the assessed goodness of any act is a function of the values (effective, but not necessarily explicit) of the assessor. And assessed morality is a function of group agreement on the "goodness" of an act, promoting the increasingly coherent values of the group over increasing scope of expected consequences.

Now the values of any agent will necessarily be rooted in an evolutionary branch of reality, and this is the basis for increasing agreement as we move toward the common root, but this evolving agreement in principle on the *direction* of increasing morality should never be considered to point to any particular *destination* of goodness or morality in any objective sense, for that way lies the "repugnant conclusion" and other paradoxes of utilitarianism.

Obvious? Not at all, for while we can increasingly converge on principles promoting "what works" to promote our increasingly coherent values over increasing scope, our expression of those values will increasingly diverge.

Comment author: George_Dvorsky 30 October 2007 04:48:57PM 6 points [-]

The hardships experienced by a man tortured for 50 years cannot compare to a trivial experience massively shared by a large number of individuals -- even on the scale that Eli describes. There is no accumulation of experiences, and it cannot be conflated into a larger meta dust-in-the-eye experience; it has to be analyzed as a series of discrete experiences.

As for larger social implications, the negative consequence of so many dust-specked eyes would be negligible.

Comment author: Eliezer_Yudkowsky 30 October 2007 04:51:42PM 6 points [-]

Wow. People sure are coming up with interesting ways of avoiding the question.

Comment author: Jef_Allbright 30 October 2007 05:07:14PM 1 point [-]

Eliezer wrote "Wow. People sure are coming up with interesting ways of avoiding the question."

I posted earlier on what I consider the more interesting question of how to frame the problem in order to best approach a solution.

If I were to simply provide my "answer" to the problem, with the assumption that the dust in the eyes is likewise limited to 50 years, then I would argue that the dust is to be preferred to the torture, not on a utilitarian basis of relative weights of the consequences as specified, but on the bigger-picture view that my preferred future is one in which torture is abhorrent in principle (noting that this entails significant indirect consequences not specified in the problem statement.)

Comment author: g 30 October 2007 05:10:07PM 1 point [-]

Eliezer, are you suggesting that declining to make up one's mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?

As for your link to the "motivated stopping" article, I don't quite see why declining to decide on this is any more "stopping" than choosing a definite one of the options. Or are you suggesting that it's an instance of motivated continuation? Perhaps it is, but (as you said in that article) the problem with excessive "continuation" is that it can waste resources and miss opportunities. I don't see either of those being an issue here, unless you're actually threatening to do one of those two things -- in which case I declare you a Pascal's mugger and take no notice.

Comment author: Brandon_Reinhart 30 October 2007 05:15:00PM 2 points [-]

What happens if there aren't 3^^^3 instanced people to get dust specks? Do those specks carry over such that person #1 gets a 2nd speck and so on? If so, you would elect to have the person tortured for 50 years, for surely the alternative is to fill our universe with dust and annihilate all cultures and life.

Comment author: Neel_Krishnaswami 30 October 2007 05:28:00PM -1 points [-]

Robin, of course it's not obvious. It's only an obvious conclusion if the global utility function from the dust specks is an additive function of the individual utilities, and since we know that utility functions must be bounded to avoid Dutch books, we know that the global utility function cannot possibly be additive -- otherwise you could break the bound by choosing a large enough number of people (say, 3^^^3).


From a more metamathematical perspective, you can also question whether 3^^^3 is a number at all. It's perfectly straightforward to construct a perfectly consistent mathematics that rejects the axiom of infinity. Besides the philosophical justification for ultrafinitism (i.e., infinite sets don't really exist), these theories correspond to various notions of bounded computation (such as logspace or polytime). This is a natural requirement if we want moral judgements to be made quickly enough to be relevant to decision making -- and that rules out seriously computing with numbers like 3^^^3.

Comment author: Eliezer_Yudkowsky 30 October 2007 05:32:00PM 6 points [-]

Eliezer, are you suggesting that declining to make up one's mind in the face of a question that (1) we have excellent reason to mistrust our judgement about and (2) we have no actual need to have an answer to is somehow disreputable?

Yes, I am.

Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass.

Regarding (2), whenever a tester finds a user input that crashes your program, it is always bad - it reveals a flaw in the code - even if it's not a user input that would plausibly occur; you're still supposed to fix it. "Would you kill Santa Claus or the Easter Bunny?" is an important question if and only if you have trouble deciding. I'd definitely kill the Easter Bunny, by the way, so I don't think it's an important question.

Followup dilemmas:

For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?

For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

Comment author: Kaj_Sotala 30 October 2007 05:40:00PM 4 points [-]

Fascinating question. No matter how small the negative utility in the dust speck, multiplying it with a number such as 3^^^3 will make it way worse than torture. Yet I find the obvious answer to be the dust speck one, for reasons similar to what others have pointed out - the negative utility rounds down to zero.

But that doesn't really solve the problem, for what if the harm in question was slightly larger? At what point does it cease rounding down? I have no meaningful criteria to give for that one. Obviously there must be a point where it does cease doing so, for it certainly is much better to torture one person for 50 years than 3^^^3 people for 49 years.

It is quite counterintuitive, but I suppose I should choose the torture option. My other alternatives would be to reject utilitarianism (but I have no better substitutes for it) or to modify my ethical system so that it solves this problem, but I currently cannot come up with an unproblematic way of doing so.

Still, I can't quite bring myself to do so. I choose specks, and admit that my ethical system is not consistent yet. (Not that it would be a surprise - I've noticed that all my attempts at building entirely consistent ethical systems tend to cause unwanted results at one point or the other.)


For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?

A single penny to avoid one dust speck, or to avoid 3^^^3 dust specks? No to the first one. For the second one, it depends on how often they occurred - if I somehow could live for 3^^^3 years, getting one dust speck in my eye per year, then no. If they actually inconvenienced me, then yes - a penny is just a penny.

Comment author: Jef_Allbright 30 October 2007 05:48:00PM 0 points [-]

"Regarding (1), we pretty much always have excellent reason to mistrust our judgments, and then we have to choose anyway; inaction is also a choice. The null plan is a plan. As Russell and Norvig put it, refusing to act is like refusing to allow time to pass."

This goes to the crux of the matter: to the extent that the future is uncertain, it is better to decide based on principles (representing wisdom encoded via evolutionary processes over time) rather than on the flat basis of expected consequences.

Comment author: Zubon 30 October 2007 05:53:00PM 1 point [-]

Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?

Is the question significantly changed by this rephrasing? It makes SPECKS the default choice, and it changes 3^^^3 to "all." Are we better able to process "all" than 3^^^3, or can we really process "all" at all? Does it change your answer if we switch the default?

Would you force every qualia-experiencing being who will ever exist to blink one additional time to save one person from being horribly tortured for fifty years without hope or rest?

Comment author: Brandon_Reinhart 30 October 2007 05:59:00PM 0 points [-]

> For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

If you mean would I condemn all conscious beings to a googolplex of torture to avoid universal annihilation from a big "dust crunch" my answer is still probably yes. The alternative is universal doom. At least the tortured masses might have some small chance of finding a solution to their problem at some point. Or at least a googolplex years might pass leaving some future civilization free to prosper. The dust is absolute doom for all potential futures.

Of course, I'm assuming that 3^^^3 conscious beings are unlikely to ever exist and so that dust would be applied over and over to the same people causing the universe to be filled with dust. Maybe this isn't how the mechanics of the problem work.

Comment author: Brandon_Reinhart 30 October 2007 06:02:00PM 1 point [-]

> Would you condemn one person to be horribly tortured for fifty years without hope or rest, to save every qualia-experiencing being who will ever exist one blink?

That's assuming you're interpreting the question correctly. That you aren't dealing with an evil genie.

Comment author: Zeus 30 October 2007 06:04:00PM 1 point [-]

You never said we couldn't choose who specifically gets tortured, so I'm assuming we can make that selection. Given that, the once agonizingly difficult choice is made trivially simple. I would choose 50 years of torture for the person who made me make this decision.

Comment author: Kat3 30 October 2007 06:20:00PM 1 point [-]

Since I chose the specks -- no, I probably wouldn't pay a penny; avoiding the speck is not even worth the effort to decide to pay the penny or not. I would barely notice it; it's too insignificant to be worth paying even a tiny sum to avoid.

I suppose I too am "rounding down to zero"; a more significant harm would result in a different answer.

Comment author: Michael_G.R. 30 October 2007 06:36:00PM 0 points [-]

"For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?"

To avoid *all* the dust specks, yeah, I'd pay a penny and more. Not a penny per speck, though ;)

The reason is to avoid having to deal with the "unintended consequences" of being responsible for that very very small change over such a large number of people. It's bound to have some significant indirect consequences, both positive and negative, on the far edges of the bell curve... the net impact could be negative, and a penny is little to pay to avoid responsibility for that possibility.

Comment author: Marcello 30 October 2007 06:52:00PM 5 points [-]

The first thing I thought when I read this question was that the dust specks were obviously preferable. Then I remembered that my intuition likes to round 3^^^3 down to something around twenty. Obviously, the dust specks are preferable to the torture for any number at all that I have any sort of intuitive grasp over.

But I found an argument that pretty much convinced me that the torture was the correct answer.

Suppose that instead of making this choice once, you will be faced with the same choice 10^17 times over the next fifty years (this number was chosen so that it comes to more than a million choices per second). If you have a problem imagining the ability to make more than a million choices per second, imagine that you have a dial in front of you which goes from zero to 10^17. If you set the dial to n, then 10^17-n people will get tortured starting now for the next fifty years, and n dust specks will fly into the eyes of each of 3^^^3 people during the next fifty years.

The dial starts at zero. For each unit that you turn the dial up, you are saving one person from being tortured by putting a dust speck in the eyes of each of the 3^^^3 people, the exact choice presented.

So, if you thought the correct answer was the dust specks, you'd turn the dial from zero to one right? And then you'd turn it from one to two, right?

But, if you turned the dial all the way up to 10^17, you'd effectively be rubbing the corneas of the 3^^^3 people with sandpaper for fifty years (of course, their corneas would wear through, and their eyes would come apart under that sort of abrasion. It would probably take less than a million dust specks per second to do that, but let's be conservative and make them smaller dust specks.) Even if you don't count the pain involved, they'd be blind forever. How many people would you blind in order to save one person from being tortured for fifty years? You probably wouldn't blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you have saved from torture.


So if your answer was the dust specks, you'd either end up turning the knob all the way up to 10^17, or you'd have to stop somewhere, because there's no escaping that in this scenario, there's a real dial in front of you, and you have to turn it to some n between 0 and 10^17.


If you left the dial on, say, 10^10, I'd ask "Tell me, what is so special about the difference between hitting someone with 10^10 dust specks versus hitting them with 10^10+1, that wasn't special about the difference between hitting them with zero versus one?" If anything, the more dust specks there are, the less of a difference one more would make.

There are easily 10^17 continuous gradations between no inconvenience and having one's eyes turned to pulp, and I don't really see what would make any of them terribly different from each other. Yet n=0 is obviously preferable to n=10^17, and so, each individual increment of n must be bad.
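
For concreteness, here is a minimal sketch of the dial; the speck counts are the ones described above, and the 3^^^3 population is left out of the arithmetic entirely, since it cannot be represented:

    DIAL_MAX = 10 ** 17

    def consequences(n):
        """Setting the dial to n: how many people are tortured, and how many
        specks each of the 3^^^3 people receives over the fifty years."""
        people_tortured_50_years = DIAL_MAX - n
        specks_per_person = n
        return people_tortured_50_years, specks_per_person

    # Each single increment of the dial is exactly the original choice:
    # one fewer torture victim, one more speck for every one of the 3^^^3 people.
    print(consequences(0))           # (100000000000000000, 0)
    print(consequences(1))           # (99999999999999999, 1)
    print(consequences(DIAL_MAX))    # (0, 100000000000000000) -- corneas ground away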

Comment author: Tom_Crispin 30 October 2007 06:57:00PM 1 point [-]

"... whenever a tester finds a user input that crashes your program, it is always bad - it reveals a flaw in the code - even if it's not a user input that would plausibly occur; you're still supposed to fix it. "Would you kill Santa Claus or the Easter Bunny?" is an important question if and only if you have trouble deciding. I'd definitely kill the Easter Bunny, by the way, so I don't think it's an important question."

I write code for a living; I do not claim that it crashes the program. Rather the answer is irrelevant as I don't think that the question is important or insightful regarding our moral judgements since it lacks physical plausibility. BTW, since one can think of God as "Santa Claus for grown-ups", the Easter Bunny lives.

Comment author: Eliezer_Yudkowsky 30 October 2007 06:58:00PM 2 points [-]

By "pay a penny to avoid the dust specks" I meant "avoid all dust specks", not just one dust speck. Obviously for one speck I'd rather have the penny.

Comment author: Recovering_irrationalist 30 October 2007 07:11:00PM 0 points [-]

what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

To reduce suffering in general rather than your own (it would be tough to live with), bring on the coddling grinders. (10^10^100)^2 is a joke next to 3^^^3.

Having said that, it depends on the qualia-experiencing population of all existence compared to the numbers affected, and whether you change existing lives or make new ones. If only a few googolplex-squared people-years exist anyway, I vote dust.

I also vote to kill the bunny.

Comment author: Sebastian_Hagen2 30 October 2007 07:27:00PM 0 points [-]

For those who would pick TORTURE, what about Vassar's universes of agonium? Say a googolplex-persons' worth of agonium for a googolplex years.

Torture, again. From the perspective of each affected individual, the choice becomes:

1.) A (10**(10**100))/(3^^^3) chance of being tortured for (10**(10**100)) years.
2.) A 1 chance of a dust speck.
(or very slightly different numbers if the (10**(10**100)) people exist in addition to the 3^^^3 people; the difference is too small to be noticeable)

I'd still take the former. (10**(10**100))/(3^^^3) is still so close to zero that there's no way I can tell the difference without getting a larger universe for storing my memory first.

Comment author: g 30 October 2007 07:44:00PM 5 points [-]

Eliezer, it's the combination of (1) totally untrustworthy brain machinery and (2) no immediate need to make a choice that I'm suggesting means that withholding judgement is reasonable. I completely agree that you've found a bug; congratulations, you may file a bug report and add it to the many other bug reports already on file; but how do you get from there to the conclusion that the right thing to do is to make a choice between these two options?

When I read the question, I didn't go into a coma or become psychotic. I didn't even join a crazy religion or start beating my wife. If for some reason I actually had to make such a choice, I still wouldn't go nuts. So I think analogies with crashing software are inappropriate. (Again, I don't deny that there's a valid bug report. I'm just questioning its severity.)

So what we have here is an architectural problem with the software, which produces a failure mode in which input radically different from any that will ever actually be supplied provokes a small user-interface glitch. It would be nice to fix it, but it doesn't strike me as unreasonable if it doesn't make it through some people's triage.

(Santa Claus versus the Easter Bunny is much nearer to being a realistic question, and so far as I can tell there isn't anything in my mental machinery that fundamentally isn't equipped to consider it. Kill the bunny.)

Comment author: Gordon_Worley 30 October 2007 08:05:00PM 0 points [-]

Let's suppose we measure pain in pain points (pp). Any event which can cause pain is given a value in [0, 1], with 0 being no pain and 1 being the maximum amount of pain perceivable. To calculate the pp of an event, assign a value to the pain, say p, and then multiply it by the number of people who will experience the pain, n. So for the torture case, assume p = 1, then:

torture: 1*1 = 1 pp

For the speck-in-the-eye case, suppose it causes the least amount of pain greater than no pain possible. Denote this by e, and assume that each dust speck causes e amount of pain. Then if e < 1/3^^^3

speck: 3^^^3 * e < 1 pp

and if e > 1/3^^^3

speck: 3^^^3 * e > 1 pp

So assuming our moral calculus is to always choose whichever option generates the least pp, we need only ask if e is greater than or less than 1/n.

If you've been paying attention, I now have an out to give no answer: we don't know what e is, so I can't decide (at least not based on pp). But I'll go ahead and wager a guess. Since 1/3^^^3 is very small, I think that most likely any pain-sensing system of any present or future intelligence will have e > 1/3^^^3, so I must choose torture, because torture costs 1 pp but the specks cost more than 1 pp.

This doesn't feel like what, as a human, I would expect the answer to be. I want to say don't torture the poor guy and all the rest of us will suffer the spec so he need not be tortured. But I suspect this is human inability to deal with large numbers, because I think about how I would be willing to accept a spec so the guy wouldn't be torture since e pp < 1 pp, and every other individual, supposing they were pp-fearing people, would make the same short-sighted choice. But the net cost would be to distribute more pain with the specs than the torture ever would.

Weird how the human mind can find a logical answer and still expect a nonlogical answer to be the truth.
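
In code, a minimal sketch of this pp bookkeeping (variable names and the stand-in numbers are mine; 3^^^3 itself obviously can't be represented, so a hypothetical huge-but-finite N stands in for it, and the decision only depends on whether e exceeds 1/n):

    # Sketch of the pain-point (pp) calculus described above.
    # pp of an event = (pain per person) * (number of people affected).
    def pp(pain_per_person, num_people):
        return pain_per_person * num_people

    N = 10**100       # hypothetical stand-in for 3^^^3
    e = 1e-12         # hypothetical smallest noticeable pain

    torture_pp = pp(1.0, 1)     # one person, maximal pain
    speck_pp   = pp(e, N)       # N people, pain e each

    # Choose whichever option generates the least pp;
    # equivalently, torture wins only if e > 1/N.
    choice = "TORTURE" if speck_pp > torture_pp else "SPECKS"
    print(choice)   # with these stand-ins: TORTURE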

Comment author: Tom_McCabe 30 October 2007 08:07:00PM 0 points [-]

"Wow. People sure are coming up with interesting ways of avoiding the question."

My response was a real request for information: if this is a pure utility test, I would select the dust specks. If this were done to a complex, functioning society, adding dust specks into everyone's eyes would disrupt a great deal of important stuff: someone would almost certainly get killed in an accident due to the distraction, even on a planet with only 10^15 people and not 3^^^^3.

Comment author: Neel_Krishnaswami 30 October 2007 08:19:00PM 0 points [-]

Eliezer, in your response to g, are you suggesting that we should strive to ensure that our probability distribution over possible beliefs sums to 1? If so, I disagree: I don't think this can be considered a plausible requirement for rationality. When you have no information about the distribution, you ought to assign probabilities uniformly, according to Laplace's principle of indifference. But the principle of indifference only works for distributions over finite sets. So for infinite sets you have to make an arbitrary choice of distribution, which violates indifference.

Comment author: Tom_McCabe 30 October 2007 08:21:00PM 4 points [-]

"For those who would pick SPECKS, would you pay a single penny to avoid the dust specks?"

Yes. Note that, for the obvious next question, I cannot think of an amount of money large enough such that I would rather keep it than use it to save a person from torture. Assuming that this is post-Singularity money which I cannot spend on other life-saving or torture-stopping efforts.

"You probably wouldn't blind everyone on earth to save that one person from being tortured, and yet, there are (3^^^3)/(10^17) >> 7*10^9 people being blinded for each person you have saved from torture."

This is cheating, to put it bluntly: my utility function does not assign the same value to blinding someone and putting six billion dust specks in everyone's eye, even though six billion specks are enough to blind people if you force them into their eyes all at once.

"I'd still take the former. (10**(10**100))/(3^^^3) is still so close to zero that there's no way I can tell the difference without getting a larger universe for storing my memory first."

The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.

Comment author: Pete_Carlton 30 October 2007 08:28:00PM 3 points [-]

My algorithm goes like this:
there are two variables, X and Y.
Adding a single additional dust speck to a person's eye over their entire lifetime increases X by 1 for every person this happens to.
A person being tortured for a few minutes increases Y by 1.

I would object to most situations where Y is greater than 1. But I have no preferences at all with regard to X.

See? Dust specks and torture are not the same. I do not lump them together as "disutility". To do so seems to me a preposterous oversimplification. In any case, it has to be argued that they are the same. If you assume they're the same, then you're just assuming the torture answer when you state the question - it's not a problem of ethical philosophy but a problem of addition.
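
To make the two-variable bookkeeping concrete, here is a minimal Python sketch of the algorithm as described (the function and variable names are mine; the point is only that X never enters the decision):

    # X counts dust-speck events; Y counts torture events.
    # The decision rule only ever looks at Y.
    def objectionable(x_speck_events, y_torture_events):
        # Dust specks accumulate in X, but X carries no moral weight here.
        # Situations where Y exceeds 1 are objectionable, no matter how large X gets.
        return y_torture_events > 1

    print(objectionable(x_speck_events=10**100, y_torture_events=0))  # False
    print(objectionable(x_speck_events=0,       y_torture_events=2))  # True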

Comment author: Mike7 30 October 2007 08:38:00PM 1 point [-]

I am not convinced that this question can be converted into a personal choice where you face the decision of whether to take the speck or a 1/3^^^3 chance of being tortured. I would avoid the speck and take my chances with torture, and I think that is indeed an obvious choice.

I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.

The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.

Comment author: Recovering_irrationalist 30 October 2007 09:32:00PM 0 points [-]

1/3^^^3 chance of being tortured... If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day, I would always choose the speck, because I would never want to endure the inevitable 50 years of torture.

That wouldn't make it inevitable. You could get away with it, but then you could get multiple tortures. Rolling 6 dice often won't get exactly one "1".
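
A quick check of the dice analogy, assuming fair dice: the chance of exactly one "1" in six rolls is only about 40%, so "exactly one" is actually the less likely outcome.

    from fractions import Fraction

    # P(exactly one '1' in 6 fair dice) = C(6,1) * (1/6) * (5/6)^5
    p_exactly_one = 6 * Fraction(1, 6) * Fraction(5, 6)**5
    print(p_exactly_one, float(p_exactly_one))   # 3125/7776, about 0.402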

Comment author: mr._M. 30 October 2007 09:37:00PM 0 points [-]

Answer depends on the person's POV on consciousness.

Comment author: Sebastian_Hagen2 30 October 2007 09:37:00PM 0 points [-]

Tom McCabe wrote:
The probability is effectively much greater than that, because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)). abs(U(torture)) would also get reduced, but by a much smaller factor, because the number is much smaller to begin with.

Is there something wrong with viewing this from the perspective of the affected individuals (unique or not)? For any individual instance of a person, the probability of directly experiencing the torture is (10**(10**100))/(3^^^3), regardless of how many identical copies of this person exist.


Mike wrote:
I think a more apposite application of that translation might be:
If I knew I was going to live for 3^^^3+50*365 days, and I was faced with that choice every day ...

I'm wondering how you would phrase the daily choice in this case, to get the properties you want. Perhaps like this:
1.) Add a period of (50*365)/3^^^3 days to the time period you will be tortured at the end of your life.
2.) Get a speck.

This isn't quite the same as the original question, as it gives choices between the two extremes. And in practice, this could get rather annoying, as just having to answer the question would be similarly bad to getting a speck. Leaving that aside, however, I'd still take the (ridiculously short) torture every day.

The difference is that framing the question as a one-off individual choice obscures the fact that in the example proffered, the torture is a certainty.
I don't think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.

Comment author: Recovering_irrationalist 30 October 2007 09:57:00PM 0 points [-]

because of complexity compression. If you have 3^^^^3 people with dust specks, almost all of them will be identical copies of each other, greatly reducing abs(U(specks)).

If so, I want my anti-wish back. Evil Genie never said anything about compression. No wonder he has so many people to dust. I'm complaining to GOD Over Djinn.

If they're not compressed, surely a copy will still experience qualia? Does it matter that it's identical to another? If the sum experience of many copies is weighted as if there was just one, then I'm officially converting from infinite set agnostic to infinite set atheist.

Comment author: Eliezer_Yudkowsky 30 October 2007 10:11:00PM 0 points [-]

Bayesianism, Infinite Decisions, and Binding replies to Vann McGee's "An airtight dutch book", defending the permissibility of an unbounded utility function.

An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases.

If you allow a discontinuity where the utility of the infinite case is not the same as the limit of the utilities of the finite cases, then you have to allow a corresponding discontinuity in planning where the rational infinite plan is not the limit of the rational finite plans.

Comment author: douglas 30 October 2007 10:21:00PM 0 points [-]

It is clearly not so easy to have a non-subjective determination of utility.
After some thought I pick the torture. That is because the concept of 3^^^3 people means that no evolution will occur while that many people live. The one advantage of death is that it allows for evolution. It seems likely that we will have evolved into much more interesting life forms long before 3^^^3 of us have passed.
What's the utility of that?

Comment author: Mike7 30 October 2007 10:56:00PM 0 points [-]

Recovering Irrationalist:
True: my expected value would be 50 years of torture, but I don't think that changes my argument much.

Sebastian:
I'm not sure I understand what you're trying to say. (50*365)/3^^^3 (which is basically the same thing as 1/3^^^3) days of torture wouldn't be anything at all, because it wouldn't be noticeable. I don't think you can divide time to that extent from the point of view of human consciousness.

I don't think the math in my personal utility-estimation algorithm works out significantly differently depending on which of the cases is chosen.
To the extent that you think that and it is reasonable, I suppose that would undermine my argument that the personal choice framework is the wrong way of looking at the question. I would choose the speck every day, and it seems like a clear choice to me, but perhaps that just reflects that I have the bias this thought experiment was meant to bring out.

Comment author: Eliezer_Yudkowsky 30 October 2007 11:28:00PM 6 points [-]

I'll go ahead and reveal my answer now: Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity.

Some comments:

While some people tried to appeal to non-linear aggregation, you would have to appeal to a non-linear aggregation which was non-linear enough to reduce 3^^^3 to a small constant. In other words it has to be effectively flat. And I doubt they would have said anything different if I'd said 3^^^^3.

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person. The only person who could be "acclimating" to 3^^^3 is you, a bystander who is insensitive to the inconceivably vast scope.

Scope insensitivity - extremely sublinear aggregation by individuals considering bad events happening to many people - can lead to mass defection in a multiplayer prisoner's dilemma even by altruists who would normally cooperate. Suppose I can go skydiving today but this causes the world to get warmer by 0.000001 degree Celsius. This poses very little annoyance to any individual, and my utility function aggregates sublinearly over individuals, so I conclude that it's best to go skydiving. Then a billion people go skydiving and we all catch on fire. Which exact person in the chain should first refuse?
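
The aggregate arithmetic behind that example, in a quick sketch using the same figures:

    warming_per_jump = 1e-6      # degrees Celsius per skydive (figure from above)
    jumps = 1_000_000_000        # a billion skydivers
    print(warming_per_jump * jumps)   # 1000.0 degrees -- everyone catches fire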

I may be influenced by having previously dealt with existential risks and people's tendency to ignore them.

Comment author: Kaj_Sotala 30 October 2007 11:45:00PM 3 points [-]

If anything is aggregating nonlinearly it should be the 50 years of torture, to which one person has the opportunity to acclimate; there is no individual acclimatization to the dust specks because each dust speck occurs to a different person

I find this reasoning problematic, because in the dust-speck case there is effectively nothing to acclimate to... the amount of inconvenience to the individual will always be smaller in the speck scenario (excluding secondary effects, such as the individual being distracted and ending up in a car crash, of course).

Which exact person in the chain should first refuse?

Now, this is considerably better reasoning - however, there was no clue that this would be a decision selected over and over by countless people. Had it been worded "you among many have to make the following choice...", I could agree with you. But the current wording implied that it was a once-a-universe sort of choice.

Comment author: RobinHanson 30 October 2007 11:52:00PM 1 point [-]

Well, as long as we've gone to all the trouble to collect 85 comments on this topic, this seems like a great chance for a disagreement case study. It would be interesting to collect stats on who takes what side, and to relate that to their various kinds of relevant expertise. For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I'm open to new data on the balance of opinion and the balance of relevant expertise.

Comment author: Constant2 31 October 2007 12:07:00AM 3 points [-]

The diagnosis of scope insensitivity presupposes that people are trying to perform a utilitarian calculation and failing. But there is an ordinary sense in which a sufficiently small harm is no wrong. A harm must reach a certain threshold before the victim is willing to bear the cost of seeking redress. Harms that fall below the threshold are shrugged off. And an unenforced law is no law. This holds even as the victims multiply. A class action lawsuit is possible, summing the minuscule harms, but our moral intuitions are probably not based on those.

Comment author: Eliezer_Yudkowsky 31 October 2007 12:07:00AM 0 points [-]

Now, this is considerably better reasoning - however, there was no clue to this being a decision that would be selected over and over by countless of people. Had it been worded "you among many have to make the following choice...", I could agree with you. But the current wording implied that it was once-a-universe sort of choice.

The choice doesn't have to be repeated to present you with the dilemma. Since all elements of the problem are finite - not countless, finite - if you refuse all actions in the chain, you should also refuse the start of the chain even when no future repetitions are presented as options. This kind of reasoning doesn't work for infinite cases, but it works for finite ones.

One potential counter to the "global heating" example is that at some point, people begin to die who would not otherwise have done so, and that should be the point of refusal. But for the case of dust specks - and we can imagine getting more than one dust speck in your eye per day - it doesn't seem like there should be any sharp borderline.

We face the real-world analogue of this problem every day, when we decide whether to tax everyone in the First World one penny in order to save one starving African child by mounting a large military rescue operation that swoops in, takes the one child, and leaves.

There is no "special penny" where this logic goes from good to bad. It's wrong when repeated because it's also wrong in the individual case. You just have to come to terms with scope sensitivity.

Comment author: Eliezer_Yudkowsky 31 October 2007 12:17:00AM 0 points [-]

Actually, that was a poor example because taxing one penny has side effects. I would rather save one life and have everyone in the world poked with a stick, with no other side effects, because I put a substantial probability on lifespans being longer than many might anticipate. So even repeating this six billion times to save everyone's life, at the price of 120 years of being repeatedly poked with a stick, would still be a good bargain.

Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action should be a good individual action. Talking about the repeated case changes your intuitions and gets around your scope insensitivity, it doesn't change the normative shape of the problem (IMHO).

Comment author: Paul_Gowder 31 October 2007 12:34:00AM 14 points [-]

Robin: dare I suggest that one area of relevant expertise is normative philosophy for-@#%(*^*^$-sake?!

It's just painful -- really, really, painful -- to see dozens of comments filled with blinkered nonsense like "the contradiction between intuition and philosophical conclusion" when the alleged "philosophical conclusion" hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts! My model for the torture case is swiftly becoming fifty years of reading the comments to this post.

The "obviousness" of the dust mote answer to people like Robin, Eliezer, and many commenters depends on the following three claims:

a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

b) all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension)

c) it is a moral fact that we ought to select the world with more pleasure and less pain.

But each of those three claims is hotly, hotly contested. And almost nobody who has ever thought about the questions seriously believes all three. I expect there are a few (has anyone posed the three beliefs in that form to Peter Singer?), but, man, if you're a Bayesian and you update your beliefs about those three claims based on the general opinions of people with expertise in the relevant area, well, you ain't accepting all three. No way, no how.

Comment author: Constant2 31 October 2007 12:57:00AM 0 points [-]

dozens of comments filled with blinkered nonsense like "the contradiction between intuition and philosophical conclusion" when the alleged "philosophical conclusion" hinges on some ridiculous simplistic Benthamite utilitarianism that nobody outside of certain economics departments and insular technocratic computer-geek blog communities actually accepts!

You've quoted one of the few comments which your criticism does not apply to. I carry no water for utilitarian philosophy and was here highlighting its failure to capture moral intuition.

Comment author: Nick_Tarleton 31 October 2007 12:58:00AM 0 points [-]

all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension)

Is a consistent and complete preference ordering without this property possible?

Comment author: Tom_McCabe 31 October 2007 01:07:00AM 1 point [-]

"An option that dominates in finite cases will always provably be part of the maximal option in finite problems; but in infinite problems, where there is no maximal option, the dominance of the option for the infinite case does not follow from its dominance in all finite cases."

From Peter's proof, it seems like you should be able to prove that an arbitrarily large (but finite) utility function will be dominated by events with arbitrarily large (but finite) improbabilities.

"Robin Hanson was correct, I do think that TORTURE is the obvious option, and I think the main instinct behind SPECKS is scope insensitivity."

And so we come to the billion-dollar question: Will scope insensitivity of this type be eliminated under CEV? So far as I can tell, a utility function is arbitrary; there is no truth which destroys it, and so the FAI will be unable to change around our renormalized utility functions by correcting for factual inaccuracy.

"Which exact person in the chain should first refuse?"

The point at which the negative utility of people catching on fire exceeds the positive utility of skydiving. If the temperature is 20 C, nobody will notice an increase of 0.00000001 C. If the temperature is 70 C, the aggregate negative utility could start to outweigh the positive utility. This is not a new idea; see http://en.wikipedia.org/wiki/Tragedy_of_the_commons.

"We face the real-world analogue of this problem every day, when we decide whether to tax everyone in the First World one penny in order to save one starving African child by mounting a large military rescue operation that swoops in, takes the one child, and leaves."

According to http://www.wider.unu.edu/research/2006-2007/2006-2007-1/wider-wdhw-launch-5-12-2006/wider-wdhw-press-release-5-12-2006.pdf, 10% of the world's adults, around 400 million people, own 85% of the world's wealth. Taxing them each one penny would give a total of $4 million, more than enough to mount this kind of a rescue operation. While incredibly wasteful, this would actually be *preferable* to some of the stuff we spend our money on; my local school district just voted to spend $9 million (current US dollars) to build a swimming pool. I don't even want to know how much we spend on $200 pants; probably more than $9 million in my town alone.

Comment author: Laura 31 October 2007 01:13:00AM 3 points [-]

Eliezer: "It's wrong when repeated because it's also wrong in the individual case. You just have to come to terms with scope sensitivity."

But determining whether or not a decision is right or wrong in the individual case requires that you be able to place a value on each outcome. We determine this value in part by using our knowledge of how frequently the outcomes occur and how much time/effort/money it takes to prevent or assuage them. Thus knowing the frequency with which we can expect an event to occur is integral to assigning it a value in the first place. The reason it would be wrong in the individual case to tax everyone in the first world a penny to save one African child is that there are so many starving children that doing the same for each one would become very expensive. It would not be obvious, however, if there were only one child in the world that needed rescuing. The value of a life would increase because we could afford for it to if people didn't die so frequently.

People in a village might be willing to help pay the costs when someone's house burns down. If 20 houses in the village burned down, the people might still contribute, but it is unlikely they will contribute 20 times as much. If house-burning became a rampant problem, people might stop contributing entirely, because it would seem futile for them to do so. Is this necessarily scope insensitivity? Or is it reasonable to determine values based on frequencies we can realistically expect?

Comment author: Kaj_Sotala 31 October 2007 01:14:00AM 1 point [-]

Where there are no special inflection points, a bad repeated action should be a bad individual action, a good repeated action should be a good individual action. Talking about the repeated case changes your intuitions and gets around your scope insensitivity, it doesn't change the normative shape of the problem (IMHO).

Hmm, I see your point. I can't help feeling that there are cases where repetition does matter, though. For instance, assuming for a moment that radical life-extension and the Singularity and all that won't happen, and assuming that we consider humanity's continued existence to be a valuable thing - how about the choice of having/not having children? Not having children causes a very small harm to everybody else in the same generation (they'll have fewer people supporting them when old). Doesn't your reasoning imply that every couple should be forced into having children, even if they weren't of the type who'd want that (the "torture" option), to avoid causing a small harm to all the others? This even though society could continue to function without major trouble even if a fraction of the population did choose to remain childfree, as long as sufficiently many others had enough children?

Comment author: Paul_Gowder 31 October 2007 01:24:00AM 3 points [-]

Constant, my reference to your quote wasn't aimed at you or your opinions, but rather at the sort of view which declares that the silly calculation is some kind of accepted or coherent moral theory. Sorry if it came off the other way.

Nick, good question. Who says that we have consistent and complete preference orderings? Certainly we don't have them across people (consider social choice theory). Even to say that we have them within individual people is contestable. There's a really interesting literature in philosophy, for example, on the incommensurability of goods. (The best introduction of which I'm aware consists in the essays in Ruth Chang, ed. 1997. _Incommensurability, Incomparability, and Practical Reason_ Cambridge: Harvard University Press.)

That being said, it might be possible to have complete and consistent preference orderings with qualitative differences between kinds of pain, such that any amount of torture is worse than any amount of dust-speck-in-eye. And there are even utilitarian theories that incorporate that sort of difference. (See chapter 2 of John Stuart Mill's _Utilitarianism_, where he argues that intellectual pleasures are qualitatively superior to more base kinds. Many indeed interpret that chapter to suggest that any amount of an intellectual pleasure outweighs any amount of drinking, sex, chocolate, etc.) Which just goes to show that even utilitarians might not find the torture choice "obvious," if they deny b) like Mill.

Comment author: Recovering_irrationalist 31 October 2007 01:56:00AM -1 points [-]

Who says that we have consistent and complete preference orderings?

Who says you need them? The question wasn't to quantify an exact balance. You just need to be sure enough to make the decision that one side outweighs the other for the numbers involved.

By my values, all else equal, for all x between 1 millisecond and fifty years, 10^1000 people being tortured for time x is worse than one person being tortured for time x*2. Would you disagree?

So, 10^1000 people tortured for (fifty years)/2 is worse than one person tortured for fifty years.
Then, 10^2000 people tortured for (fifty years)/4 is worse than one person tortured for fifty years.

You see where I'm going with this. Do something similar with the dust specks, and unless I prefer countless people getting countless years of intense dust harassment to one person getting a millisecond of pain, I vote torture.
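
A rough count of how many halving steps the chain needs, assuming a 50-year starting duration and a 1-millisecond endpoint (both figures as above): only a few dozen, so the total population multiplier is "merely" a power of 10^1000 - still unimaginably smaller than 3^^^3.

    import math

    fifty_years_ms = 50 * 365.25 * 24 * 3600 * 1000   # ~1.58e12 milliseconds
    steps = math.ceil(math.log2(fifty_years_ms))      # halvings from 50 years to <= 1 ms
    print(steps)                                      # 41

    # Each halving multiplies the affected population by 10^1000, so the chain
    # only ever needs about 10^(1000*41) = 10^41000 people -- a rounding error
    # next to 3^^^3.
    print(1000 * steps)                               # exponent of 10 needed: 41000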

I recognize this is my opinion, and relies on your c) ("it is a moral fact that we ought to select the world with more pleasure and less pain") not being hopelessly outweighed by some other criterion. I think this is definitely a worthwhile thing to debate and that your input would be extremely valuable.

Comment author: mitchell_porter2 31 October 2007 02:07:00AM 2 points [-]

Since Robin is interested in data... I chose SPECKS, and was shocked by the people who chose TORTURE on grounds of aggregated utility. I had not considered the possibility that a speck in the eye might cause a car crash (etc) for some of those 3^^^3 people, and it is the only reason I see for revising my original choice. I have no accredited expertise in anything relevant, but I know what decision theory is.

I see a widespread assumption that everything has a finite utility, and so no matter how much worse X is than Y, there must be a situation in which it is better to have one person experiencing X, rather than a large number of people experiencing Y. And it looks to me as if this assumption derives from nothing more than a particular formalism. In fact, it is extremely easy to have a utility function in which X unconditionally trumps Y, while still being quantitatively commensurable with some other option X'. You could do it with delta functions, for example. You would use ordinary scalars to represent the least important things to have preferences about, scalar multiples of a delta function to represent the utilities of things which are unconditionally more important than those, scalar multiples of a delta function squared to represent things that are even more important, and so on.
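
One way to make the tiers concrete without literal delta functions (this is my own restatement, equivalent in spirit) is a lexicographic ordering: represent each outcome's disutility as a (pain, inconvenience) pair and compare left to right, so any nonzero amount of pain outweighs any finite amount of inconvenience.

    # Disutility as a (pain, inconvenience) pair, compared lexicographically.
    # Any nonzero pain dominates any finite amount of inconvenience.
    def disutility(pain, inconvenience):
        return (pain, inconvenience)

    torture = disutility(pain=50.0, inconvenience=0)        # 50 years of torture
    specks  = disutility(pain=0.0,  inconvenience=10**100)  # a vast number of specks

    # Python compares tuples lexicographically, so this picks the torture world
    # as the worse one, no matter how large the inconvenience count gets.
    worse = max(torture, specks)
    print(worse == torture)   # True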

The qualitative distinction I would appeal to here could be dubbed pain versus inconvenience. A speck of dust in your eye is not pain. Torture, especially fifty years of it, is.

Comment author: Zubon 31 October 2007 02:13:00AM 7 points [-]

Eliezer, a problem seems to be that the speck does not serve the function you want it to in this example, at least not for all readers. In this case, many people see a special penny because there is some threshold value below which the least bad bad thing is not really bad. The speck is intended to be an example of the least bad bad thing, but we give it a badness rating of one minus .9-repeating.

(This seems to happen to a lot of arguments. "Take x, which is y." Well, no, x is not quite y, so the argument breaks down and the discussion follows some tangent. The Distributed Republic had a good post on this, but I cannot find it.)

We have a special penny because there is some amount of eye dust that becomes noticeable and could genuinely qualify as the least bad bad thing. If everyone on Earth gets this decision at once, and everyone suddenly gets >6,000,000,000 specks, that might be enough to crush all our skulls (how much does a speck weigh?). Somewhere between that and "one speck, one blink, ever" is a special penny.

If we can just stipulate "the smallest unit of suffering (or negative qualia, or your favorite term)," then we can move on to the more interesting parts of the discussion.

I also see a qualitative difference if there can be secondary effects or summation causes secondary effects. As noted above, if 3^^^3/10^20 people die due to freakishly unlikely accidents caused by blinking, the choice becomes trivial. Similarly, +0.000001C sums somewhat differently than specks. 1 speck/day/person for 3^^^3 days is still not an existential risk; 3^^^3 specks at once will kill everyone.

(I still say Kyle wins.)

Comment author: Pete_Carlton 31 October 2007 02:28:00AM 3 points [-]

Okay, here's the data: I choose SPECKS, and here is my background and reasons.

I am a cell biologist. That is perhaps not relevant.

My reasoning is that I do not think that there is much meaning in adding up individual instances of dust specks. Those of you who choose TORTURE seem to think that there is a net disutility that you obtain by multiplying epsilon by 3^^^3. This is obviously greater than the disutility of torturing one person.
I reject the premise that there is a meaningful sense in which these dust specks can "add up".

You can think in terms of biological inputs - simplifying, you can imagine a system with two registers. A dust speck in the eye raises register A by epsilon. Register A also resets to zero if a minute goes by without any dust specks. Torture immediately sets register B to 10. I am morally obliged to intervene if register B ever goes above 1. In this scheme register A is a morally irrelevant register. It trades in different units than register B. No matter how many instances of A*epsilon there are, it does not warrant intervention.
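
A minimal sketch of that two-register model (the epsilon value and the class name are mine, and the one-minute reset uses an explicit clock):

    EPSILON = 1e-9   # hypothetical per-speck increment

    class MoralRegisters:
        def __init__(self):
            self.a = 0.0               # speck register: morally irrelevant here
            self.b = 0.0               # torture register
            self.last_speck_time = None

        def speck(self, t):
            # Register A resets if a minute passes with no specks.
            if self.last_speck_time is not None and t - self.last_speck_time > 60:
                self.a = 0.0
            self.a += EPSILON
            self.last_speck_time = t

        def torture(self):
            self.b = 10.0

        def must_intervene(self):
            # The obligation triggers on register B alone, never on A.
            return self.b > 1.0

    m = MoralRegisters()
    for t in range(10**6):       # a million specks, one per second
        m.speck(t)
    print(m.must_intervene())    # False: A never warrants intervention
    m.torture()
    print(m.must_intervene())    # True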

You are making a huge, unargued assumption if you treat both torture and dust-specks in equivalent terms of "disutility". I accept your question and argue for "SPECKS" by rejecting your premise of like units (which does make the question trivial). But I sympathize with people who reject your question outright.

Comment author: Eliezer_Yudkowsky 31 October 2007 02:34:00AM 3 points [-]

Mitchell, I acknowledge the defensibility of the position that there are tiers of incommensurable utilities. But to me it seems that the dust speck is a very, very small amount of badness, yet badness nonetheless. And that by the time it's multiplied to ~3^^^3 lifetimes of blinking, the badness should become incomprehensibly huge just like 3^^^3 is an incomprehensibly huge number.

One reason I have problems with assigning a hyperreal infinitesimal badness to the speck, is that it (a) doesn't seem like a good description of psychology (b) leads to total loss of that preference in smarter minds.

(B) If the value I assign to the momentary irritation of a dust speck is less than 1/3^^^3 the value of 50 years' torture, then I will never even bother to blink away the dust speck because I could spend the thought or the muscular movement on my eye on something with a better than 1/3^^^3 chance of saving someone from torture.

(A) People often also think that money, a mundane value, is incommensurate with human life, a sacred value, even though they very definitely don't attach infinitesimal value to money.

I think that what we're dealing here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility, than anyone actually possessing a consistent utility function using hyperreal infinitesimal numbers.

The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and "Can't Say No" spending.

Comment author: Tom_McCabe 31 October 2007 02:43:00AM 0 points [-]

"The notion of sacred values seems to lead to irrationality in a lot of cases, some of it gross irrationality like scope neglect over human lives and "Can't Say No" spending."

Could you post a scenario where most people would choose the option which unambiguously causes greater harm, without getting into these kinds of debates about what "harm" means? Eg., where option A ends with shooting one person, and option B ends with shooting ten people, but option B sounds better initially? We have a hard enough time getting rid of irrationality, even in cases where we know what *is* rational.

Comment author: mitchell_porter2 31 October 2007 02:43:00AM 1 point [-]

Eliezer: Why does anything have a utility at all? Let us suppose there are some things to which we attribute an intrinsic utility, negative or positive - those are our moral absolutes - and that there are others which only have a derivative utility, deriving from the intrinsic utility of some of their consequences. This is certainly one way to get incommensurables. If pain has intrinsic disutility and inconvenience does not, then no finite quantity of inconvenience can by itself trump the imperative of minimizing pain. But if the inconvenience might give rise to consequences with intrinsic disutility, that's different.

Comment author: Brandon_Reinhart 31 October 2007 02:52:00AM 0 points [-]

Dare I say that people may be overvaluing 50 years of a single human life? We know for a fact that some effect will be multiplied by 3^^^3 by our choice. We have no idea what strange and unexpected existential side effects this may have. It's worth avoiding the risk. If the question were posed with more detail, or specific limitations on the nature of the effects, we might be able to answer more confidently. But to risk not only human civilization, but ALL POSSIBLE CIVILIZATIONS, you must be DAMN SURE you are right. 3^^^3 makes even incredibly small doubts significant.

Comment author: Brandon_Reinhart 31 October 2007 02:57:00AM 1 point [-]

I wonder if my answers make me fail some kind of test of AI friendliness. What would the friendly AI do in this situation? Probably write poetry.

Comment author: Laura 31 October 2007 04:14:00AM 0 points [-]

For Robin's statistics:
Given no other data but the choice, I would have to choose torture. If we don't know anything about the consequences of the blinking or how many times the choice is being made, we can't know that we are not causing huge amounts of harm. If the question deliberately eliminated these unknowns - i.e., the badness was limited to an eyeblink that does not immediately result in some disaster for someone or blindness for another, and you really are the one and only person making the choice ever - then I'd go with the dust. But these qualifications are huge when you consider 3^^^3. How can we say the eyeblink didn't distract a surgeon and cause a slip of his knife? Given enough trials, something like that is bound to happen.

Comment author: Benquo 31 October 2007 04:22:00AM 0 points [-]

@Paul, I was trying to find a solution that didn't assume "b) all types of pleasures and pains are commensurable such that for all i, j, given a quantity of pleasure/pain experience i, you can find a quantity of pleasure/pain experience j that is equal to (or greater or less than) it. (i.e. that pleasures and pains exist on one dimension).", but rather established it for the case at hand. Unless it's specifically stated in the hypothetical that this is a true 1-shot choice (which we know it isn't in the real world, as we make analogous choices all the time), I think it's legitimate to assume the aggregate result of the test repeated by everyone. Thus, I'm not invoking utilitarian calculation, but Kantian absolutism! ;) I mean to appeal to your practical intuition by suggesting that a constant barrage of specks will create an experience of a like kind with torture.

@Robin Hanson, what little expertise I have is in the liberal arts and sciences; Euclid and Ptolemy, Aristotle and Kant, Einstein and Sophocles, etc.

Comment author: Paul_Gowder 31 October 2007 05:17:00AM 0 points [-]

Eliezer -- I think the issues we're getting into now require discussion that's too involved to handle in the comments. Thus, I've composed my own post on this question. Would you please be so kind as to approve it?

Recovering irrationalist: I think the hopefully-forthcoming-post-of-my-own will constitute one kind of answer to your comment. One other might be that one can, in fact, prefer huge dust harassment to a little torture. Yet a third might be that we can't aggregate the pain of dust harassment across people, so that there's some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it's not.

Comment author: Sebastian_Hagen2 31 October 2007 10:27:00AM 0 points [-]

For Robin's statistics:
Torture on the first problem, and torture again on the followup dilemma.

relevant expertise: I study probability theory, rationality and cognitive biases as a hobby. I don't claim any real expertise in any of these areas.

Comment author: Kaj_Sotala 31 October 2007 10:29:00AM 4 points [-]

I think one of the reasons I finally chose specks is that, unlike what was implied, the suffering does not simply "add up": 3^^^3 people getting one dust speck in their eye is most certainly not equal to one person getting 3^^^3 dust specks in his eyes. It's not "3^^^3 units of disutility, total", it's one unit of disutility per person.

That still doesn't really answer the "one person for 50 years or two people for 49 years" question, though - by my reasoning, the second option would be preferable, while obviously the first option is the preferable one. I might need to come up with a guideline stating that only experiences of suffering within a few orders of magnitude are directly comparable with each other, or some such, but it does feel like a crude hack. Ah well.

If statistics are being gathered, I'm a second year cognitive science student.

Comment author: John_Mark_Rozendaal 31 October 2007 10:41:00AM 3 points [-]

It is my impression that human beings almost universally desire something like "justice" or "fairness." If everybody had the dust speck problem, it would hardly be perceived as a problem. If one person is being tortured, both the tortured person and others perceive unfairness, and society has a problem with this.

Actually, we all DO get dust motes in our eyes from time to time, and this is not a public policy issue.
In fact relatively small numbers of people ARE being tortured today, and this is a big problem both for the victims and for people who care about justice.

Comment author: John_Mark_Rozendaal 31 October 2007 10:59:00AM -1 points [-]

Beyond the distracting arithmetic lesson, this question reeks of Christianity, positing a situation in which one person's suffering can take away the suffering of others.

Comment author: pdf23ds 31 October 2007 11:48:00AM 2 points [-]

For the moment I am disturbed by the fact that Eliezer and I seem to be in a minority here, but comforted a bit by the fact that we seem to know decision theory better than most. But I'm open to new data on the balance of opinion and the balance of relevant expertise.

It seems like selection bias might make this data much less useful. (It applied in my case, at least.) The people who chose TORTURE were likely among those with the most familiarity with Eliezer's writings, and so were able to predict that he would agree with them, and so felt less inclined to respond. Also, voicing their opinion would be publicly taking an unpopular position, which people instinctively shy away from.

Comment author: Recovering_irrationalist 31 October 2007 01:30:00PM 0 points [-]

Paul: Yet a third might be that we can't aggregate the pain of dust harassment across people, so that there's some amount of single-person dust harassment that will be worse than some amount of torture, but if we spread that out, it's not.

My induction argument covers that. As long as, all else equal, you believe:

  • A googolplex people tortured for time x is worse than one person tortured for time x+0.00001%.

  • A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect.

  • A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable.

  • If a is worse than b and b is worse than c then a is worse than c.


...you can show that, all else equal, to reduce suffering you pick TORTURE. As far as I can see, anyway; I've been wrong before. Again, I acknowledge that it depends on how much you care about reducing suffering compared to other concerns, such as an arbitrary cut-off point, an abhorrence of using maths to answer such questions, or sacred values, which certainly can have utility by keeping worse irrationalities in check.
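
To see how few googolplex factors the chain actually needs, here is a back-of-the-envelope count, with hypothetical stand-ins for "a lifetime" (80 years) and "the shortest experiencable period of torture" (0.1 seconds):

    import math

    # Rough step count for the chain sketched above: specks doubling from one
    # per lifetime up to one per second, a single bridge step, then torture
    # durations doubling from ~0.1 s up to 50 years.  Each step multiplies the
    # population by a googolplex.
    seconds_per_lifetime = 80 * 365.25 * 24 * 3600                        # ~2.5e9
    speck_doublings   = math.ceil(math.log2(seconds_per_lifetime))        # 32
    torture_doublings = math.ceil(math.log2(50 * 365.25 * 24 * 3600 / 0.1))  # 34

    total_steps = speck_doublings + 1 + torture_doublings
    print(total_steps)   # 67 googolplex factors needed -- nothing next to 3^^^3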

Comment author: Nick_Tarleton 31 October 2007 01:48:00PM 1 point [-]

A googolplex people being dust speckled every second of their life without further ill effect

I don't think this is directly comparable, because the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly - if I have to blink every second for an hour, I'll probably get extremely frustrated on top of the slight discomfort of the specks themselves. I would say that one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.

Comment author: Recovering_irrationalist 31 October 2007 02:00:00PM 0 points [-]

the disutility of additional dust specking to one person in a short period of time probably grows faster than linearly

That's why I used a googolplex people to balance the growth. All else equal, do you disagree with: "A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect" for the range concerned?

one person getting specked every second of their life is significantly worse than a couple billion people getting specked once.

I agree. I never said it wasn't.

Have to run - will elaborate later.

Comment author: Nick_Tarleton 31 October 2007 02:27:00PM 1 point [-]

All else equal, do you disagree with: "A googolplex people dust specked x times during their lifetime without further ill effect is worse than one person dust specked for x*2 times during their lifetime without further ill effect" for the range concerned?

I agree with that. My point is that agreeing that "A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable" doesn't oblige me to agree that "A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experiencable". (Unless "a further ill effect" is meant to exclude not only car accidents but superlinear personal emotional effects, but that would be stupid.)

* 1 billion seconds = 31.7 years

I think that what we're dealing here is more like the irrationality of trying to impose and rationalize comfortable moral absolutes in defiance of expected utility

Since real problems never possess the degree of certainty that this dilemma does, holding certain heuristics as absolutes may be the utility-maximizing thing to do. In a realistic version of this problem, you would have to consider the results of empowering whatever agent is doing this to torture people with supposedly good but nonverifiable results. If it's a human or group of humans, not such a good idea; if it's a Friendly AI, maybe you can trust it but can't it figure out a better way to achieve the result? (There is a Pascal's Mugging problem here.)

One more thing for TORTURErs to think about: if every one of those 3^^^3 people is willing to individually suffer a dust speck in order to prevent someone from suffering torture, is TORTURE still the right answer? I lean towards SPECK on considering this, although I'm less sure about the case of torturing 3^^^3 people for a minute each vs. 1 person for 50 years.

Comment author: Psy-Kosh 31 October 2007 05:13:00PM 0 points [-]

Just thought I'd comment that the more I think about the question, the more confusing it becomes. I'm inclined to think that if we consider the max utility state to be every person having maximal fulfilment, and a "dust speck" as the minimal amount of "unfulfilment" from the top a person can experience, then two people experiencing a single "dust speck" is not quite as bad as a single person two "dust specks" below optimal. I think the reason I'm thinking that is that the second speck takes away more proportionally than the first speck did.

Oh, one other thing. I was assuming for my replies both here and in the other thread that we're only talking about the actual "moment of suffering" caused by a dust speck event, with no potential "side effects"

If we consider that those can have consequences, I'm pretty sure that on average those would be negative/harmful, and when the law of large numbers is invoked via stupendously large numbers, well, in that case I'm going with TORTURE.

For the moment at least. :)

Comment author: Recovering_irrationalist 31 October 2007 05:58:00PM 1 point [-]

I agree with that. My point is that agreeing that "A googolplex people being dust speckled every second of their life without further ill effect is worse than one person being horribly tortured for the shortest period experiencable" doesn't oblige me to agree that "A few billion* googolplexes of people being dust specked once without further ill effect is worse than one person being horribly tortured for the shortest period experiencable".

Neither would I, you don't need to. :-)

The only reason I can pull this off is because 3^^^3 is such a ludicrous number of people, allowing me to actually divide my army by a googolplex a silly number of times. You couldn't cut the series up fine enough with a mere six billion people.

If you agree with my first two statements listed, you can use them (and your vast googolplex-cutter-proof army) to infer a series of small steps from each of Eliezer's options, meeting in the middle at my third statement in the list. You then have a series of steps where a is worse than b, b than c, c than d, all the way from SPECKS to my third statement to TORTURE.

If for some reason you object to one of the first 3 statements, my vast horde of 3^^^3 minions will just cut the series up even finer.

If that's not clear it's probably my fault - I've never had to explain anything like this before.

if every one of those 3^^^3 people is willing to individually suffer a dust speck in order to prevent someone from suffering torture, is TORTURE still the right answer?

I sure would, but I wouldn't ask 3^^^3 others to.

Comment author: jonvon 31 October 2007 06:03:00PM 0 points [-]

ok, without reading the above comments... (i did read a few of them, including robin hanson's first comment - don't know if he weighed in again).

dust specks over torture.

the apparatus of the eye handles dust specks all day long. i just blinked. it's quite possible there was a dust speck in there somewhere. i just don't see how that adds up to anything, even if a very large number is invoked. in fact with a very large number like the one described it is likely that human beings would evolve more efficient tear ducts, or faster blinking, or something like that. we would adapt and be stronger.

torturing one person for fifty years however puts a stain on the whole human race. it affects all of us, even if the torture is carried out fifty miles underground in complete secrecy.

Comment author: Paul_Gowder 31 October 2007 06:46:00PM 1 point [-]

Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I'm not sure why moral judgments have to be transitive.

Next, I'd deny the second-to-last premise (for one thing, I don't know what it means to be horribly tortured for the shortest period possible -- part of the tortureness of torture is that it lasts a while).

Comment author: Neel_Krishnaswami 31 October 2007 07:04:00PM 2 points [-]

Eliezer, both you and Robin are assuming the additivity of utility. This is not justifiable, because it is false for any computationally feasible rational agent.

If you have a bounded amount of computation to make a decision, we can see that the number of distinctions a utility function can make is in turn bounded. Concretely, if you have N bits of memory, a utility function using that much memory can distinguish at most 2^N states. Obviously, this is not compatible with additivity of disutility, because by picking enough people you can identify more distinct states than the 2^N distinctions your computational process can make.

Now, the reason for adopting additivity comes from the intuition that 1) hurting two people is at least as bad as hurting one, and 2) that people are morally equal, so that it doesn't matter which people are hurt. Note that these intuitions mathematically only require that harm should be monotone in the number of people with dust specks in their eyes. Furthermore, this requirement is compatible with the finite computation requirements -- it implies that there is a finite number of specks beyond which disutility does not increase.
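
A concrete illustration of that saturation, taking ordinary 64-bit floating point as one particular N-bit representation: a bounded monotone aggregate like n/(n+1) stops distinguishing additional specks once n exceeds what the format can resolve.

    # With 64-bit floats as the finite representation, the bounded, monotone
    # disutility n/(n+1) eventually stops responding to additional specks.
    def speck_disutility(n):
        return n / (n + 1)

    print(speck_disutility(10))                # 0.909...
    print(speck_disutility(10**6))             # 0.999999...
    print(speck_disutility(2**60) == 1.0)      # True: saturated at the bound
    print(speck_disutility(10**100) == 1.0)    # True: more specks change nothing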

If we want to generalize away from the specific number N of bits we have available, we can take an order-theoretic viewpoint, and simply require that all increasing chains of utilities have limits. (As an aside, this idea lies at the heart of the denotational semantics of programming languages.) This forms a natural restriction on the domain of utility functions, corresponding to the idea that utility functions are bounded.

Comment author: Tom_Breton 31 October 2007 08:00:00PM 4 points [-]

It's truly amazing the contortions many people have gone through rather than appear to endorse torture. I see many attempts to redefine the question, categorical answers that basically ignore the scalar, and what Eliezer called "motivated continuation".

One type of dodge in particular caught my attention. Paul Gowder phrased it most clearly, so I'll use his text for reference:

...depends on the following three claims:

a) you can unproblematically aggregate pleasure and pain across time, space, and individuality,

"Unproblematically" vastly overstates what is required here. The question doesn't require unproblematic aggregation; any slight tendency of aggregation will do just fine. We could stipulate that pain aggregates as the hundredth root of N and the question would still have the same answer. That is an insanely modest assumption, ie that it takes 2^100 people having a dust mote before we can be sure there is twice as much suffering as for one person having a dust mote.

"b" is actually inapplicable to the stated question and it's "a" again anyways - just add "type" or "mode" to the second conjunction in "a".

c) it is a moral fact that we ought to select the world with more pleasure and less pain.

I see only three possibilities for challenging this, none of which affects the question at hand.

  • Favor a desideratum that roughly aligns with "pleasure" but not quite, such as "health". Not a problem.
  • Focus on some special situation where paining others is arguably desirable, such as deterrence, "negative reinforcement", or retributive justice. ISTM that's already been idealized away in the question formulation.
  • Just don't care about others' utility, eg Rand-style selfishness.
Comment author: Recovering_irrationalist 31 October 2007 08:11:00PM 0 points [-]

Recovering irrationalist: in your induction argument, my first stab would be to deny the last premise (transitivity of moral judgments). I'm not sure why moral judgments have to be transitive.

I acknowledged it won't hold for every moral system. There are some pretty barking ones out there. I say it holds for choosing the option that creates less suffering. For finite values, transitivity should work fine.

Next, I'd deny the second-to-last premise (for one thing, I don't know what it means to be horribly tortured for the shortest period possible -- part of the tortureness of torture is that it lasts a while).

Fine, I still have plenty of googolplex-divisions left. Cut the series as fine as you like. Have billions of intervening levels of discomfort from speck->itch->ouch->"fifty years of reading the comments to this post." The point is that if you slowly morph from TORTURE to SPECKS in very small steps, every step gets worse, because the population multiplies enormously while the pain differs by an incredibly tiny amount.

Comment author: Eliezer_Yudkowsky 31 October 2007 08:24:00PM 0 points [-]

Recovering irrationalist, I hadn't thought of things in precisely that way - just "3^^4 is really damn big, never mind 3^^7625597484987" - but now that you point it out, the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.

It only requires 3^^5 = 3^(3^7625597484987) to get more googolplex factors than you can shake a stick at. But why not use a googol instead of a googolplex, so we can stick with 3^^4? If anything, the case is more persuasive with a googol because a googol is more comprehensible than a googolplex. It's all about scope neglect, remember - googolplex just fades into a featureless big number, but a googol is ten thousand trillion trillion trillion trillion trillion trillion trillion trillion.

Comment author: Neel_Krishnaswami 31 October 2007 08:41:00PM 3 points [-]

Tom, your claim is false. Consider the disutility function

D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

Now, with this function, disutility increases monotonically with the number of people with specks in their eyes, satisfying your "slight aggregation" requirement. However, it's also easy to see that going from 0 to 1 person tortured is worse than going from 0 to any number of people getting dust specks in their eyes, including 3^^^3.
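
Plugging numbers into that disutility function confirms the property (a quick check, using the formula exactly as written above):

    # D(Torture, Specks) = 10 * (Torture/(Torture+1)) + Specks/(Specks+1)
    def D(torture, specks):
        return 10 * (torture / (torture + 1)) + specks / (specks + 1)

    print(D(1, 0))                   # 5.0: one person tortured, nobody specked
    print(D(0, 10**6))               # ~0.999999: a million specks, no torture
    print(D(0, 10**300) < D(1, 0))   # True: the speck term is capped at 1, so it
                                     # never reaches the jump from zero to one torture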

The basic objection to this kind of functional form is that it's not additive. However, it's wrong to assume an additive form, because that assumption mandates unbounded utilities, which are a bad idea, because they are not computationally realistic and admit Dutch books. With bounded utility functions, you have to confront the aggregation problem head-on, and depending on how you choose to do it, you can get different answers. Decision theory does not affirmatively tell you how to judge this problem. If you think it does, then you're wrong.

Comment author: Eliezer_Yudkowsky 31 October 2007 08:48:00PM 0 points [-]

Again, not everyone agrees with the argument that unbounded utility functions give rise to Dutch books. Unbounded utilities only admit Dutch books if you allow a discontinuity between infinite rewards and the limit of increasing finite rewards, but don't allow a discontinuity between infinite planning and the limit of increasing finite plans.

Comment author: Silas 31 October 2007 08:58:00PM 1 point [-]

Oh geez. Originally I had considered this question uninteresting so I ignored it, but considering the increasing devotion to it in later posts, I guess I should give my answer.

My justification, but not my answer, depends upon how the change is made.

-If the offer is made to all of humanity before being implemented ("Do you want to be the 'lots of people get specks' race or the 'one guy gets severe torture' race?") I believe people could all agree to the specks by "buying out" whoever eventually gets the torture. For an immeasurably small amount each, less than the pain of a speck, they can together amass funds sufficient to bring the tortured individual back to his indifference curve. OTOH, the person getting the torture couldn't possibly buy out that many people. (In other words, the specks are Kaldor-Hicks efficient.)

-If the offer, at my decision, would just be thrown onto humanity without the possibility of advance negotiation, I would still take the specks, because even if only the people who feel bad for the tortured make a small contribution, it will still be comparable to what they had to offer in the above paragraph; such is the nature of large numbers of people.

I don't think this is the result of my revulsion toward the torture, although I have that. I think my decision stems from how such large (and superlinearly increasing) utility differences imply the possibility of "evening it out" through some transfer.

Comment author: Recovering_irrationalist 31 October 2007 09:09:00PM 1 point [-]

the argument by googolplex gradations seems to me like a much stronger version of the arguments I would have put forth.

You just warmed my heart for the day :-)

But why not use a googol instead of a googolplex

Shock and awe tactics. I wanted a featureless big number of featureless big numbers, to avoid wiggle-outs, and scream "your intuition ain't from these parts". In my head, FBNs always carry more weight than regular ones. Now you mention it, their gravity could get lightened by incomprehensibility, but we were already counting to 3^^^3.

Googol is better. Fewer readers will have to google it.

Comment author: Tom_Breton 31 October 2007 09:39:00PM 0 points [-]

@Neel.

Then I only need to make the condition slightly stronger: "Any slight tendency to aggregation that doesn't beg the question." Ie, that doesn't place a mathematical upper limit on disutility(Specks) that is lower than disutility(Torture=1). I trust you can see how that would be simply begging the question. Your formulation:

D(Torture, Specks) = [10 * (Torture/(Torture + 1))] + (Specks/(Specks + 1))

...doesn't meet this test.

Contrary to what you think, it doesn't require unbounded utility. Limiting the lower bound of the range to (say) 2 * disutility(torture) will suffice. The rest of your message assumes it does.

For completeness, I note that introducing numbers comparable to 3^^^3 in an attempt to undo the 3^^^3 scaling would cause a formulation to fail the "slight" condition, modest though it is.

Comment author: Jef_Allbright 31 October 2007 10:24:00PM 0 points [-]

With so many so deep in reductionist thinking, I'm compelled to stir the pot by asking how one justifies the assumption that the SPECK is a net negative at all, aggregate or not, extended consequences or not? Wouldn't such a mild irritant, over such a vast and diverse population, act as an excellent stimulus for positive adaptations (non-genetic, of course) and likely positive extended consequences?

Comment author: Eliezer_Yudkowsky 31 October 2007 10:26:00PM 3 points [-]

A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today.

Comment author: Psy-Kosh 31 October 2007 10:31:00PM 1 point [-]

Hrm... Recovering's induction argument is starting to sway me toward TORTURE.

More to the point, that and some other comments are starting to sway me away from the thought that the disutility of single dust-speck events becomes sublinear as the number of people experiencing them increases (with total population held constant).

I think if I made some errors, they were partly caused by "I really don't want to say TORTURE", and partly caused by my mistaking the exact nature of the nonlinearity. I maintain that "one person experiencing two dust specks" is not equal to, and is actually, I think, worse than, "two people each experiencing one dust speck"; but now I'm starting to suspect that two people each experiencing one dust speck is exactly twice as bad as one person experiencing one dust speck. (Assuming, as we shift the number of people experiencing DSE, that we hold the total population constant.)

Thus, I'm going to tentatively shift my answer to TORTURE.

Comment author: Jef_Allbright 31 October 2007 10:35:00PM 2 points [-]

"A brilliant idea, Jef! I volunteer you to test it out. Start blowing dust around your house today."

Although only one person, I've already begun, and have entered in my inventor's notebook some apparently novel thinking on not only dust, but mites, dog hair, smart eyedrops, and nanobot swarms!

Comment author: g 31 October 2007 10:42:00PM 0 points [-]

Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture*1) is begging the question in favour of SPECKS then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

I find it rather surprising that so many people agree that utility functions may be drastically nonlinear but are apparently *completely certain* that they know quite a bit about how they behave in cases as exotic as this one.

Comment author: Tom_Breton 01 November 2007 12:05:00AM 0 points [-]

Tom, if having an upper limit on disutility(Specks) that's lower than disutility(Torture*1) is begging the question in favour of SPECKS then why isn't *not* having such an upper limit begging the question in favour of TORTURE?

It should be obvious why. The constraint in the first one is neither argued for nor agreed on and by itself entails the conclusion being argued for. There's no such element in the second.

Comment author: g 01 November 2007 01:05:00AM 1 point [-]

I think we may be at cross purposes; my apologies if we are and it's my fault. Let me try to be clearer.

Any *particular* utility function (if it's real-valued and total) "begs the question" in the sense that it either prefers SPECKS to TORTURE, or prefers TORTURE to SPECKS, or puts them exactly equal. I don't see how this can possibly be considered a defect, but if it is one then *all* utility functions have it, not just ones that prefer SPECKS to TORTURE.

Saying "Clearly SPECKS is better than TORTURE, because here's my utility function and it says SPECKS is better" would be begging the question (absent arguments in support of that utility function). I don't see anyone doing that. Neel's saying "You can't rule out the possibility that SPECKS is better than TORTURE by saying that no real utility function prefers SPECKS, because here's one possible utility function that says SPECKS is better". So far as I can tell you're rejecting that argument on the grounds that any utility function that prefers SPECKS is ipso facto obviously unacceptable; that *is* begging the question.

Comment author: Neel_Krishnaswami 01 November 2007 01:29:00AM 5 points [-]

g: that's exactly what I'm saying. In fact, you can show something stronger than that.

Suppose that we have an agent with rational preferences, and who is minimally ethical, in the sense that they always prefer fewer people with dust specks in their eyes, and fewer people being tortured. This seems to be something everyone agrees on.

Now, because they have rational preferences, we know that a bounded utility function consistent with their preferences exists. Furthermore, the fact that they are minimally ethical implies that this function is monotone in the number of people being tortured, and monotone in the number of people with dust specks in their eyes. The combination of a bound on the utility function, plus the monotonicity of their preferences, means that the utility function has a well-defined limit as the number of people with specks in their eyes goes to infinity. However, the existence of the limit doesn't tell you what it is -- it may be any value within the bounds.

Concretely, we can supply utility functions that justify either choice, and are consistent with minimal ethics. (I'll assume the bound is the [0,1] interval.) In particular, all disutility functions of the form:

U(T, S) = A*(T/(T+1)) + B*(S/(S+1))

satisfy minimal ethics, for all positive A and B such that A plus B is less than one. Since A and B are free parameters, you can choose them to make either specks or torture preferred.

Likewise, Robin and Eliezer seem to have an implicit disutility function of the form

U_ER(T, S) = A*T + B*S

If you normalize to get [0,1] bounds, you can make something up like

U'(T, S) = (A*T + B*S)/(A*T + B*S + 1).

Now, note U' also satisfies minimal ethics; and in the limit as S goes to infinity, U' still goes to one, exceeding A/(A+1), the disutility of a single torture. So that's why they tend to have the intuition that torture is the right answer. (Incidentally, this disproves my suggestion that bounded utility functions vitiate the force of E's argument -- but the bounds proved helpful in the end by letting us use limit analysis. So my focus on this point was accidentally correct!)

Now, consider yet another disutility function,

U''(T,S) = (T + S/(S+1)) / (T + S/(S+1) + 1)

This is also minimally ethical, and doesn't have any of the free parameters that Tom didn't like. But this function also always implies a preference for any number of dust specks to even a single instance of torture.

Basically, if you think the answer is obvious, then you have to make some additional assumptions about the structure of the aggregate preference relation.
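A small sketch of the point in Python (the weights A and B, the particular functional forms, and the speck count are all illustrative choices, not anything fixed by the comment): both functions below are bounded and monotone in each argument, yet they disagree about the dilemma.

```python
from fractions import Fraction   # exact arithmetic, so huge speck counts don't round away

A, B = Fraction(1, 2), Fraction(1, 10**12)   # arbitrary positive weights

def u_additive(t, s):
    # Normalized additive form: enough specks eventually outweigh one torture.
    x = A * t + B * s
    return x / (x + 1)

def u_specks_first(t, s):
    # Parameter-free form: one torture outweighs any finite number of specks.
    x = t + Fraction(s, s + 1)
    return x / (x + 1)

torture = (1, 0)        # one person tortured, nobody specked
specks = (0, 10**30)    # nobody tortured, an enormous number of specks

print(u_additive(*torture) < u_additive(*specks))          # True: pick TORTURE
print(u_specks_first(*torture) > u_specks_first(*specks))  # True: pick SPECKS
```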

Comment author: Wei_Dai2 01 November 2007 01:38:00AM 2 points [-]

I have argued in previous comments that the utility of a person should be discounted by his or her measure, which may be based on algorithmic complexity. If this "torture vs specks" dilemma is to have the same force under this assumption, we'd have to reword it a bit:

Would you prefer that the measure of people horribly tortured for fifty years increases by x/3^^^3, or that the measure of people who get dust specks in their eyes increases by x?

I argue that no one, not even a superintelligence, can actually face such a choice. Because x is at most 1, x/3^^^3 is at most 1/3^^^3. But how can you increase the measure of something by more than 0 but no more than 1/3^^^3? You might, perhaps, generate a random number between 0 and 3^^^3 and do something only if that random number is 0. But algorithmic information theory says that for any program (even a superintelligence), there are pseudorandom sequences that it cannot distinguish from truly random sequences, and the prior probability that your random number generator is generating such a pseudorandom sequence is much higher than 1/3^^^3. Therefore the probability of that "random" number being 0 (or being any other number that you can think of) is actually much larger than 1/3^^^3.

Therefore, if someone *tells* you "measure of ... increases by x/3^^^3", in your mind you've got to be thinking "... increases by y" for some y much larger than 1/3^^^3. I think my theory explains both those who answer SPECKS and those who say no answer is possible.

Comment author: Andrew_Macdonald 01 November 2007 01:47:00AM 13 points [-]

A consistent utilitarian would choose the torture, but I don't think it's the moral choice.

Let's bring this a little closer to home. Hypothetically, let's say you get to live your life again 3^^^3 times. Would you prefer to have an additional dust speck in your eye in each of your future lives, or else be tortured for 50 years in a single one of them?

Any takers for the torture?

Comment author: Recovering_irrationalist 01 November 2007 02:01:00AM 1 point [-]

I'll take it, as long as it's no more likely to be one of the earliest lives. I don't trust any universe that can make 3^^^3 of me not to be a simulation that would get pulled early.

Hrm... Recovering's induction argument is starting to sway me toward TORTURE.

Interesting. The idea of convincing others to decide TORTURE is bothering me much more than my own decision.

I hope these ideas never get argued out of context!

Comment author: Eliezer_Yudkowsky 01 November 2007 02:01:00AM 2 points [-]

I'd take it.

Comment author: Caledonian2 01 November 2007 02:53:00AM 3 points [-]

Cooking something for two hours at 350 degrees isn't equivalent to cooking something at 700 degrees for one hour.

I'd rather accept one additional dust speck per lifetime in 3^^^3 lives than have one lifetime out of 3^^^3 lives involve fifty years of torture.

Of course, that's me saying that, with my single life. If I actually had that many lives to live, I might become so bored that I'd opt for the torture merely for a change of pace.

Comment author: Psy-Kosh 01 November 2007 03:07:00AM 0 points [-]

Recovering: *chuckles* No, I meant that thinking about that, and about the actual properties of what I'd consider a reasonable utility function, led me to reject my earlier claim of the specific nonlinearity behind my assumption that the disutility is sublinear as you increase the number of people who receive a speck; I now believe it to be linear. So huge bigbigbigbiggigantaenormous numbers of specks would, of course, eventually have to have more disutility than the torture. But since to get to that point Knuth arrow notation had to be invoked, I don't think there's any worry that I'm off to get my "rack winding certificate" :P

But yeah, out of context this debate would sound like complete nonsense... "crazy geeks find it difficult to decide between dust specks and extreme torture."

I do have to admit though, Andrew's comment about an individual living 3^^^3 times and so on has me thinking again. If "keep memories and so on of all previous lives" = yes (so it's really one really long lifespan) and "permanent physical and psychological damage post torture" = no, then I may take that. I think. Arrrgh, stop messing with my head. Actually, no, don't stop, this is fun! :)

Comment author: Mike7 01 November 2007 04:04:00AM 6 points [-]

I'd take it.
I find your choice/intuition completely baffling, and I would guess that far less than 1% of people would agree with you on this, for whatever that's worth (surely it's worth something.) I am a consequentialist and have studied consequentialist philosophy extensively (I would not call myself an expert), and you seem to be clinging to a very crude form of utilitarianism that has been abandoned by pretty much every utilitarian philosopher (not to mention those who reject utilitarianism!). In fact, your argument reads like a reductio ad absurdum of the point you are trying to make. To wit: if we think of things in equivalent, additive utility units, you get this result that torture is preferable. But that is absurd, and I think almost everyone would be able to appreciate the absurdity when faced with the 3^^^3 lives scenario. Even if you gave everyone a one week lecture on scope insensitivity.

So... I don't think I want you to be one of the people to initially program AI that might influence my life...

Comment author: michael_vassar 01 November 2007 05:11:00AM 2 points [-]

No Mike, your intuition for really large numbers is non-baffling, probably typical, but clearly wrong, as judged by another non-Utilitarian consequentialist (this item is clear even to egoists).

Personally I'd take the torture over the dust specks even if the number was just an ordinary incomprehensible number, like say the number of biological humans who could live in artificial environments that could be built in one galaxy (about 10^46, given a 100-year life span and a 300W energy budget for each of them; 300W of terminal entropy dump into a 3K background from 300K is a large budget). It's totally clear to me that a second of torture isn't a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50 year period. That leaves about a 10^10 : 1 preference for the torture.

The only consideration that dulls my certainty here is that I'm not convinced that my utility function can even encompass these sorts of ordinary incomprehensible numbers, but it seems to me that there is at least a one-in-a-billion chance that it can.
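For what it's worth, the arithmetic here checks out; a short sketch of the numbers (a restatement using the comment's own estimates as inputs):

```python
# Rough check of the numbers in the comment above (all inputs are the
# comment's own estimates, not independently verified figures).
seconds_in_50_years = 50 * 365.25 * 24 * 3600    # ~1.58e9, "about 1.5 billion"
speck_recipients = 10**46                        # humans fillable into one galaxy
specks_per_torture_second = 10**27               # "a billion billion billion", deliberately generous

torture_in_speck_units = seconds_in_50_years * specks_per_torture_second
print(speck_recipients / torture_in_speck_units)  # ~6e9, i.e. roughly the 10^10 : 1 margin claimed
```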

Comment author: Eliezer_Yudkowsky 01 November 2007 05:28:00AM 2 points [-]

So, if additive utility functions are naive, does that mean I can swap around your preferences at random like jerking around a puppet on a string, just by having a sealed box in the next galaxy over where I keep a googol individuals who are already being tortured for fifty years, or already getting dust specks in their eyes, or already being poked with a stick, etc., which your actions cannot possibly affect one way or the other?

It seems I can arbitrarily vary your "non-additive" utilities, and hence your priorities, simply by messing with the numbers of existing people having various experiences in a sealed box in a galaxy a googol light years away.

This seems remarkably reminiscent of E. T. Jaynes's experience with the "sophisticated" philosophers who sniffed that of course naive Bayesian probability theory had to be abandoned in the face of paradox #239; which paradox Jaynes would proceed to slice into confetti using "naive" Bayesian theory, but this time with rigorous math instead of the various mistakes the "sophisticated" philosophers had made.

There are reasons for preferring certain kinds of simplicity.

Comment author: Mike7 01 November 2007 07:08:00AM 5 points [-]

Michael Vassar:
Well, in the prior comment, I was coming at it as an egoist, as the example demands.
It's totally clear to me that a second of torture isn't a billion billion billion times worse than getting a dust speck in my eye, and that there are only about 1.5 billion seconds in a 50 year period. That leaves about a 10^10 : 1 preference for the torture.
I reject the notion that each (time, utility) event can be calculated in the way you suggest. Successive speck-type experiences for an individual (or 1,000 successive dust specks for 1,000,000 individuals) over the time period we are talking about would easily overtake 50 years of torture. It doesn't make sense to tally (total torture-time: 1 person for 50 years in this case) * (some quantification of the disutility of a time unit of torture) vs. (total number of specks) * (some quantification of the disutility of a single speck).
The universe is made up of distinct beings (animals included), not the sum of utilities (which is just a useful construct.)
All of this is to say:
If we are to choose for ourselves between these scenarios, I think it is incredibly bizarre to prefer 3^^^3 satisfying lives plus one indescribably horrible life over 3^^^3 lives that are each only infinitesimally worse than those satisfying lives. I think doing so ignores basic human psychology, from whence our preferences arise.

Comment author: mitchell_porter2 01 November 2007 10:31:00AM 0 points [-]

To continue this business of looking at the problem from different angles:

Another formulation, complementary to Andrew Macdonald's, would be: Should 3^^^3 people each volunteer to experience a speck in the eye, in order to save one person from fifty years of torture?

And with respect to utility functions: Another nonlinear way to aggregate individual disutilities x, y, z... is just to take the maximum, and to say that a situation is only as bad as the worst thing happening to any individual in that situation. This could be defended if one's assignment of utilities was based on intensity of experience, for example. There is no-one actually having a bad experience with 3^^^3 times the badness of a speck in the eye. As for the fact that two people suffering identically turns out to be no worse than just one - accepting a few counterintuitive conclusions is a small price to pay for simplicity, right?

Comment author: John_Mark_Rozendaal 01 November 2007 10:32:00AM 1 point [-]

I find it positively bizarre to see so much interest in the arithmetic here, as if knowing how many dust specks go into a year of torture, just as one knows that sixteen ounces go into one pint, would inform the answer.

What happens to the debate if we absolutely know the equation:

3^^^3 dust specks = 50 years of torture

or

3^^^3 dust specks = 600 years of torture

or

3^^^3 dust specks = 2 years of torture?

Comment author: John_Mark_Rozendaal 01 November 2007 11:23:00AM 1 point [-]

The nation of Nod has a population of 3^^^3. By amazing coincidence, every person in the nation of Nod has $3^^^3 in the bank. (With a money supply like that, those dollars are not worth much.) By yet another coincidence, the government needs to raise revenues of $3^^^3. (It is a very efficient government and doesn't need much money.) Should the money be raised by taking $1 from each person, or by simply taking the entire amount from one person?

Comment author: Recovering_irrationalist 01 November 2007 12:23:00PM 0 points [-]

I take $1 from each person. It's not the same dilemma.

----

Ri: The idea of convincing others to decide TORTURE is bothering me much more than my own decision.

PK: I don't think there's any worry that I'm off to get my "rack winding certificate" :P

Yes, I know. :-) I was just curious about the biases making me feel that way.

individual living 3^^^3 times...keep memories and so on of all previous lives

3^^^3 lives' worth of memories? Even at one bit per life, that makes you far from human. Besides, you're likely to get tortured in googolplexes of those lifetimes anyway.

Arrrgh, stop messing with my head. Actually, no, don't stop, this is fun! :)

OK here goes... it's this life. Tonight, you start fifty years of being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives, you (at your present age) have to sing along to one of his songs. BARNEYLOVE or SONGS?

Comment author: Sebastian_Hagen2 01 November 2007 02:18:00PM 0 points [-]

Andrew Macdonald asked:
Any takers for the torture?
Assuming the torture-life is randomly chosen from the 3^^^3 sized pool, definitely torture. If I have a strong reason to expect the torture life to be found close to the beginning of the sequence, similar considerations as for the next answer apply.

Recovering irrationalist asks:
OK here goes... it's this life. Tonight, you start fifty years being loved at by countless sadistic Barney the Dinosaurs. Or, for all 3^^^3 lives you (at your present age) have to singalong to one of his songs. BARNEYLOVE or SONGS?
The answer depends on whether I expect to make it through the 50 year ordeal without permanent psychological damage. If I know with close to certainty that I will, the answer is BARNEYLOVE. Otherwise, it's SONGS; while I might still acquire irreversible psychological damage, it would probably take much longer, giving me a chance to live relatively sane for a long time before then.

Comment author: Zubon 01 November 2007 03:25:00PM 4 points [-]

Cooking something for two hours at 350 degrees isn't equivalent to cooking something at 700 degrees for one hour.

Caledonian has made a great analogy for the point that is being made on either side. May I over-work it?

They are not equivalent, but there is some length of time at 350 degrees that will burn as badly as 700 degrees. In 3^^^3 seconds, your lasagna will be ... okay, entropy will have consumed your lasagna by then, but it turns into a cloud of smoke at some point.

Correct me if I am wrong here, but I don't think there is any length of time at 75 or 100 degrees that will burn as badly as one hour at 700 degrees. It just will not cook at all. Your food will sit there and rot, rather than burning.

There must be some minimum temperature at which various things can burn. Given enough time at that temperature, it is the equivalent of just setting it on fire. Below that temperature, it is qualitatively different. You do not get bronze no matter how long you leave copper and tin at room temperature.

(Or maybe I am wrong there. Maybe a couple of molecules will move properly at room temperature over a few centuries, so the whole mass becomes bronze in less than 3^^^3 seconds. I assume that anything physically possible will happen at some point in 3^^^3 seconds.)

Are there any SPECKS advocates who say we should pick two people tortured for 49.5 years rather than one for 50 years? If there is any degree of summation possible, 3^^^3 will get us there.

But, SPECKS can reply, there can be levels across which summation is not possible. If lasagna physically cannot burn at 75 degrees, even letting it "cook" for 33^^^^33 seconds, then it will never be as badly burned as one hour at 700 degrees.

"Did I say 75?" TORTURE replies. "I meant whatever the minimum possible is for lasagna to burn, plus 1/3^^3 degrees." SPECKS must grant victory in that case, but wins at 2/3^^3 degrees lower.

Which just returns the whole thing back to the primordial question-begging on either side, whether specks can ever sum to torture. If any number of beings needing to blink ever adds to 10 seconds of torture, TORTURE is in a very strong position, unless you are again arguing that 10 seconds of TORTURE is like 75 degrees, and there is some magic penny somewhere.

(Am I completely wrong? Aren't physics and chemistry full of magic pennies like escape velocities and temperatures needed for physical reactions?)

TORTURE must argue that yes, it is the sort of thing that adds. SPECKS must argue that it is like asking how many blades of grass you must add to get a battleship. "Mu."

Comment author: Eliezer_Yudkowsky 01 November 2007 03:37:00PM 4 points [-]

Zubon, we could formalize this with a tiered utility function (one not order-isomorphic to the reals, but containing several strata each order-isomorphic to the reals).

But then there is a magic penny, a single sharp divide where no matter how many googols of pieces you break it into, it is better to torture 3^^^3 people for 9.99 seconds than to torture one person for 10.01 seconds. There is a price for departing the simple utility function, and reasons to prefer certain kinds of simplicity. I'll admit you can't slice it down further than the essentially digital brain; at some point, neurons do or don't fire. This rules out divisions of genuine googolplexes, rather than simple billions of fine gradations. But if you admit a tiered utility function, it will sooner or later come down to one neuron firing.

And I'll bet that most Speckists disagree on which neuron firing is the magical one. So that for all their horror at us Unspeckists, they will be just as horrified at each other, when one of them claims that thirty seconds of waterboarding is better than 3^^^3 people poked with needles, and the other disagrees.
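One way to make the tiered idea concrete is a lexicographic ordering: harms sort into strata and compare tier by tier, so no quantity of lower-tier harm ever outweighs any upper-tier harm. A minimal sketch (the two-tier split and the 10-second threshold are illustrative assumptions, not anything from the post):

```python
# Sketch of a two-tier ("lexicographic") disutility. Tuples compare
# element by element, so the lower tier only matters when the upper
# tier is tied -- which is exactly the "magic penny" discontinuity.
TORTURE_THRESHOLD_SECONDS = 10.0   # the arbitrary cut-off this scheme must pick

def disutility(seconds_of_torture: float, people: int) -> tuple:
    harm = seconds_of_torture * people
    if seconds_of_torture > TORTURE_THRESHOLD_SECONDS:
        return (harm, 0.0)    # upper ("sacred") tier
    return (0.0, harm)        # lower ("mundane") tier

mild = disutility(9.99, 10**100)   # stands in for 3^^^3 people just under the threshold
severe = disutility(10.01, 1)      # one person just over it
print(severe > mild)               # True: the single sharp divide described above
```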

Comment author: Eliezer_Yudkowsky 01 November 2007 04:11:00PM 3 points [-]

...except that, if I'm right about the biases involved, the Speckists won't be horrified at each other.

If you trade off thirty seconds of waterboarding for one person against twenty seconds of waterboarding for two people, you're not visibly treading on a "sacred" value against a "mundane" value. It will rouse no moral indignation.

Indeed, if I'm right about the bias here, the Speckists will never be able to identify a discrete jump in utility across a single neuron firing, even though the transition from dust speck to torture can be broken up into a series of such jumps. There's no difference of a single neuron firing that leads to the feeling of a comparison between a sacred and an unsacred value. The feeling of sacredness, itself, is quantitative and comes upon you in gradual increments of neurons firing - even though it supposedly describes a utility cliff with a slope higher than 3^^^3.

The prohibition against torture is clearly very sacred, and a dust speck is clearly very unsacred, so there must be a cliff sharper than 3^^^3 between them. But the distinction between one dust speck and two dust specks doesn't seem to involve a comparison between a sacred and mundane value, and the distinction between 50 and 49.99 years of torture doesn't seem to involve a comparison between a sacred and a mundane value...

So we're left with cyclical preferences. The one will trade 3 people suffering 49.99 years of torture for 1 person suffering 50 years of torture; after having previously traded 9 people suffering 49.98 years of torture for 3 people suffering 49.99 years of torture; and so on back to the starting point where it's better for 3^999999999 people to feel two dust specks than for 3^1000000000 people to feel one dust speck; right after having, a moment before, traded one person suffering 50 years of torture for 3^1000000000 people feeling one dust speck.
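The structure of that cycle is easy to exhibit with made-up numbers (a sketch; the 0.1%-milder step and the factor of three are illustrative, standing in for the far finer gradations in the text):

```python
# Sketch of the gradation chain: each step makes the harm per person
# slightly milder but triples the number of people. Anyone who prefers
# the "more people, slightly milder" side of every step, yet also
# prefers SPECKS to TORTURE overall, holds cyclical preferences.
severity = 50.0   # harm per person, in "years of torture" at the top of the chain
people = 1
for _ in range(20):               # the real chain uses billions of far finer steps
    milder, more_people = severity * 0.999, people * 3
    # Under any roughly additive accounting, each step is strictly worse:
    assert milder * more_people > severity * people
    severity, people = milder, more_people

print(people, round(severity, 2))  # 3486784401 people at ~49.01 "years", and the chain continues
```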

Comment author: Evan 02 November 2007 12:02:00AM 0 points [-]

Assuming that there are 3^^^3 distinct individuals in existence, I think the answer is pretty obvious: pick the torture. However, since we cannot possibly hope to visualize so many individuals, it's a pointlessly large number. In fact, I would go so low as to say that one quadrillion human beings with dust specks in their eyes outweighs one individual's 50 years of torture. Consider: one quadrillion seconds of minute but noticeable pain versus a scant fifty years of tortured hell. One quadrillion seconds is about 31,709,792 years. Let's just go with 32 million years. Then factor in the magnitudes (torture is far worse than dust specks): 50 years versus 32 million years. Good enough odds for you?

However, that being said, the question is yet another installment of lifeboat ethics, and has little bearing on the real world. If we are ever forced to make such a decision, that's one thing, but in the meantime let's work through systemic issues that might lead to such a situation instead.

Comment author: iwdw 03 November 2007 12:19:00AM 0 points [-]

My initial reaction (before I started to think...) was to pick the dust specks, given that my biases made the suffering caused by the dust specks morally equivalent to zero, and 0^^^3 is still 0.

However, given that the problem stated an actual physical phenomenon (dust specks), and not a hypothetical minimal annoyance, then you kind of have to take the other consequences of the sudden appearance of the dust specks under consideration, don't you?

If I was omnipotent, and I could make everyone on Earth get a dust speck in their eye right now, how many car accidents would occur? Heavy machinery accidents? Workplace accidents? Even if the chance is vanishingly small -- let's say 6 accidents occur on Earth because everyone got a dust speck in their eye. That's one in a billion.

That's one accident for every 10^9 people. Now, what percentage of those are fatal? Transport Canada currently lists 23.7% of car accidents in 2003 as resulting in a fatality, which is about 1 in 4. Let's be nice, assume that everywhere else on Earth is safer, and take that down to 1 in 100 accidents being fatal.

Now, if everyone in existence gets a dust speck in their eye because of my decision, assuming the hypothetical 3^^^3 people live in something approximating the lifestyles on Earth, I've conceivably doomed 1 in 10^11 people to death.

That is, my cloud of dust specks has killed 3^^^3 / 10^11 people.
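The arithmetic being gestured at works out roughly like this (a sketch; the accident and fatality rates are the comment's assumptions, and 3^27 stands in for a population far too large to compute with):

```python
# Sketch of the expected-fatalities arithmetic above (rates are the
# comment's assumptions, not checked statistics).
accident_rate = 1 / 10**9     # one accident per billion speck recipients
fatality_rate = 1 / 100       # one accident in a hundred is fatal
population = 3**27            # placeholder; 3^^^3 itself cannot be represented

deaths_per_recipient = accident_rate * fatality_rate   # 1e-11, i.e. "1 in 10^11"
print(deaths_per_recipient)
print(population * deaths_per_recipient)               # expected deaths for this toy population
```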

Comment author: rake 06 November 2007 06:11:00AM 0 points [-]

I have a question/answer in relation to this post that seems to be off-topic for the forum. Click on my name if interested.

Comment author: Kellopyy 22 November 2007 08:53:00PM -1 points [-]

Since I would not be one of the people affected, I would not consider myself able to make that decision alone. In fact my preferences are irrelevant in that situation, even if I consider the situation to be obvious.

To have a situation with 3^^^3 people, we must have at least that many people capable of existing in some meaningful way. I assume we cannot query them about their preferences in any meaningful (omniscient) way. As I cannot choose who will be tortured or who gets dust specks, I have to make a collective decision.

I think my solution would be to take three different groups of randomly chosen people. The first group would be asked the question and given a chance to discuss and change their minds. The second group would be asked whether they would save 3^^^3 people from dust specks by accepting torture. The third group would be asked whether they would agree to be dust-specked, giving the person to be tortured a 1/3^^^3 chance of being saved.

If one of the latter tests showed a significant preference for one of the situations, I would assume that situation is for some reason more acceptable when people are given the chance to choose. If it seemed that people were either willing to change the scenario in both situations, or unwilling to change it in either, I would rely on the stated preference of the first group and go by that.

I do not think this solution is good enough.

Comment author: Chris7 30 November 2007 02:33:00PM 3 points [-]

Evolution seems to have favoured the capacity for empathy (the specks choice) over the capacity for utility calculation, even though utility calculation would have been a 'no brainer' for the brain capacity we have.
The whole concept reminds me of the Turing test. Turing, as a mathematician, just seems to have completely failed to understand that we don't assign rationality, or sentience, to another object by deduction. We do it by analogy.

Comment author: Jeffrey_Herrlich 04 February 2008 11:21:00PM 0 points [-]

I know that this is only a hypothetical example, but I must admit that I'm fairly shocked at the number of people indicating that they would select the torture option (as long as it wasn't them being tortured). We should be wary of the temptation to support something unorthodox for the effect of: "Hey, look at what a hardcore rationalist I can be." Real decisions have real effects on real people.

Comment author: g 05 February 2008 01:24:00AM 3 points [-]

And we should be wary of selecting something orthodox for fear of provoking shock and outrage. Do you have any reason to believe that the people who say they prefer TORTURE to SPECKS are motivated by the desire to prove their rationalist credentials, or that they don't appreciate that their decisions have real consequences?

Comment author: Unknown3 05 February 2008 04:24:00AM 2 points [-]

Jeffrey, on one of the other threads, I volunteered to be the one tortured to save the others from the specks.

As for "Real decisions have real effects on real people," that's absolutely correct, and that's the reason to prefer the torture. The utility function implied by preferring the specks would also prefer lowering all the speed limits in the world in order to save lives, and ultimately would ban the use of cars. It would promote raising taxes by a small amount in order to reduce the amount of violent crime (including crimes involving torture of real people), and ultimately would promote raising taxes on everyone until everyone could barely survive on what remains.

Yes, real decisions have real effects on real people. That's why it's necessary to consider the total effect, not merely the effect on each person considered as an isolated individual, as those who favor the specks are doing.

Comment author: Eliezer_Yudkowsky 05 February 2008 05:22:00AM 2 points [-]

Following your heart and not your head - refusing to multiply - has also wrought plenty of havoc on the world, historically speaking. It's a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture.

Comment author: Jeffrey_Herrlich 05 February 2008 09:41:00PM 4 points [-]

"Following your heart and not your head - refusing to multiply - has also wrought plenty of havoc on the world, historically speaking. It's a questionable assertion (to say the least) that condoning irrationality has less damaging side effects than condoning torture."

I'm not really convinced that multiplication of the dust-speck effect is relevant. Subjective experience is restricted to individuals, not collectives. To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B?

If the goal is to be a utilitarian ethicist with the well-being of humanity as your highest priority, then something *may* be wrong with your model when the vast majority of humans would choose the option that you wouldn't. (As I suspect they would.) Utility isn't all that matters to most people. Is utilitarianism the only "real" ethics?

My criticisms can sometimes come across the wrong way. (And I know that you actually *do* care about humanity, Eli.) I don't mean to judge here, just strongly disagree. Not that I retract what I wrote; I don't.

Comment author: g 05 February 2008 10:33:00PM 1 point [-]

Jeffrey wrote: To me, this specific exercise reduces to a simpler question: Would it be better (more ethical) to torture individual A for 50 years, or inflict a dust speck on individual B?

Gosh. The only justification I can see for that equivalence would be some general belief that badness is simply independent of numbers. Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?

In any case: since you now appear to be conceding that it's possible for someone to prefer TORTURE to SPECKS for reasons other than a childish desire to shock, are you retracting your original accusation and analysis of motives? ... Oh, wait, I see you've explicitly said you aren't. So, you know that one leading proponent of the TORTURE option actually *does* care about humanity; you agree (if I've understood you right) that utilitarian analysis can lead to the conclusion that TORTURE is the less-bad option; I assume you agree that reasonable people can be utilitarians; you've seen that one person explicitly said s/he'd be willing to be the one tortured; but in spite of all this, you don't retract your characterization of that view as shocking; you don't retract your implication that people who expressed a preference for TORTURE did so because they want to show how uncompromisingly rationalist they are; you don't retract your implication that those people don't appreciate that real decisions have real effects on real people. I find that ... well, "fairly shocking", actually.

(It shouldn't matter, but: I was not one of those advocating TORTURE, nor one of those opposing it. If you care, you can find my opinions above.)

Comment author: Unknown3 06 February 2008 06:21:00AM 0 points [-]

Jeffrey, do you really think serial killing is no worse than murdering a single individual, since "Subjective experience is restricted to individuals"?

In fact, if you kill someone fast enough, he may not subjectively experience it at all. In that case, is it no worse than a dust speck?

Comment author: Jeffrey_Herrlich 06 February 2008 07:01:00PM 0 points [-]

"Suppose the question were: Which is better, for one person to be tortured for 50 years or for everyone on earth to be tortured for 49 years? Would you really choose the latter? Would you not, in fact, jump at the chance to be the single person for 50 years if that were the only way to get that outcome rather than the other one?"

My criticism was for this specific initial example, which yes did seem "obvious" to me. Very few, *if any*, ethical opinions can be generalized over *any* situation and still seem reasonable. At least by my definition of "reasonable".

Notice that I didn't single anyone out as being "bad". Morality is subjective and I don't dispute that. "Every man is right by his own mind". I cautioned that we shouldn't allow a desire to stand out to factor into a decision such as this. I know well that theatrics isn't an uncommon element on mailing lists/blogs. This example shocked me because toy decisions can become real decisions. I have a hunch that I wouldn't be the only person shocked by this. If this specific example were put before all of humanity, I imagine that the people who *were not* shocked by it would be the minority. I don't think that I'm being unreasonable.

Comment author: Jeffrey_Herrlich 07 February 2008 07:01:00PM -1 points [-]

I can see myself spending too much time here, so I'm going to finish up and y'all can have the last word. I'll admit that it's possible that one or more of you actually would sacrifice yourself to save others from a dust speck. Needless to say, I think it would be a huge mistake on your part. I definitely wouldn't want you to do it on my behalf, if for nothing more than selfish reasons: I *don't* want it weighing on my conscience. Hopefully this is a moot point anyway, since it should be possible to avoid both unwanted dust specks and unwanted torture (e.g. via a Friendly AI). We should hope that torture dies away with the other tragedies of our past, and isn't perpetuated into our not-yet-tarnished future.

Comment author: Bogdan_Butnaru 22 October 2008 01:26:00PM 0 points [-]

I know you're all getting a bit bored, but I'm curious what you think about a different scenario:

What if you have to choose between (a) for the next 3^^^3 days, _you_ get one extra speck in your eye per day beyond the normal amount, and for 50 years of that period you're placed in stasis, or (b) _you_ get the normal amount of specks in your eyes, but during the next 3^^^3 days you'll pass through 50 years of atrocious torture.

Everything else is considered equal in the other cases, including the fact that (i) your total lifespan will be the same in both cases (more than 3^^^3 days), (ii) the specks are guaranteed to not cause any physical effects other than those mentioned in the original post (i.e., you're minimally annoyed and blink once more each day; there are no "tricks" about hidden consequences of specks), (iii) any other occurrence of specks in the eye (yours or others') or torture (you or others) will happen exactly the same for either choice, (iv) the 50 years of either stasis or torture would happen at the same points and (v) after the end of the 3^^^3 days the state of the world is exactly the same except for you (e.g., the genie doesn't come back with something tricky).

Also assume that for the 3^^^3 days you are human-shaped and human-minded, except for the change that your memory (and ability to use it) is stretched to work over that duration as a typical human's does during a typical life.

Does your answer change if either:
A) it's guaranteed that _everything_ else is perfectly equal (e.g., the two possible cases will magically be forbidden to interfere with any of your decisions during the 3^^^3 days, but afterwards you'll remember them; in the case of torture, any remaining trauma will remain until healed "physically". More succinctly, there are _no_ side effects during the 3^^^3 days, and none other than the "normal" ones afterwards).
B) the 50 years of torture happen at the start, end, or distributed throughout the period.
C) we replace the life period with either (i) your entire lifespan or (ii) infinity, and/or the period of torture with (i) any constant length larger than one year or (ii) any constant fraction of the lifespan discussed.
D) you are magically justified to put absolute certain trust in the offer (i.e., you're sure the genie isn't tricking you).
E) replace "speck in the eye" by "one hair on your body grows by half the normal amount" for each day.

Of course, you don't have to address every variation mentioned, just those that you think relevant.

Comment author: Bogdan_Butnaru 22 October 2008 01:40:00PM 0 points [-]

OK, I see I got a bit long-winded. The interesting part of my question is whether you'd make the same decision if it's about you instead of others. The answer is obvious, of course ;-)

The other details/versions I mentioned are only intended to explore the "contour of the value space"* of the other posters. (*: I'm sure Eliezer has a term for this, but I forget it.)

Comment author: Benja_Fallenstein 22 October 2008 03:43:00PM 1 point [-]

Bogdan's presented almost exactly the argument that I too came up with while reading this thread. I would choose the specks in that argument and also in the original scenario (as long as I am not committing to the same choice being repeated an arbitrary number of times, and I am not causing more people to crash their cars than I cause not to crash their cars; the latter seems like an unlikely assumption, but thought experiments are allowed to make unlikely assumptions, and I'm interested in the moral question posed when we accept the assumption). Based on the comments above, I expect that Eliezer is perfectly consistent and would choose torture, though (as in the scenario with 3^^^3 repeated lives).

Eliezer and Marcello do seem to be correct in that, in order to be consistent, I would have to choose a cut-off point such that n dust specks in 3^^^3 eyes would be less bad than one torture, but n+1 dust specks would be worse. I agree that it seems counterintuitive that adding just one speck could make the situation "infinitely" worse, especially since the speckists won't be able to agree exactly where the cut-off point is.

But it's only the infinity that's unique to speckism. Suppose that you had to choose between inflicting one minute of torture on one person, or putting n dust specks into that person's eye over the next fifty years. If you're a consistent expected utility altruist, there must be some n such that you would choose n specks, but not n+1 specks. What makes the n+1st speck different? Nothing, it just happens to be the cut-off point you must choose if you don't want to choose 10^57 specks over torture, nor torture over zero specks. If you make ten altruists consider the question independently, will they arrive at exactly the same value of n? Prolly not.

The above argument does not destroy my faith in decision theory, so it doesn't destroy my provisional acceptance of speckism, either.
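The cut-off point described here is easy to exhibit under a toy additive model (a sketch; the per-speck and per-minute disutilities are made-up numbers):

```python
from fractions import Fraction   # exact arithmetic, so the cut-off is unambiguous

speck = Fraction(3, 10**8)       # made-up disutility of one speck
torture_minute = Fraction(1)     # made-up disutility of one minute of torture

# The largest n for which n specks are still no worse than the torture:
n = torture_minute // speck
assert n * speck <= torture_minute
assert (n + 1) * speck > torture_minute
print(n, n + 1)   # 33333333 vs 33333334: nothing distinguishes the (n+1)th speck
                  # except where the made-up numbers happen to fall
```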

Comment author: retired_urologist 22 October 2008 03:52:00PM 0 points [-]

I came across this post only today, because of the current comment in the "recent comments" column. Clearly, it was an exercise that drew an unusual amount of response. It further reinforces my impression of much of the OB blog, posted in August, and denied by email.

Comment author: Tim7 20 February 2009 10:58:00PM 0 points [-]

I think you should ask everyone until you have at least 3^^^3 people whether they would consent to having a dust speck fly into their eye to save someone from torture. When you have enough people just put dust specks into their eyes and save the others.

Comment author: homunq 21 February 2009 01:08:00AM 1 point [-]

The question is, of course, silly. It is perfectly rational to decline to answer. I choose to try to answer.

It is also perfectly rational to say "it depends". If you really think "a dust speck in 3^^^3 eyes" gives a uniquely defined probability distribution over different subsets of the possibilityverse, you are being ridiculous. But let's pretend it did - let's pretend we had 3^^^^3 parallel Eliezers, standing on flat golden surfaces in 1G and one atmosphere, for just long enough to ask each other enough questions to define the problem properly. (I'm sorry, Eliezer, if by stating that possibility, I've increased the "true"ness of that part of the probabilityverse by ((3^^^3+1)/3^^^3) :) ).

You can also say "I've thought about it, but I don't trust my thought processes". That is not my position.

My position is that this question does not, in fact, have an answer. I think that that fact is very important.

It's not that the numbers are meaningless. 3^^^3 is a very exact number, and you can prove any number of things about it. A different question using ridiculous numbers - say, would you rather torture 4^^^4 people for 5 minutes or 3^^^3 of them for 50 years - has a single correct answer which is very clear (of course, the 3^^^3 ones; 4^^^4 >>> (3^^^3)^2). (Unless there were very bizarre extra conditions on the problem.)

It's just that there is no universal moral utility function which inputs a probability distribution over a finite subset of the possibilityverse and outputs a number. It's more like relativistic causality - substitute "better" for "after". A is after B and B is a spacelike distance from C, but C can also be spacelike from A. The dust specks and the torture are incomparable, a spacelike distance.

I think that, philosophically, that makes a big difference. If you philosophically can't always go around morally comparing near-infinite sets, then it's silly to try to approximate how you would behave if you could. Which means you consider the moral value of the consequences which you could possibly anticipate. So yeah, if you are working on AI, you are morally obligated to think about FAI, because that's intentional action, and you would have to be a monster to say you didn't care. But you don't get to use FAI and the singularity to trump the here-and-now, because in many ways they're just not comparable.

Which means, to me, for instance, that people can understand the singularity idea and believe it has a non-0 probability, and have abilities or resources that would be meaningful to the FAI effort, and still morally choose to simply live as "good people" in a more traditional sense (have a good life in which they make the people with whom they interact overall happier). It's not just a lack of ability to trace the consequences; it's also the possibility that the consequences of this or that outcome will be literally incomparable by any finite halting algorithm, whereas even our desperately-limited brains have decent approximations of algorithms for morally comparing the effect of, say, posting on OB versus washing the dishes.

Going to wash the dishes now.

Comment author: homunq 21 February 2009 01:37:00AM 0 points [-]

Tim: You're right - if you are a reasonably attractive and charismatic person. Otherwise, the question (from both sides) is worse than the dust speck.

(Asking people also puts you in the picture. You must like to spend eternity asking people a silly question, and learning all possible linguistic vocalizations in order to do so. There are many fewer vocalizations than possible languages, and many fewer possible human languages than 3^^^3. You will be spending more time going from one person of the SAME language to another, at 1 femtosecond per journey, than you would spend learning all possible human languages. That would be true even if the people were fully shuffled by language - just 1 femtosecond each for all the times when coincidence gives you two of the same language in a row. 3^^^3 is that big.)

Comment author: HughRistik 11 September 2009 12:36:05AM *  1 point [-]

Torture is not the obvious answer, because torture-based suffering and dust-speck-based suffering are not scalar quantities with the same units.

To be able to make a comparison between two quantities, the units must be the same. That's why we can say that 3 people suffering torture for 49.99 years is worse than 1 person suffering torture for 50 years. Intensity * Duration * Number of People gives us units of PainIntensity-Person-Years, or something like that.

Yet torture-based suffering and dust-speck-based suffering are not measured in the same units. Consequently, we cannot solve this question as a simple math problem. For example, the correct units of torture-based suffering might involve Sanity-Destroying-Pain. There is no reason to believe that we can quantitatively compare Easily-Recoverable-Pain to Sanity-Destroying-Pain; at least, the comparison is not just a math problem.

To be able to do the math, we would have to convert both types of suffering to the same units of disutility. Some folks here seem to think that no matter what the conversion functions are, 3^^^3 is just so big that the converted disutility of 3^^^3 dust specks is greater than the converted disutility of 50 years of torture for one person. But determination of the correct disutility conversion functions is itself a philosophical problem that cannot be waved away, and it's impossible to evaluate that claim until those conversion functions have at least been hinted at.

One way to get different types of suffering to have the same units would be to represent them as vectors, and find a way to get the magnitude of those vectors.

The torture position seems to do the math by using pain intensity as a scalar. Yet there is no reason to believe that suffering is a scalar quantity, or that the disutility accorded to suffering is a scalar quantity. Even pain intensity is a case where "quantity has a quality all of its own": as you increase it, the suffering goes through qualitative changes. For example, if just a 10% increase in pain duration/intensity causes Post-Traumatic Stress Disorder, that pain is more than 10% worse, because it has become a qualitatively different type of suffering. The units change.

Suffering may well be better represented as a vector. Other dimensions in the vector might include variables such as chance of Post-Traumatic Stress Disorder (0 in the case of dust specks which are uncomfortable but not traumatic, and approaching 100% in the case of torture), non-recovery chance (0% in the case of dust specks, approaching 100% in the case of torture), recovery time (<1 second in the case of dust specks, approaching infinity in the case of 50 years of torture), insanity, human rights violation, career-destruction, mental-health destruction, life destruction...

Choosing pain intensity alone over the other variables relevant to suffering is begging the question. We could cherry-pick another dimension out of the vector to get a different result, such as life destruction. LifeDestructionChance(50YearsOfTorture) could be greater than LifeDestructionChance(DustSpeck) * 3^^^3 (I might be committing scope insensitivity saying this, but the point is that the answer isn't self-evident). Of course, life destruction isn't the only variable relevant to the calculation of suffering, but neither is pain intensity.

Now, if there is a way to take the magnitude of a suffering vector (another philosophical problem), it's not at all self-evident that Magnitude( SpeckVector ) * 3^^^3 > Magnitude( 50YearsOfTortureVector), because the SpeckVector has virtually all its dimensions approaching 0 while the TortureVector has many dimensions approaching infinity or their max value (which I think reflects why people think torture is so bad). That would depend on what the dimensions of those vectors are and how the magnitude function works.
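A minimal sketch of the vector idea (the field names, both magnitude functions, and every number below are illustrative assumptions, chosen only to show that the verdict depends on the magnitude function used):

```python
# Sketch: suffering as a vector of qualitatively different dimensions.
# Which option looks worse depends entirely on how you take the "magnitude".
from dataclasses import dataclass

@dataclass
class Suffering:
    pain_intensity: float     # arbitrary units
    ptsd_chance: float        # 0..1
    recovery_years: float
    life_destruction: float   # 0..1

speck = Suffering(pain_intensity=1e-6, ptsd_chance=0.0,
                  recovery_years=0.0, life_destruction=0.0)
torture = Suffering(pain_intensity=1e3, ptsd_chance=0.99,
                    recovery_years=50.0, life_destruction=0.95)

N = 3**27   # placeholder for the vast number of speck recipients

def intensity_only(s: Suffering, people: int) -> float:
    return s.pain_intensity * people    # the scalar, additive view

def worst_outcome_only(s: Suffering, people: int) -> float:
    return s.life_destruction           # cherry-picks a different dimension

print(intensity_only(speck, N) > intensity_only(torture, 1))          # True: SPECKS reads as worse
print(worst_outcome_only(speck, N) > worst_outcome_only(torture, 1))  # False: TORTURE reads as worse
```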

Comment author: Cyan 11 September 2009 01:54:59AM *  5 points [-]

But determination of the correct disutility conversion functions is itself a philosophical problem that cannot be waved away, and it's impossible to evaluate that claim until those conversion functions have at least been hinted at.

You seem to have gotten hung up on 3^^^3, which is really just a placeholder for "some finite number so large it boggles the mind". If you accept that all types of pain can be measured on a common disutility scale, then all you need is a non-zero conversion factor, and the repugnant conclusion follows (for some mind-bogglingly large number of specks). I think that if a line of argument that rescues your rebuttal exists, it involves lexicographic preferences.

Comment author: Bugle 12 September 2009 01:05:05PM -1 points [-]

There is a false choice being offered, because every person in every lifetime is going to experience getting something in their eye. I get a bug flying into my eye on a regular basis whenever I go running (3 of them the last time!), and it'll probably have happened thousands of times to me by the end of my life. It's pretty much a certainty of human experience (although I suppose it's statistically possible for some people to go through life without ever getting anything in their eyes).

Is the choice being offered to make all of humanity's eyes, for all eternity, immune to small inconveniences such as bugs, dust or eyelashes? Otherwise we really aren't being offered anything at all.

Comment author: Bugle 12 September 2009 06:00:53PM 0 points [-]

Although if we factor in consequences, say, being distracted by a dust speck in the eye while driving or doing some other critical activity, then statistically those trillions of dust specks have the potential to cause untold amounts of damage and suffering.

Comment author: Nubulous 12 September 2009 10:45:16PM 3 points [-]

Doesn't "harm", to a consequentialist, consist of every circumstance in which things could be better, but aren't ? If a speck in the eye counts, then why not, for example, being insufficiently entertained ?

If you accept consequentialism, isn't it morally right to torture someone to death so long as enough people find it funny ?

Comment author: Alicorn 12 September 2009 11:02:38PM 4 points [-]

I'm picking on this comment because it prompted this thought, but really, this is a pervasive problem: consequentialism is a gigantic family of theories, not just one. They are all still wrong, but for any single counterexample, such as "it's okay to torture people if lots of people would be thereby amused", there is generally at least one theory or subfamily of theories that has that counterexample covered.

Comment author: PowerSet 13 September 2009 07:56:44AM *  1 point [-]

Isn't it paradoxical to argue against consequentialism based on its consequences?

The reason you can't torture people is that those members of your population who aren't as dumb as bricks will realize that the same could happen to them. Such anxiety among the more intelligent members of your society should outweigh the fun experienced by the more easily amused.

Comment author: Alicorn 13 September 2009 12:51:54PM 4 points [-]

I typically argue against consequentialism based on appeals to intuition and its implications, which are only "consequences" in the sense used by consequentialism if you do some fancy equivocating.

The reason you can't torture people is that those members of your population who aren't as dumb as bricks will realize that the same could happen to them. Such anxiety among the more intelligent members of your society should outweigh the fun experienced by the more easily amused.

Pfft. It is trivially easy to come up with thought experiments where this isn't the case. You can increase the ratio of bricks-to-brights until doing the arithmetic leads to the result that you should go ahead and torture folks. You can choose folks to torture on the basis of well-publicized, uncommon criteria, so that the vast majority of people rightly expect it won't happen to them or anyone they care about. You can outright lie to the population, and say that the people you torture are all volunteers (possibly even masochists who are secretly enjoying themselves) contributing to the entertainment of society for altruistic reasons. Heck, after you've tortured them for a while, you can probably get them to deliver speeches about how thrilled they are to be making this sacrifice for the common morale, on the promise that you'll kill them quicker if they make it convincing.

All that having been said, there are consequentialist theories that do not oblige or permit the torture of some people to amuse the others. Among them are things like side-constraints rights-based consequentialisms, certain judicious applications of deferred-hedon/dolor consequentialisms, and negative utilitarianism (depending on how the entertainment of the larger population cashes out in the math).

Comment author: R_Nebblesworth 13 September 2009 05:34:52AM 0 points [-]

It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating - assuming facts not in evidence.

We don't know anything about (1) the side-effects of picking SPECKS (such as car crashes); and definitely don't know that (2) the torture victim can "acclimate". (2) in particular seems like cheating in a big way - especially given the statement "without hope or rest".

There's nothing rational about posing a hypothetical and then adding in additional facts in your answer. However, that's a great way to avoid the question presented.

Comment author: R_Nebblesworth 14 September 2009 12:14:54AM 0 points [-]

I've received minus 2 points (that's bad I guess?) with no replies, which is very illuminating... I suppose I'm just repeating the above points on lexicographic preferences.

Any answer to the question involves making value choices about the relative harms associated with torture and specks, I can't see how there's an "obvious" answer at all, unless one is arrogant enough to assume their value choices are universal and beyond challenge.

Unless you add facts and assumptions not stated, the question compares torture x 50 years to 1 dust speck in an infinite number of people's eyes, one time. Am I missing something? Because it seems it can't be answered without reference to value choices - which to anyone who doesn't share those values will naturally appear irrational.

Comment author: CarlShulman 14 September 2009 12:39:00AM 7 points [-]

"I've received minus 2 points (that's bad I guess?) with no replies, which is very illuminating... "

I think this is mainly because your comment seemed uninformed by the relevant background but was presented with a condescending and negative tone. Comments with both these characteristics tend to get downvoted, but if you cut back on one or the other you should get better responses.

"It seems that many, including Yudkowsky, answer this question by making the most basic mistake, i.e. by cheating - assuming facts not in evidence."

http://lesswrong.com/lw/2k/the_least_convenient_possible_world/

"Any answer to the question involves making value choices"

Yes it does.

"compares torture x 50 years to 1 dust speck in an infinite number people's eyes"

3^^^3 is a (very large) finite number.

"It can't be answered without reference to value choices - which to anyone who doesn't share those values will naturally appear irrational."

Moral anti-realists don't have to view differences in values as reflecting irrationality.

Comment author: R_Nebblesworth 14 September 2009 12:46:28AM 1 point [-]

Fair enough, apologies for the tone.

But if answering the question involves making arbitrary value choices I don't understand how there can possibly be an obvious answer.

Comment author: CarlShulman 14 September 2009 12:52:57AM 2 points [-]

There isn't for agents in general, but most humans will in fact trade off probabilities of big bads (death, torture, etc) against minor harms, and so preferring SPECKS indicates a seeming incoherency of values.

Comment author: R_Nebblesworth 14 September 2009 12:57:31AM 2 points [-]

Thanks for the patient explanation.

Comment author: thomblake 14 September 2009 02:49:10PM 0 points [-]

Comments with both these characteristics tend to get downvoted, but if you cut back on one or the other you should get better responses.

I'd just like to note that comments informed by the relevant background but condescending and negative are often voted down as well. Though Annoyance seems to have relatively high karma anyway.

Comment author: CarlShulman 14 September 2009 02:52:56PM *  0 points [-]

I considered that, which is why I said that the responses would be "better."

Comment author: bogus 14 September 2009 02:53:26PM 0 points [-]

I'd just like to note that comments informed by the relevant background but condescending and negative are often voted down as well.

I agree. See DS3618 for a crystal-clear example.

Comment author: thomblake 14 September 2009 03:10:13PM 1 point [-]

I strongly doubt that person counts as "informed by the relevant background".

Comment author: CarlShulman 14 September 2009 04:39:49PM *  1 point [-]

I don't think that case is crystal-clear, could you explain this a bit more?

Looking at DS3618's comments, he (I estimate gender based on writing style and the demographics of this forum and of the CMU PhD program he claims to have entered) had some good (although obvious) points regarding peer-review and Flare. Those comments were upvoted.

The comments that were downvoted seem to have been very negative and low in informed content.

He claimed that calling intelligent design creationism "creationism" was "wrong" because ID is logically separable from young earth creationism and incorporates the idea of 'irreducible complexity.' However, arguments from design, including forms of 'irreducible complexity' argument, have been creationist standbys for centuries. Rudely chewing someone out for not defining creationism in a particular narrow fashion, the fashion advanced by the Discovery Institute as part of an organized campaign to evade court rulings, does deserve downvoting. Suggesting that the Discovery Institute, including Behe, isn't a Christian front group is also pretty indefensible given the public info on it (e.g. the "wedge strategy" and numerous similar statements by DI members to Christian audiences that they are a two-faced organization).

This comment implicitly demanded that no one note limitations of the brain without first building AGI, and was lacking in content.

DS3618 also claims to have a stratospheric IQ, but makes numerous spelling and grammatical errors. Perhaps he is not a native English speaker, but this does shift probability mass to the hypothesis that he is a troll or sock puppet.

He says that he entered the CMU PhD program without a bachelor's degree based on industry experience. This is possible, as CMU's PhD program has no formal admissions requirements according to its documentation. However, given base rates, and the context of the claim, it is suspiciously convenient and shifts further probability mass towards the troll hypothesis. I suppose one could go through the CMU Computer Science PhD student directory to find someone without a B.S. and with his stated work background to confirm his identity (only reporting whether there is such a person, not making the anonymous DS3618's identity public without his consent).

Comment author: shibl 16 October 2009 06:42:59AM 2 points [-]

The obvious answer is that torture is preferable.

If you yourself had to pick between a 1/3^^^3 chance of 50 years of torture and the dust speck, you would pick the chance of torture.

We actually make this kind of trade every day: we eat foods that could poison us rather than go hungry, we cross the road rather than stay at home, etc.

Imagine there is a safety improvement to your car that would cost 0.0001 cents but would save you from an event that happens once in 1000 universe lifetimes. Would you pay for it?
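
A rough expected-disutility sketch of that car example, with every number below an illustrative assumption (including the universe-lifetime figure):

    # Expected harm avoided by the safety improvement vs. its tiny cost.
    universe_lifetime_years = 1e14                   # assumed order of magnitude
    p_event = 1 / (1000 * universe_lifetime_years)   # chance per year of the event
    harm_of_event = 1e9                              # disutility units, hypothetical
    cost_of_fix = 1e-6                               # disutility of paying 0.0001 cents

    expected_harm_avoided = p_event * harm_of_event  # 1e-8
    print(expected_harm_avoided < cost_of_fix)       # True: most of us wouldn't pay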

Comment author: thomblake 16 October 2009 12:45:12PM 7 points [-]

I don't think it's very controversial that TORTURE is the right choice if you're maximizing overall net utility (or in your example, maximizing expected utility). But some of us would still choose SPECKS.

Comment author: ABranco 25 October 2009 03:28:53AM *  11 points [-]

Very-Related Question: Typical homeopathic dilutions are 10^(-60). On average, this would require giving two billion doses per second to six billion people for 4 billion years to deliver a single molecule of the original material to any patient.

Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?

If anyone convinces me that the answer is yes, I might agree to be a Torturer. Otherwise, I assume that the negligibility of the speck, plus people's resilience, would mean no lasting effects. Disutility would vanish in milliseconds. If they wouldn't even notice or remember the specks after a while, it would equate to zero disutility.

It's not that I can't do the maths. It's that the evil of the speck seems too diluted to do harm.

Just like homeopathy is too diluted to do good.
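
For what it's worth, the dilution figure quoted above can be sanity-checked with a back-of-the-envelope calculation; the one-mole-per-dose assumption below is mine, not part of the original claim:

    # Expected molecules of original substance delivered at a 10^-60 dilution.
    AVOGADRO = 6.022e23                      # molecules per mole
    molecules_per_dose = AVOGADRO * 1.0      # assume roughly one mole of material per dose
    expected_per_dose = molecules_per_dose * 1e-60

    doses = 2e9 * 6e9 * 4e9 * 3.156e7        # doses/s * people * years * seconds/year
    print(doses * expected_per_dose)         # ~0.9 -- about one molecule in total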

Comment author: RobinZ 25 October 2009 03:39:34AM 2 points [-]

Could one argue that if we administer a homeopathic pill of vitamin C in the above dilution to every living person for the next 3^^^3 generations, the impact would be a humongous amount of flu-elimination?

Easily. 3^^^3 = 3^^7625597484987, an exponential tower of 3s that is 7,625,597,484,987 layers tall, and it is so much larger than 10^60 that it is almost certain that many people will receive significant doses of vitamin C. Heck, even 3^^4 = 3^7625597484987, a number with roughly 3.6 trillion digits, already dwarfs 10^60. If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance (which I believe you imply for the purposes of the question), then a tremendous number of people will be protected from the flu -- dividing 3^^^3 by 10^60 barely dents it.
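
A quick way to see the size gap without holding the numbers in memory is to work with base-10 logarithms (a minimal sketch):

    import math

    log10_3 = math.log10(3)
    t3 = 3 ** 3 ** 3            # 3^^3 = 3^27 = 7,625,597,484,987
    log10_t4 = t3 * log10_3     # log10 of 3^^4: about 3.6e12, i.e. ~3.6 trillion digits
    print(log10_t4 > 60)        # True: 3^^4 already dwarfs 10^60, let alone 3^^^3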

Comment author: ABranco 25 October 2009 04:21:58AM 1 point [-]

almost certain that many people will receive significant doses of vitamin C

Not what I said.

Each person will receive vitamin C diluted in the ratio of 10^(-60) (see reference here). The amount is the same for everyone, constant. Strictly one dose per person (as it was one speck per person).

But the number of persons is the number of all people alive over the next 3^^^3 generations.

If there is any causal relationship at all between receiving a dose of vitamin C and flu resistance

...which wouldn't mean the relationship is linear at all. Above a certain dose it can be lethal; below, it can have no effect.


Does it sound reasonable that if you eat one nanogram of bread during severe starvation, it would postpone your death by precisely zero seconds?
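
One way to formalize that intuition is a disutility function with a noticeability threshold, below which a harm contributes exactly zero; the threshold and magnitudes below are entirely hypothetical:

    THRESHOLD = 0.01

    def disutility(harm):
        # Harms too small to notice or remember count for nothing.
        return harm if harm >= THRESHOLD else 0.0

    speck, torture = 0.0001, 1e9              # hypothetical magnitudes
    print(disutility(speck) * (3 ** 3 ** 3))  # 0.0 -- no number of specks adds up
    print(disutility(torture))                # 1000000000.0 -- the torture counts in full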