Today's post, Torture vs. Dust Specks was originally published on 30 October 2007. A summary (taken from the LW wiki):

 

If you had to choose between torturing one person horribly for 50 years, or putting a single dust speck into the eyes of 3^^^3 people, what would you do?


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Motivated Stopping and Motivated Continuation, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

85 comments
[-]Jack150

I still have trouble seeing where people are coming from on this. My moral judgment software does not accept 3^^^3 dust specks as an input. And I don't have instructions to deal with such cases by assigning a dust speck a value of -1 util and torture a very low value that is still greater than -3^^^3 utils. I recognize my brain is just not equipped to deal with such numbers, and I am comfortable adjusting my empirical beliefs involving incomprehensibly large numbers in order to compensate for bias. But I am not comfortable adjusting my moral judgments in this way -- because while I have a model of an ideally rational agent, I do not have a model of an ideally moral agent, and I am deeply skeptical that one exists. In other words, I recognize my 'utility function' is buggy, but my 'utility function' says I should keep the bugs, since otherwise I might no longer act in the buggy way that constitutes ethical behavior.

The claim that the answer is "obvious" is troubling.

-6[anonymous]

Here's a good way of looking at the problem.

Presumably, there's going to be some variation with how the people are feeling. Given 3^^^3 people, this will mean that I can pretty much find someone under any given amount of pleasure/pain.

Suppose I find someone, Bob, with the same baseline happiness as the girl we're proposing to torture, Alice. I put a speck of dust in his eye. I then find someone with a nigh-infinitesimally worse baseline, Charlie, and do it again. I keep this up until I get to a guy, Zack, who, after the dust speck goes in his eye, is at the same happiness level that Alice would be at if she were tortured.

To put numbers on this:

Alice and Bob have a base pain of 0, Charlie has 1, Dianne has 2, ... Zack has 999,999,999,999. I then add one unit of pain to each person. Now Alice has 0, Bob has 1, Charlie has 2, ... Yaana has 999,999,999,999, Zack has 1,000,000,000,000. I could instead torture one person. Alice has 1,000,000,000,000, Bob has 0, Charlie has 1, ... Zack has 999,999,999,999. In other words, Bob has 0, Charlie has 1, Dianne has 2, ... Zack has 999,999,999,999, Alice has 1,000,000,000,000.

It's the same numbers both ways -- just different people. The only way you could decide which is better is if you care more or less than average about Alice.

Of course, this is just using 1,000,000,000,000 of 3^^^3 people. Add in another trillion, and now it's like torturing two people. Add in another trillion, and it's worse still. You get the idea.
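A toy sketch of the rearrangement DanielLC describes. The names, the integer pain scale, and N = 10 (standing in for a trillion) are illustrative assumptions, not part of the original comment:

```python
# Toy version of the rearrangement argument: N = 10 stands in for the
# 1,000,000,000,000 used above; the structure is identical.
N = 10

alice_baseline = 0
bystanders = list(range(N))        # Bob = 0, Charlie = 1, ..., "Zack" = N - 1

# Option 1: every bystander gets a dust speck (+1 pain); Alice is untouched.
specks = sorted([alice_baseline] + [p + 1 for p in bystanders])

# Option 2: Alice is tortured (pain N); the bystanders are untouched.
torture = sorted([alice_baseline + N] + bystanders)

# Both outcomes contain exactly the same pain levels, just attached to
# different people.
assert specks == torture == list(range(N + 1))
```

The only asymmetry left is which person ends up at the top of the scale, which is the point being made.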

3[anonymous]
... If Yudkowsky had set up his thought experiment in this way, I would agree with him. But I don't believe there's any reason to expect a distribution of pain of the kind you describe - or in any case, it seems like Yudkowsky's point should generalise, and I'm not sure that it does. If all 3^^^3 + 1 people are at pain level 0, and I have the choice of bringing them all up to pain level 1 or leaving 3^^^3 of them at pain level 0 and bringing one of them up to pain level 1,000,000,000,000, I would choose the former. I may have increased the number of pain units in existence, but my value computation doesn't work by adding up "pain units".

I'm almost entirely unconcerned about 3^^^3 people experiencing pain level 1; they haven't reached my threshold for caring about the pain they are experiencing. On the other hand, the individual being tortured is way above this threshold, and so I do care about him. I don't know where the threshold(s) are, but I'm sure that if my brain was examined closely there would be some arbitrary points at which it decides that someone else's pain level has become intolerable. Since these jumps are arbitrary, this would seem to break the idea that "pain units" are additive.
0Multipartite
Is the distribution necessary (other than as a thought experiment)? Simplifying to a 0->3 case: if changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person's 0->3 -- for the reason that, for an even distribution, the 1s and 2s would stay the same in number, the 3s would increase, and the 0s would decrease -- then for what hypothetical distribution would it be even worse, and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is it worse if there are only 2s who all become 3s? Is a dust speck classed as worse if you do it to someone being tortured than to someone in a normal life, or vice versa, or is it just as bad no matter what the distribution -- in which case the distribution is unimportant? ...then again, if one weighs matters solely on the magnitude of individual change, then that greater difference can appear and disappear like a mirage as one shifts back and forth between considering those involved collectively or reductionistically... hrm.

Intuitively speaking, it seems inconsistent to state that 4A, 4B and 4C are acceptable, but A+B+C is not acceptable (where A is N people 0->1, B is N people 1->2, C is N people 2->3). ...the aim of the even-distribution example is perhaps to show that by the magnitude-difference measurement the outcome can be worse, then break it down to show that for uneven cases too the suffering inflicted is equivalent, and so for consistency one must continue to view it as worse...

(Again, this time shifting to a 0-1-2 case: why would it be {unacceptable for N people to be 1->2 if and only if N people were also 0->1, but not unacceptable for N people to be 1->2 if 2N more people were 1->2} /and also/ {unacceptable for N people to be 0->1 if and only if N people were also 1->2, but not unacceptable for N people to be 0->1 if 2N more people were 0->1}?)

The arbitrary points concept, rather than a smooth gradie
0DanielLC
Why would they all be at pain level zero? I'd expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there'd be no shortage of people with a pain level of 1,000,000,000,000. The same goes for any reasonable distribution.

If you play around with my paradox a bit more, you can work out that if you have 1,000,000,000,000 people at pain level n, and one person at pain level zero, there must be some n between 0 and 999,999,999,999 such that it's at least as bad to torture the one person as to give the rest dust specks. Where would that cutoff in marginal disutility be? If you have 1,000,000,000,000 people at pain 999,999,999,999, and one at pain 0, would you rather torture the one, or give the 1,000,000,000,000 dust specks?

I would expect a cutoff like this to be an approximation. You'd actually think that the marginal disutility of pain starts out at zero, and steadily increases until it approaches one. If this were true, one dust speck would bring the pain to 1, which would make the marginal disutility slightly above zero, so the speck would have some tiny amount of badness. If you multiply that by 3^^^3, it becomes unimaginable.
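A sketch of that "saturating marginal disutility" picture. The curve and its steepness parameter are invented, and 3^^^3 is replaced by a laughably small stand-in because it cannot be represented:

```python
# Invented marginal-disutility curve: the p-th unit of pain contributes a
# marginal disutility that rises from ~0 toward 1 as p grows.
from fractions import Fraction

def marginal_disutility(p, k=Fraction(1, 10**6)):
    # k is an arbitrary steepness parameter chosen for illustration.
    return (p * k) / (1 + p * k)

# Badness of one dust speck to someone at pain 0: tiny but nonzero.
speck_badness = marginal_disutility(1)            # = 1 / 1,000,001

# Badness of the torture: the sum of marginal disutilities over 10^12 pain
# units, which is at most 10^12 (each unit contributes less than 1).
torture_badness_bound = 10**12

# Even a stand-in population vastly smaller than 3^^^3 already swamps that bound.
population_stand_in = 10**30
print(speck_badness * population_stand_in > torture_badness_bound)   # True
```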
1[anonymous]
It's a thought experiment. The whole scenario is utterly far-fetched, so there's no use in arguing that this or that detail of the thought experiment is what we should "expect" to find. As such, I choose the version of the thought experiment that best teases out the dilemma that Yudkowsky is trying to explore, which concerns the question of whether we should consider pain to be denominated all in the same units - i.e. 3^^^3 x miniscule pain > 1 x torture - in our moral calculations. EDIT: in response to the rest of your comment, see my reply to "Unnamed".
0MinibearRex
To get Eliezer's point, make the world more inconvenient: 3^^^3 people, all with pain tolerances equivalent to yours, getting dust specks in their eyes, versus one person tortured for 50 years.
0Xece
I believe the problem with this is that you have given actual values (pain units) and equated the two levels of "torture" outlined in the original thought experiment - specifically, equating one trillion humans with a dust speck in the eye and Alice being tortured.
1DanielLC
So, what's the problem? Is a dust speck incomparable to torture? A dust speck is comparable to something slightly worse than a dust speck, which is comparable to something slightly worse than that, etc. At some point, you'll compare dust specks to torture. You may not live long enough to follow that out explicitly, just like you could never start with one grain of sand and keep adding them one at a time to get a beach, but the comparison still exists.
2Shmi
No comparison exists if, as I mentioned in my other post, the fleeting discomfort is lost in the noise of other minor nuisances and has no lasting effect. One blink, and the whole thing is forgotten forever, quickly replaced by an itch in your bum, flickering fluorescent light overhead, your roommate coughing loudly, or an annoying comment on LW.
3DanielLC
One speck of sand will be lost on a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole. You notice it while it's happening. You forget about it eventually, but even if you were tortured for 3^^^3 years before finally dying, you'd forget it all the moment you die.
3Shmi
I consider it a faulty analogy. Here is one I like better: if the said speck of dust disintegrates into nothing after an instant, there is no bigger beach and no black hole.
6ArisKatsaris
If you consider the disutility of the dust speck zero, because the brief annoyance will be forgotten, then can the disutility of the torture also be made into zero, if we merely add the stipulation that the tortured person will then have the memory of this torture completely erased and the state of their mind reverted to what it had been before the torture?
1Shmi
This is an interesting question, but it seems to be in a different realm. For example, it could be reformulated as follows: is this 50-year torture option that bad if it is parceled into 1 second chunks and any memory of each one is erased immediately, and it has no lasting side effects. For the purpose of this discussion, I assume that it is 50 dismal years with all the memories associated and accumulated all the way through and thereafter. In that sense it is qualitatively in a different category than a dust speck. This might not be yours (or EY's) interpretation.
2see
6 × 10^30 kilograms of sand on one beach on one inhabited planet will collapse it into a black hole, and that is a far, far smaller amount of mass than 3^^^3 molecules of silicon dioxide. But adding one molecule of silicon dioxide to each of 3^^^3 beaches on inhabited planets throughout as many universes as necessary seems to cause far less disutility than adding 6 × 10^30 kilograms of sand to one beach on one inhabited planet.

Is the problem that we're unable to do math? You can't possibly say one molecule of silicon dioxide is incomparable to 6 × 10^30 kilograms of sand, can you? They're indisputably the same substance, after all; 6 × 10^55 molecules of SiO2 is 6 × 10^30 kilograms of sand. Even if you make the disutility nonlinear, you have to do something really, really extreme to overcome 3^^^3... and if you do that, why, let's substitute in 3^^^^3 or 3^^^^^3 instead.

Is the problem that we are failing to evaluate what happens if everybody else makes the same decision? If 6 × 10^55 people were given the decision and they all chose the molecule, 3^^^3 inhabited planets are converted into black holes, while if they all made the other choice only 6 × 10^55 planets would be. So when faced with an option that seems to cause no disutility, must we annihilate seven billion people, because if enough other people made our decision it would be far worse than if we and all of them made the other choice?
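For what it's worth, the molecule-to-mass conversion in that comparison checks out, using standard constants that are not given in the thread:

```python
# Rough check of "6 x 10^55 molecules of SiO2 is 6 x 10^30 kilograms of sand".
AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_SIO2 = 0.060      # kg per mole (roughly 60 g/mol)

molecules = 6e55
mass_kg = molecules / AVOGADRO * MOLAR_MASS_SIO2
print(f"{mass_kg:.1e} kg")   # ~6.0e+30 kg
```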
0DanielLC
My point wasn't so much that it will cause a black hole as that a tiny amount of disutility times 3^^^3 is going to be unimaginably horrible, regardless of how small the disutility is. That's not the problem at all. Thinking about that is a good sanity check: if it's good to make that decision once, it's better to make it 10^30 times. However, it's only a sanity check. Everybody isn't going to make the same decision as you, so there's no reason to assume they will.
0[anonymous]
Analogy does not fit. Dust specks have an approximately known small negative utility. The benefit or detriment of adding sand to the beaches is not specified one way or the other. If it was specified then I'd be able to tell you whether it sounds better or worse than destroying a planet.
-2Incorrect
The original thought experiment is used to provide a pure example of quantifying and comparing arbitrary levels of suffering, as a test of whether we support that type of utilitarian consequentialism. By comparing torture to torture, you change the scenario to test a slightly weaker version of the original type of utilitarian consequentialism: one in which you still quantify and compare arbitrary changes in suffering against arbitrary absolute levels of suffering, but the two absolute levels are no longer allowed to be arbitrary with respect to each other. If anyone could rewrite this comment to be comprehensible I would appreciate it.

Another way to reach the conclusion that dust specks are worse is by transitivity. Consider something that is slightly worse than getting a dust speck in your eye. For instance, maybe hearing the annoying sound of static on television is just a bit worse, as long as it's relatively brief and low volume. Now,

1a. Which is worse: everyone on Earth gets a dust speck in their eye, or one person hears a second of the annoying sound of static on a television with the volume set at a fairly low level? [Presumably you think that the dust specks are worse.]
1b. Whic... (read more)
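A toy rendering of the kind of chain that 1a and 1b begin to build. The step count, the 1000x population trade-off, and the severity scale are invented here; only the structure of the transitivity argument is being illustrated:

```python
# Each accepted judgment: a huge number of people suffering a given harm is
# worse than 1/1000 as many people suffering a harm one notch more severe.
def worse_than(a, b):
    (n_a, s_a), (n_b, s_b) = a, b      # (number of people, severity level)
    return n_a >= 1000 * n_b and s_b <= s_a + 1

# A chain from (astronomically many people, mildest harm) down to
# (one person, severe harm), shrinking the population 1000-fold per step.
chain = [(1000**k, 100 - k) for k in range(100, -1, -1)]

# Every adjacent comparison is individually easy to accept...
assert all(worse_than(chain[i], chain[i + 1]) for i in range(len(chain) - 1))
# ...and transitivity then commits you to: chain[0] (many mild harms) is
# worse than chain[-1] (one severe harm).
```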

2[anonymous]
When I first read Eliezer's post on this subject, I was confused by this transitivity argument. It seems reasonable. But even at that point, I questioned the idea that if all of the steps as you outline them seem individually reasonable, but torture instead of dust specks seems unreasonable, it is "obvious" that I should privilege the former output of my value computation over the latter.

My position now is that in fact, thinking carefully about the steps of gradually increasing pain, there will be at least one that I object to (but it's easy to miss because the step isn't actually written down). There is a degree of pain that I experience that is tolerable. Ouch! That's painful. There is an infinitesimally greater degree of pain (although the precise point at which this occurs, in terms of physical causes, depends on my mood or brain state at that particular time) that is just too much. Curses to this pain! I cannot bear this pain! This seems like a reasonable candidate for the step at which I stop you and say no, actually I would prefer any number of people to experience the former pain, rather than one having to bear the latter - that difference just barely exceeded my basic tolerance for pain.

Of course we are talking about the same subjective level of pain in different people - not necessarily caused by the same severity of physical incident. This doesn't seem ideal. However, it is more compatible with my value computation than the idea of torturing someone for the sake of 3^^^3 people with dust specks in their eyes.
1Multipartite
I can somewhat sympathise, in that when removing a plaster I prefer to remove it slowly, for a longer bearable pain, than quickly for a brief unbearable pain. However, this can only be extended so far: there is a set (expected) length of continuing bearable pain over which one would choose to eliminate the entire thing with brief unbearable pain, as with tooth disease and (hypothetical) dentistry, or unpleasant-but-survivable illness and (phobic) vaccination.

'prefer any number of people to experience the former pain, rather than one having to bear the latter': applying this across time as well as across numbers, one can reach the state of comparing {one person suffering brief unbearable pain} to {a world of pain, every person constantly existing just at the threshold at which it's possible to not go insane}. Somewhat selfishly casting oneself in the position of potential sufferer and chooser, should one look on such a world of pain and pronounce it acceptable as long as one does not have to undergo a moment of unbearable pain? Is the suffering one would undergo truly weightier than the suffering the civilisation would labor under?

The above question is arguably unfair, both in that I've extended across time without checking acceptability, and in that I've put the chooser in the position of a sacrificer. For the second part, hopefully it can be resolved by letting it be given that the chooser does not notably value another's suffering above or below the importance of the chooser's own. (Then again, maybe not.) As for time, can an infinite number of different people suffering a certain thing for one second be determined to be at least no less bad than a single person suffering the same thing for five seconds? If so, then one can hopefully extend suffering in time as well as across numbers, and thus validly reach the 'world of pain versus moment of anguish' situation.

(In regard to privileging, note that dealing with large numbers is known to cause failure of degr
[-]Shmi40

Color me irrational, but in the problem as stated (a dust speck is a minor inconvenience, with zero chance of other consequences, unlike what some commenters suggest), there is no number of specks large enough to outweigh lasting torture (which ought to be properly defined, of course).

After digging through my inner utilities, the reason for my "obvious" choice is that everyone goes through minor annoyances all the time, and another speck of dust would be lost in the noise.

In a world where a speck of dust in the eye is a BIG DEAL, because the life... (read more)

3Jack
Eliezer's question for you is "would you give one penny to prevent the 3^^^3 dust specks?"
1ArisKatsaris
And tell me, in a universe where a trillion agents individually decide that adding a dust speck to the lives of 3^^^3 people is, in your words, "NOT A BIG DEAL", and the end result is that you personally end up with a trillion specks of dust (each of them individually NOT A BIG DEAL), which leave you (and entire multiverses of beings) effectively blind -- are they collectively still not a big deal then? If it will be a big deal in such a scenario, then can you tell me which of the above trillion agents should have preferred to go with torturing a single person instead, and how they would be able to modify their decision theory to serve that purpose, if they individually must choose the specks but collectively must choose the torture (lest they leave entire multiverses and omniverses entirely blind)?
8Jack
If you have reason to suspect a trillion people are making the same decision over the same set of people the calculation changes since dust specks in the same eye do not scale linearly.
6Shmi
I stipulated "noticed and not quickly forgotten" would be my condition for considering the other choice. Certainly being buried under a mountain of sand would qualify as noticeable by the unfortunate recipient.
1ArisKatsaris
But each individual dust speck wouldn't be noticeable, and that's all each individual agent decides to add - an individual dust speck to the life of each such victim. So, again, what decision theory can dismiss the individual effect, as you would have it do, and yet take the collective effect into account?
-1Shmi
My personal decision theory has no problems dismissing noise-level influences, because they do not matter. You keep trying to replace the original problem with your own: "how many sand specks constitute a heap?" This is not at issue here, as no heap is ever formed for any single one of the 3^^^3 people.
0ArisKatsaris
That's not one of the guarantees you're given, that a trillion other agents won't be given similar choices. You're not given the guarantee that your dilemma between minute disutility for astronomical numbers, and a single huge disutility will be the only such dilemma anyone will ever have in the history of the universe, and you don't have the guarantee that the decisions of a trillion different agents won't pile up.
2Shmi
Well, it looks like we found the root of our disagreement: I take the original problem literally, one blink and THAT'S IT, while you say "you don't have the guarantee that the decisions of a trillion different agents won't pile up". My version has an obvious solution (no torture), while yours has to be analyzed in detail for every possible potential pile-up, and the impact has to be carefully calculated based on its probability, the number of people involved, and any other conceivable and inconceivable (i.e. at the probability level of 1/3^^^3) factors. Until and unless there is compelling evidence of an inevitable pile-up, I pick the no-torture solution. Feel free to prove that in a large chunk (>50%?) of all the impossible possible worlds the pile-up happens, and I will be happy to reevaluate my answer.
-4ArisKatsaris
Every election is stolen one vote at a time. My version also has an obvious solution - choosing not to inflict disutility on 3^^^3 people. That's the useful thing about having such an absurdly large number as 3^^^3. We don't really need to calculate it; "3^^^3" just wins. And if you feel it doesn't win, then 3^^^^3 would win. Or 3^^^^^3. Add as many carets as you feel are necessary. Thinking about whether the world would be better or worse if everyone decided as you did is really one of the fundamental methods of ethics, not a random bizarre scenario I just concocted for this experiment. Point is: if everyone decided as you would, it would pile up, and universes would be doomed to blindness. If everyone decided as I would, they would not pile up.
0[anonymous]
Prove it.
-4Shmi
At this level, so many different low-probability factors come into play (e.g. blinking could be good for you because it reduces incidence of eye problems in some cases), that "choosing not to inflict disutility" relies on an unproven assumption that utility of blinking is always negative, no exceptions. I reject unproven assumptions as torture justifications.
6dlthomas
If the dust speck has a slight tendency to be bad, 3^^^3 wins. If it does not have a slight tendency to be bad, it is not "the least bad bad thing that can happen to someone" - pick something worse for the thought experiment.
0Shmi
Only if you agree to follow EY in consolidating many different utilities in every possible case into one all-encompassing number, something I am yet to be convinced of, but that is beside the point, I suppose. Sure, if you pick something with a guaranteed negative utility and you think that there should be one number to bind them all, I grant your point. However, this is not how the problem appears to me. A single speck in the eye has such an insignificant utility, there is no way to estimate its effects without knowing a lot more about the problem. Basically, I am uncomfortable with the following somewhat implicit assumptions, all of which are required to pick torture over nuisance:

* a tiny utility can be reasonably well estimated, even up to a sign
* zillions of those utilities can be combined into one single number using a monotonic function
* these utilities do not interact in any way that would make their combination change sign
* the resulting number is invariably useful for decision making

A breakdown in any of these assumptions would mean needless torture of a human being, and I do not have enough confidence in EY's theoretical work to stake my decision on it.
1dlthomas
If you have a preference for some outcomes versus other outcomes, you are effectively assigning a single number to those outcomes. The method of combining these is certainly a viable topic for dispute - I raised that point myself quite recently. It was quite explicitly made a part of the original formulation of the problem.

Considering the assumptions you are unwilling to make: as I've been saying, there quite clearly seem to be things that fall in the realm of "I am confident this is typically a bad thing" and "it runs counter to my intuition that I would prefer torture to this, regardless of how many people it applied to". I addressed this at the top of this post. I think it's clear that there must be some means of combining individual preferences into moral judgments, if there is a morality at all. I am not certain that it can be done with the utility numbers alone. I am reasonably certain that it is monotonic - I cannot conceive of a situation where we would prefer some people to be less happy just for the sake of them being less happy. What is needed here is more than just monotonicity, however - it is necessary that it be divergent with fixed utility across infinite people. I raise this point here, and at this point think this is the closest to a reasonable attack on Eliezer's argument.

On balance, I think Eliezer is likely to be correct; I do not have sufficient worry that I would stake some percent of 3^^^3 utilons on the contrary, and would presently pick torture if I was truly confronted with this situation and didn't have more time to discuss, debate, and analyze. Given that there is insufficient stuff in the universe to make 3^^^3 dust specks, much less the eyes for them to fly into, I am supremely confident that I won't be confronted with this choice any time soon.
2ArisKatsaris
The point of "torture vs specks" is whether enough tiny disutilities can add up to something bigger than a single huge disutility. To argue that specks may on average have positive utility kinda misses the point, because the point we're debating isn't the value of a dust speck, or a sneeze, or a stubbed toe, or an itchy butt, or whatever -- we're just using dust speck as an example of the tiniest bit of disutility you can imagine, but which nonetheless we can agree is disutility. If dust specks don't suit you for this purpose, find another bit of tiny disutility, as tiny as you can make it. (As a sidenote the point is missed on the opposite direction by those who say "well, say there's a one billionth chance of a dust speck causing a fatal accident, you would then be killing untold numbers of people if you inflicted 3^^^^3 specks." -- these people don't add up tiny disutilities, they add up tiny probabilities. They make the right decision in rejecting the specks, but it's not the actual point of the question) Well, I can reject your unproven assumptions as justifications for inflicting disutility on 3^^^3 people, same way that I suppose spammers can excuse billions of spam by saying to themselves "it just takes a second to delete it, so it doesn't hurt anyone much", while not considering that these multiplied means they've wasted billions of seconds from the lives of people...
-1jhuffman
I think the purpose of this article is to point to some intuitive failures of a simple linear utility function. In other words, probably everyone who reads it agrees with you. The real challenge is in creating a utility function that wouldn't output the wrong answer on corner cases like this.
7Jack
No. No, that is not the purpose of the article.
0jhuffman
Sorry I've read that and still don't know what it is that I've got wrong. Does this article not indicate a problem with simple linear utility functions, or is that not its purpose?
2MinibearRex
Eliezer disagrees
1Shmi
His point of view is that this is scope insensitivity, whereas I and many others appeal to zero-aggregation, which indeed reduces any finite number (and hence the limit when this aggregation is taken to infinity) to zero. The distinction is not one of rationality vs irrationality (e.g. scope insensitivity), but of the problem setup.
2MinibearRex
If you can explain zero aggregation in more detail, or point me to a reference, that would be appreciated, since I haven't seen any full discussion of it.
2ArisKatsaris
The wrong answer is the people who prefer the specks, because that's the answer which, if a trillion people answered that way, would condemn whole universes to blindness (instead of a mere trillion beings to torture).
6Jack
Adding multiple dust specks to the same people definitely removes the linear character of the dust speck harm-- if you take the number of dust specks necessary to make someone blind and spread them out to a lot more people you drastically reduce the total harm. So that is not an appropriate way of reformulating the question. You are correct that the specks are the "wrong answer" as far as the author is concerned.
1ArisKatsaris
Did the people choosing "specks" ask whether the persons in question would have suffered other dust specks (or sneezes, hiccups, stubbed toes, etc.) immediately beforehand, inflicted by other agents deciding as they did, when they chose "specks"?
2Jack
Most people weren't asked that, I suppose -- they were asked the question as originally posed, which isn't the same as asking what people would do if they were given the power to choose one or the other. And even if people were asked the latter, it is plausible they would not assume the existence of a trillion other agents making the same decision over the same set of people. That's a rather non-obvious addition to the thought experiment, which is already foreign to everyday experience. In any case it's just not the point of the thought experiment. Take the least convenient possible world: do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
0ArisKatsaris
Yes. The consideration of how the world would look if everyone chose the same as me is a useful intuition pump, but it just illustrates the ethics of the situation, it doesn't truly modify them. Any choice isn't really just about that particular choice, it's about the mechanism you use to arrive at that choice. If people believe that it doesn't matter how many people they each inflict tiny disutilities on, the world ends up worse off.
[-]Jack150

The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding additional details to the hypothetical is not a useful intuition pump since it changes the entire character of the question.

For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.

Easy. Everyone should have the same answer.

But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.

This is exactly what you're doing with your intuition pump, except that the value of eating additional chocolate inverts at a certain point, whereas dust specks in the same eye get exponentially worse at a certain point. In both cases the utility function is not linear, and that distorts the problem.
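A sketch of the non-linearity being pointed at, with a made-up per-person harm curve (the blindness threshold and the functional form are assumptions for illustration only):

```python
# Invented per-person harm from k dust specks in the same eye: roughly
# linear for small k, exploding as k approaches blindness.
def harm_from_specks(k, blindness_threshold=10**6):
    if k >= blindness_threshold:
        return float("inf")                       # blind
    return k / (1 - k / blindness_threshold)

n = 10**6 - 1
concentrated = harm_from_specks(n)                # n specks in one eye: ~10^12
spread_out = n * harm_from_specks(1)              # one speck in each of n eyes: ~10^6
print(concentrated > spread_out)                  # True
```

Under any curve like this, restating "one speck each for 3^^^3 people" as "many specks in the same eyes" changes the total harm, which is why that reformulation is being objected to.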

3see
Only if you assume that the dust speck decisions must be made in utter ignorance of the (trillion-1) other decisions. If the ignorance is less than utter, a nonlinear utility function that accepts the one dust speck will stop making the decision in favor of dust specks before universes go blind. For example, since I know how Texas will vote for President next year (it will give its Electoral College votes to the Republican), I can instead use my vote to signal which minor-party candidate strikes me as the most attractive, thus promoting his party relative to the others, without having to worry whether my vote will elect him or cost my preferred candidate the election. Obviously, if everyone else in Texas did the same, some minor party candidate would win, but that doesn't matter, because it isn't going to happen.

Sorry I'm late. Anyway, this seems a good place to post my two (not quite) corollaries to the original post:

Corollary 1: You can choose either a or b:

a) All currently alive humans, including you, will be tortured with superhuman proficiency for a billion years, with certainty.

b) There is a 1-in-1,000,000 risk (otherwise nothing happens) that 3^^^3 animals get dust specks in their eyes. These animals have mental attributes that make them on average worth approximately 1/10^12 as much as a human. Further, the dust specks are so small only those with especia... (read more)

Some considerations:

A dust speck takes a second to remove from your eye. But it is sufficiently painful, unpleasant, or distracting that you will take that second to remove it from your eye, forsaking all other actions or thoughts for that one second. If a typical human today can expect to live for 75 years, then one second is a one-in-2.3-billion part of a life. And that part of that life is indeed taken away from that person, since they surely are not pursuing any other end for the second it takes to remove that dust speck. If all moments of life were co... (read more)
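The life-length figure used above checks out, assuming the 75-year lifespan given in the comment:

```python
# 75 years expressed in seconds.
SECONDS_PER_YEAR = 365.25 * 24 * 3600
print(f"{75 * SECONDS_PER_YEAR:.2e}")   # ~2.37e+09, i.e. roughly 2.3 billion seconds
```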

4Jack
I think concentrating specks in one person over the course of her life increases the magnitude of the harm non-linearly.
2fubarobfusco
Yes, it does. But not to the ratio of 3^^^3 over 2.3 billion.

Alternative phrasing of the problem: do you prefer a certain chance of having a dust speck in your eye, or a one-in-3^^^3 chance of being tortured for 50 years?

When you consider that we take action to avoid minor discomforts, but don't always take action to avoid small risks of violence or rape etc., we make choices like that pretty often, with higher chances of bad things happening.

0Jack
Wait. Which side of the rephrasing corresponds to which side of the original?
3Emile
Certain chance of dust speck = 3^^^3 people get dust specks; One-in-3^^^3 chance of torture = one person gets tortured for 50 years. (Just consider a population of 3^^^3, and choose between them all getting dust specks, or one of them getting tortured. If I was in that population, I'd vote for the torture.)
0jpulgarin
This alternate phrasing (considering a population of 3^^^3 and choosing all dust specks vs one tortured) is actually quite a different problem. Since I care much more about my own utility than the utility of a random person, I feel a stronger pull towards giving everyone an extra dust speck compared to the original phrasing. I think a more accurate rephrasing would be: you will live 3^^^3 consecutive lives (via reincarnation, of course). You can choose to get an extra dust speck in your eye in each lifetime, or be tortured in a single random lifetime.
1Emile
I'm not sure how the population-based phrasing changes things. Note that I didn't specify whether the decider is part of that population. And I don't think it even matters whether "I" am part of the population: if I prefer A to B for myself, I should also prefer A to B for others, regardless of how differently I weight my welfare vs. their welfare.
0jpulgarin
You're right, for some reason I thought the decider was part of the population. I've also updated towards choosing torture if I were part of that population.

An interesting related question would be: what would people in a big population Q choose if given the alternatives of extreme pain with probability p = 1/Q or tiny pain with probability p = 1? In the framework of expected utility theory you'd have to include not only the sizes of the pains and the size of the population but also the risk aversion of the person asked. So it's not only about adding up small utilities.
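A minimal sketch of how risk aversion can change that comparison; the pain magnitudes, the population size, and the exponent are invented for illustration:

```python
# Choice faced by each member of a population of size Q:
#   (a) extreme pain with probability 1/Q, or (b) tiny pain with probability 1.
Q = 10**9
tiny_pain = 1.0
extreme_pain = 10**8

# Risk-neutral: compare expected pain. Here the rare extreme harm looks better.
print(extreme_pain / Q < tiny_pain)                        # True

# Risk-averse: apply a convex transform to pain before taking expectations
# (gamma > 1 makes rare catastrophes loom larger). Now the sure tiny pain wins.
gamma = 1.5
print((extreme_pain ** gamma) / Q > tiny_pain ** gamma)    # True
```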

[-][anonymous]00

Perhaps the answer is that there are multiple hierarchies of [dis]utility, for instance: n dust specks (where n is less than enough to permanently damage the eye or equate to a minimal pain unit) is hierarchy 1, a slap in the face is hierarchy 3, torture is hierarchy 50 (these numbers are just an arbitrary example) and the [dis]utility at hierarchy x+1 is infinitely worse than the [dis]utility at hierarchy x. Adding dust specks to more people won't increase the hierarchy, but adding more dust specks to the same person eventually will.

I just noticed this argument, I hope I'm not too late in expressing my view.

Premise: I want to live in the universe with the least amount of pain.

And now for some calculations. For the sake of quantification, let's assume that the single tortured person will receive 1 whiplash per second, continuously, for 50 years. Let's also assume that the pain of 1 whiplash is equivalent to 1 "pain unit". Thus, if I chose to torture that person, I would add 3600 "pain units" per hour to the total amount of pain in the universe. In 1 day, the am... (read more)
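Carrying the arithmetic started above through to a total (this just tallies the commenter's own "pain units"; how they weigh against 3^^^3 specks is the question of the thread, not something computed here):

```python
# 1 whiplash per second, continuously, for 50 years.
per_hour = 3600
per_day = per_hour * 24            # 86,400
per_year = per_day * 365.25        # ~31.6 million
per_50_years = per_year * 50
print(f"{per_50_years:.2e}")       # ~1.58e+09 "pain units", versus 3^^^3 dust specks
```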

One way to think about this is to focus on how small one person is compared to 3^^^3 people. You're unlikely to notice the dust speck each person feels, but you're much, much less likely to notice the one person being tortured against a background of 3^^^3 people. You could spend a trillion years searching at a rate of one galaxy per Planck time and you won't have any realistic chance of finding the person being tortured.

Of course, you noticed the person being tortured because they were mentioned in only a few paragraphs of text. It makes them more noticeable. It doesn't make them more important. Every individual is important. All 3^^^3 of them.

If Omega tells you that he will give either 1¢ each to 3^^^3 random people or $100,000,000,000.00 to the SIAI, and that you get to choose which course of action he should take, what would you do? That's a giant amount of distributed utility vs a (relatively) modest amount of concentrated utility.

I suspect that part of the exercise is not to outsmart yourself.

1ArisKatsaris
Let me note for a sec some not-true-objections:

(a) A single cent coin is more of a disutility for me, considering value vs the space it takes in my wallet.
(b) Adding money to the economy doesn't automatically increase the value anyone can use.
(c) Bad and stupid people having more money would actually be of negative utility, as they'd give the money to bad and stupid causes.
(d) Perhaps FAI is the one scenario which truly outweighs even 3^^^3 utilons.

Now for the true reason: I'd choose the money going to SIAI, but that'd be strictly selfish/tribal thinking, because I live on the planet which SIAI has some chance of improving, and so the true calculation would be about 7 billion people getting a coin each, not 3^^^3 people getting a coin each. If my utility function was truly universal in scope, the 3^^^3 cents (barring the not-true objections noted above) would be the correct choice.

My utility function says SPECKS. I thought it was because it was rounding the badness of a dust speck down to zero.

But if I modify the problem to be 3^^^3 specks split amongst a million people and delivered to their eyes at a rate of one per second for the rest of their lives, it says TORTURE.

If the badness of specks add up when applied to a single person, then a single dust speck must have non-zero badness. Obviously, there's a bug in my utility function.

3VincentYu
If I drink 10 liters of water in an hour, I will die from water intoxication, which is bad. But this doesn't mean that drinking water is always bad - on the contrary, I think we'll agree that drinking some water every once in a while is good. Utility functions don't have to be linear - or even monotonic - over repeated actions. With that said, I agree with your conclusion that a single dust speck has non-zero (in particular, positive) badness.
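A sketch of that non-monotonicity with an invented utility curve for water intake (all numbers arbitrary):

```python
# Invented utility of drinking a given amount of water in an hour: rises,
# flattens, then turns sharply negative (water intoxication).
def water_utility(liters_per_hour):
    if liters_per_hour <= 2:
        return liters_per_hour
    if liters_per_hour <= 6:
        return 2
    return 2 - (liters_per_hour - 6) ** 2

for liters in (1, 2, 5, 10):
    print(liters, water_utility(liters))   # 1 -> 1, 2 -> 2, 5 -> 2, 10 -> -14
```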
2lavalamp
You know what? You are absolutely right. If the background rate at which dust specks enter eyes is, say, once per day, then an additional dust speck is barely even noticeable. The 3^^^3 people probably wouldn't even be able to tell that they got an "extra" dust speck, even if they were keeping an excel spreadsheet and making entries every time they got a dust speck in their eye, and running relevant statistics on it. I think I just switched back to SPECKS. If a person can't be sure that something even happened to them, my utility function is rounding it off to zero.
2AlexSchell
This may be already obvious to you, but such a utility function is incoherent (as made vivid by examples like the self-torturer).
1occlude
I expect that more than one of my brain modules are trying to judge between incompatible conclusions, and selectively giving attention to the inputs of the problem. My thinking was similar to yours -- it feels less like I'm applying scope insensitivity and more like I'm rounding the disutility of specks down due to their ubiquity, or their severity relative to torture, or the fact that the effects are so dispersed. If one situation goes unnoticed, lost in the background noise, while another irreparably damages someone's mind, then that should have some impact on the utility function. My intuition tells me that this justifies rounding the impact of a speck down to zero, that the difference is a difference of kind, not of degree, that I should treat these as fundamentally different. At the same time, like Vincent, I'm inclined to assign non-zero disutility value to a speck. One brain, two modules, two incompatible judgements. I'm willing to entertain the possibility that this is a bug. But I'm not ready yet to declare one module the victor.