[SEQ RERUN] Torture vs. Dust Specks
Today's post, Torture vs. Dust Specks, was originally published on 30 October 2007. A summary (taken from the LW wiki):
If you had to choose between torturing one person horribly for 50 years, or putting a single dust speck into the eyes of 3^^^3 people, what would you do?
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Motivated Stopping and Motivated Continuation, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.
Comments (83)
I still have trouble seeing where people are coming from on this. My moral judgment software does not accept 3^^^3 dust specks as an input. And I don't have instructions to deal with such cases by assigning a dust speck a value of -1 util and torture a very large negative value that is still > -3^^^3 utils. I recognize my brain is just not equipped to deal with such numbers, and I am comfortable adjusting my empirical beliefs involving incomprehensibly large numbers in order to compensate for bias. But I am not comfortable adjusting my moral judgments in this way -- because while I have a model of an ideally rational agent, I do not have a model of an ideally moral agent, and I am deeply skeptical that one exists. In other words, I recognize my 'utility function' is buggy, but my 'utility function' says I should keep the bugs, since otherwise I might no longer act in the buggy way that constitutes ethical behavior.
The claim that the answer is "obvious" is troubling.
Another way to reach the conclusion that dust specks are worse is by transitivity. Consider something that is slightly worse than getting a dust speck in your eye. For instance, maybe hearing the annoying sound of static on television is just a bit worse, as long as it's relatively brief and low volume. Now,
1a. Which is worse: everyone on Earth gets a dust speck in their eye, or one person hears a second of the annoying sound of static on a television with the volume set at a fairly low level [presumably you think that the dust specks are worse]
1b. Which is worse: one person briefly hears static, or 7 billion people each get a dust speck [generalizing 1a, to not depend on population of Earth or fact that it's "everyone"]
1c. Which is worse: n people briefly hear static, or (7 billion) x n people get a dust speck [generalizing 1b, equivalent to repeating 1b n times]
Now, consider something that is slightly worse than the static (or whatever you picked). For instance, maybe someone lightly flicking their finger into the palm of your hand is a bit more unpleasant.
2a. Which is worse: everyone on Earth hears a second of the annoying sound of fairly low volume static, or one person gets lightly flicked in the palm, the sensation of which fades entirely within a few seconds
2b. Which is worse: one person gets lightly flicked in the palm, or 7 billion people each briefly hear static
2c. Which is worse: n people get lightly flicked in the palm, or (7 billion) x n people each briefly hear static
2d. Which is worse: n people get lightly flicked in the palm, or (7 billion)^2 x n people get a dust speck [transitivity, from 1c & 2c]
And keep gradually increasing the badness of the alternative, with dust specks remaining the worse option (by transitivity), until you get to:
10000a. Which is worse: everyone on Earth gets 25 years of torture, or one person gets 50 years of torture
10000b. Which is worse: one person gets 50 years of torture, or 7 billion people each get 25 years of torture
10000c. Which is worse: n people get 50 years of torture, or (7 billion) x n people get 25 years of torture
10000d. Which is worse: n people get 50 years of torture, or (7 billion)^10000 x n people get a dust speck
10000e. Which is worse: 1 person gets 50 years of torture, or 3^^^3 people get a dust speck [from 10000d, letting n=1 and drastically increasing the number of dust specks]
If you think that torture is worse than dust specks, at what step do you not go along with the reasoning?
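The scale of the chained comparison can be made concrete with a quick calculation. This is a sketch of the commenter's own hypothetical numbers (10,000 steps, a 7-billion-fold trade per step), not anything from the original post:

```python
import math

# Each step of the chain trades n sufferers for (7 billion) * n sufferers
# of a slightly milder harm, so after STEPS steps the dust-speck side
# needs POPULATION**STEPS recipients.
POPULATION = 7_000_000_000   # people traded per step (commenter's figure)
STEPS = 10_000               # harm gradations from dust speck to torture

digits = STEPS * math.log10(POPULATION)
print(f"(7 billion)^10000 has about {digits:,.0f} digits")  # about 98,451

# 3^^^3 dwarfs that: even 3^^3 = 3^27 already exceeds the per-step factor,
# and 3^^^3 is a tower of 3s whose *height* is 3^27.
print(3**27)  # 7625597484987
```

So step 10000e, replacing (7 billion)^10000 × n with 3^^^3, only makes the dust-speck side astronomically more numerous.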
When I first read Eliezer's post on this subject, I was confused by this transitivity argument. It seems reasonable. But even at that point, I questioned the idea that if all of the steps as you outline them seem individually reasonable, but torture instead of dust specks seems unreasonable, it is "obvious" that I should privilege the former output of my value computation over the latter.
My position now is that in fact, thinking carefully about the steps of gradually increasing pain, there will be at least one that I object to (but it's easy to miss because the step isn't actually written down). There is a degree of pain that I experience that is tolerable. Ouch! That's painful. There is an infinitesimally greater degree of pain (although the precise point at which this occurs, in terms of physical causes, depends on my mood or brain state at that particular time) that is just too much. Curses to this pain! I cannot bear this pain!
This seems like a reasonable candidate for the step at which I stop you and say no, actually I would prefer any number of people to experience the former pain, rather than one having to bear the latter - that difference just barely exceeded my basic tolerance for pain. Of course we are talking about the same subjective level of pain in different people - not necessarily caused by the same severity of physical incident.
This doesn't seem ideal. However, it is more compatible with my value computation than the idea of torturing someone for the sake of 3^^^3 people with dust specks in their eyes.
I can somewhat sympathise, in that when removing a plaster I prefer to remove it slowly, for a longer bearable pain, rather than quickly, for a brief unbearable pain. However, this can only be extended so far: there is a set (expected) length of continuing bearable pain over which one would choose to eliminate the entire thing with brief unbearable pain, as with tooth disease and (hypothetical) dentistry, or an unpleasant-but-survivable illness and (phobic) vaccination.
'prefer any number of people to experience the former pain, rather than one having to bear the latter': applying this across time as well as across numbers, one can reach the state of comparing {one person suffering brief unbearable pain} to {a world of pain, every person constantly existing just at the threshold at which it's possible to not go insane}. Somewhat selfishly casting oneself in the position of potential sufferer and chooser, should one look on such a world of pain and pronounce it acceptable as long as one does not have to undergo a moment of unbearable pain? Is the suffering one would undergo truly weightier than the suffering the civilisation would labor under?
The above question is arguably unfair both in that I've extended across time without checking acceptability, and also in that I've put the chooser in the position of a sacrificer. For the second part, hopefully it can be resolved by letting it be given that the chooser does not notably value another's suffering above or below the importance of the chooser's own. (Then again, maybe not.)
As for time: can an infinite number of different people each suffering a certain thing for one second be judged at least as bad as a single person suffering the same thing for five seconds? If so, then one can hopefully extend suffering in time as well as across numbers, and thus validly reach the 'world of pain versus moment of anguish' situation.
(In regard to privileging, note that dealing with large numbers is known to cause failure of degree appreciation due to the brain's limitations, whereas induction tends to be reliable.)
Here's a good way of looking at the problem.
Presumably, there's going to be some variation with how the people are feeling. Given 3^^^3 people, this will mean that I can pretty much find someone under any given amount of pleasure/pain.
Suppose I find someone, Bob, with the same baseline happiness as the person we're considering torturing, Alice. I put a speck of dust in his eye. I then find someone with a nigh-infinitesimally worse baseline, Charlie, and do it again. I keep this up until I get to a person, Zack, who, after the dust speck is put in his eye, is at the same happiness Alice would be at if she were tortured.
To put numbers on this:
Alice and Bob have a base pain of 0, Charlie has 1, Dianne has 2, ... Zack has 999,999,999,999. I then add one unit of pain to each person other than Alice. Now Alice has 0, Bob has 1, Charlie has 2, ... Yaana has 999,999,999,999, Zack has 1,000,000,000,000. I could instead torture one person: Alice has 1,000,000,000,000, Bob has 0, Charlie has 1, ... Zack has 999,999,999,999. In other words, Bob has 0, Charlie has 1, Dianne has 2, ... Zack has 999,999,999,999, Alice has 1,000,000,000,000.
It's the same numbers both ways -- just different people. The only way you could decide which is better is if you care more or less than average about Alice.
Of course, this is just using 1,000,000,000,000 of 3^^^3 people. Add in another trillion, and now it's like torturing two people. Add in another trillion, and it's worse still. You get the idea.
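The permutation argument above can be checked directly on a scaled-down version (ten pain levels standing in for a trillion; the names and levels are the commenter's illustration):

```python
# Scaled-down sketch of the Alice/Bob/.../Zack argument: the speck world
# and the torture world produce identical multisets of pain levels.
N = 10  # stand-in for 1,000,000,000,000

alice = 0
others = list(range(N))  # Bob=0, Charlie=1, ..., Zack=N-1

# Option 1: a speck (one pain unit) for everyone except Alice.
specks = sorted([alice] + [p + 1 for p in others])
# Option 2: torture Alice (pain N), leave everyone else untouched.
torture = sorted([N] + others)

assert specks == torture  # same pain profile, just borne by different people
```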
...
If Yudkowsky had set up his thought experiment in this way, I would agree with him. But I don't believe there's any reason to expect there to be a distribution of pain in the way that you describe - or in any case it seems like Yudkowsky's point should generalise, and I'm not sure that it does.
If all 3^^^3 + 1 people are on the pain level of 0, and then I have the choice of bringing them all up to pain level 1 or leaving 3^^^3 of them on pain level 0 and bringing one of them up to pain level 1,000,000,000,000 - I would choose the former.
I may have increased the number of pain units in existence, but my value computation doesn't work by adding up "pain units". I'm almost entirely unconcerned about 3^^^3 people experiencing pain level 1; they haven't reached my threshold for caring about the pain they are experiencing. On the other hand, the individual being tortured is way above this threshold and so I do care about him.
I don't know where the threshold(s) are, but I'm sure that if my brain was examined closely there would be some arbitrary points at which it decides that someone else's pain level has become intolerable. Since these jumps are arbitrary, this would seem to break the idea that "pain units" are additive.
Is the distribution necessary (other than as a thought experiment)?
Simplifying to a 0->3 case: If changing (in the entire universe, say) all 0->1, all 1->2, and all 2->3 is judged as worse than changing one person's 0->3 --for the reason that, for an even distribution, the 1s and 2s would stay the same number while the 3s would increase and the 0s decrease-- then for what hypothetical distribution would it be even worse, and for what hypothetical distribution would it be less bad? Is it worse if there are only 0s who all become 1s, or is it worse if there are only 2s who all become 3s? Is a dust speck classed as worse if you do it to someone being tortured than to someone in a normal life, or vice versa, or is it just as bad no matter what the distribution, in which case the distribution is unimportant?
...then again, if one weighs matters solely on magnitude of individual change, then that greater difference can appear and disappear like a mirage when one shifts back and forth considering those involved collectively or reductionistically... hrm. Intuitively speaking, it seems inconsistent to state that 4A, 4B and 4C are acceptable, but A+B+C is not acceptable (where A is N people 0->1, B is N 1->2, C is N 2->3).
...the aim of the even distribution example is perhaps to show that by the magnitude-difference measurement the outcome can be worse, then break it down to show that for uneven cases too the suffering inflicted is equivalent and so for consistency one must continue to view it as worse...
(Again, this time shifting it to a 0-1-2, why would it be {unacceptable for N people to be 1->2 if and only if N people were also 0->1, but not unacceptable for N people to be 1->2 if 2N more people were 1->2} /and also/ {unacceptable for N people to be 0->1 if and only if N people were also 1->2, but not unacceptable for N people to be 0->1 if 2N more people were 0->1}?)
The arbitrary-points concept, rather than a smooth gradient, is also a reasonable point to consider. For a smooth gradient, the more pain another person is going through, the more objectionable it is. For an arbitrary threshold, one could find one person's great suffering not to be an objectionable thing, yet find someone else suffering a negligible amount more to be a significantly objectionable thing. Officially adopting such a cut-off point for sympathy--particularly one based on an arbitrarily-arrived-at brain structure rather than well-founded ethical/moral reasoning--would seem to be incompatible with true benevolence and desire for others' well-being, suggesting that even if such arbitrary thresholds exist we should aim to act as though they did not.
(In other words, if we know that we are liable to not scale our contribution depending on the scale of (the results of) what we're contributing towards, we should aim to take that into account and deliberately, manually, impose the scaling that otherwise would have been left out of our considerations. In this situation, if as a rule of thumb we tend to ignore low suffering and pay attention to high suffering, we should take care to acknowledge the unpleasantness of all suffering and act appropriately when considering decisions that could control such suffering.
(Preferable to not look back in the future and realise that, because of overreliance on hardwired rules of thumb, one had taken actions which betrayed one's true system of values. If deliberately rewiring one's brain to eliminate the cut-off crutches, say, one would hopefully prefer to at that time not be horrified by one's previous actions, but rather be pleased at how much easier taking the same actions has become. Undesirable to resign oneself to being a slave of one's default behaviour.)
Why would they all be at pain number zero? I'd expect them to be randomly distributed in all their traits unless specified otherwise. If I give them a mean pain of zero and a standard deviation of 1, there'd be no shortage of people with a pain level of 1,000,000,000,000. The same goes for any reasonable distribution.
If you play around with my paradox a bit more, you can work out that if you have 1,000,000,000,000 people at pain level n, and one person at pain level zero, there must be some n between 0 and 999,999,999,999 such that it's at least as bad to torture the one person as to give the rest dust specks.
Where is the marginal disutility like that? If you have 1,000,000,000,000 people at pain 999,999,999,999, and one at pain 0, would you rather torture the one, or give the 1,000,000,000,000 dust specks?
I would expect a cutoff like this would be an approximation. You'd actually think that the marginal disutility of pain starts out at zero, and steadily increases until it approaches one. If this were true, one dust speck would bring the pain to 1, which would make the marginal disutility slightly above zero, so that would have some tiny amount of badness. If you multiply it by 3^^^3, now it's unimaginable.
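The smoothed-threshold idea above can be sketched with a logistic curve. The curve shape and its parameters here are my own illustrative choices, not anything from the comment; the point is only that a smooth threshold still assigns every speck a strictly positive marginal disutility:

```python
import math

# Illustrative marginal disutility of pain: near zero for tiny pains,
# rising smoothly toward one past a soft threshold (parameters are
# arbitrary stand-ins, not a claimed model of anyone's values).
def marginal_disutility(pain, threshold=500.0, sharpness=0.02):
    return 1.0 / (1.0 + math.exp(-sharpness * (pain - threshold)))

speck = marginal_disutility(1)                     # tiny but > 0
torture = marginal_disutility(1_000_000_000_000)   # essentially 1

assert 0 < speck < 1e-4 and torture > 0.999
```

Multiply that tiny-but-positive speck value by 3^^^3 and it dominates any finite torture term, which is the commenter's point.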
It's a thought experiment. The whole scenario is utterly far-fetched, so there's no use in arguing that this or that detail of the thought experiment is what we should "expect" to find.
As such, I choose the version of the thought experiment that best teases out the dilemma that Yudkowsky is trying to explore, which concerns the question of whether we should consider pain to be denominated all in the same units - i.e. 3^^^3 x miniscule pain > 1 x torture - in our moral calculations.
EDIT: in response to the rest of your comment, see my reply to "Unnamed".
To get Eliezer's point, make the world more inconvenient. 3^^^3 people all with equivalent pain tolerances to you getting dust specks in their eyes, or torture one person for 50 years.
I believe the problem with this is that you have given actual values (pain units), and equated the two levels of "torture" outlined in the original thought experiment. Specifically, equating one trillion humans with a dust speck in the eye and Alice being tortured.
So, what's the problem? Is a dust speck incomparable to torture? A dust speck is comparable to something slightly worse than a dust speck, which is comparable to something slightly worse than that, etc. At some point, you'll compare dust specks to torture. You may not live long enough to follow that out explicitly, just like you could never start with one grain of sand and keep adding them one at a time to get a beach, but the comparison still exists.
No comparison exists if, as I mentioned in my other post, the fleeting discomfort is lost in the noise of other minor nuisances and has no lasting effect. One blink, and the whole thing is forgotten forever, quickly replaced by an itch in your bum, flickering fluorescent light overhead, your roommate coughing loudly, or an annoying comment on LW.
One speck of sand will be lost in a beach, but adding a speck of sand will still make it a bigger beach, and adding 3^^^3 specks of sand will make it a black hole.
You notice it while it's happening. You forget about it eventually, but even if you were tortured for 3^^^3 years before finally dying, you'd forget it all the moment you die.
I consider it a faulty analogy. Here is one I like better: if the said speck of dust disintegrates into nothing after an instant, there is no bigger beach and no black hole.
If you consider the disutility of the dust speck zero, because the brief annoyance will be forgotten, then can the disutility of the torture also be made into zero, if we merely add the stipulation that the tortured person will then have the memory of this torture completely erased and the state of their mind reverted to what it had been before the torture?
This is an interesting question, but it seems to be in a different realm. For example, it could be reformulated as follows: is this 50-year torture option that bad if it is parceled into 1 second chunks and any memory of each one is erased immediately, and it has no lasting side effects.
For the purpose of this discussion, I assume that it is 50 dismal years with all the memories associated and accumulated all the way through and thereafter. In that sense it is qualitatively in a different category than a dust speck. This might not be yours (or EY's) interpretation.
6 × 10^30 kilograms of sand on one beach on one inhabited planet will collapse it into a black hole, which is far, far smaller amount of mass than 3^^^3 molecules of silicon dioxide. But adding one molecule of silicon dioxide to each of 3^^^3 beaches on inhabited planets throughout as many universes as necessary seems to cause far less disutility than adding 6 × 10^30 kilograms of sand to one beach on one inhabited planet.
Is the problem that we're unable to do math? You can't possibly say one molecule of silicon dioxide is incomparable to 6 × 10^30 kilograms of sand, can you? They're indisputably the same substance, after all; 6 × 10^55 molecules of SiO2 is 6 × 10^30 kilograms of sand. Even if you make the disutility nonlinear, you have to do something really, really extreme to overcome 3^^^3 ... and if you do that, why, let's substitute in 3^^^^3 or 3^^^^^3 instead.
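The molecules-to-mass conversion in the comment checks out. A quick sketch (molar mass and Avogadro's number are standard constants; the 6 × 10^30 kg figure is the comment's own):

```python
# Rough check: does 6e55 molecules of SiO2 really weigh ~6e30 kg?
AVOGADRO = 6.022e23          # molecules per mole
MOLAR_MASS_SIO2 = 0.060      # kg per mole (~60 g/mol)

molecules = 6e55
mass_kg = molecules / AVOGADRO * MOLAR_MASS_SIO2
print(f"{mass_kg:.1e} kg")   # ~6.0e+30 kg, matching the comment
```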
Is the problem that we are failing to evaluate what happens if everybody else makes the same decision? If 6 × 10^55 people were given the decision and they all chose the molecule, 3^^^3 inhabited planets would be converted into black holes, while if they all made the other choice, only 6 × 10^55 planets would be. So when faced with an option that seems to cause no disutility, must we annihilate seven billion people because, if enough other people made our decision, it would be far worse than if we and all of them made the other?
My point wasn't so much that it will cause a black hole, as that a tiny amount of disutility times 3^^^3 is going to be unimaginably horrible, regardless of how small the disutility is.
That's not the problem at all. Thinking about that is a good sanity check. If it's good to make that decision once, it's better to make it 10^30 times. However, it's only a sanity check. Everybody isn't going to make the same decision as you, so there's no reason to assume they will.
The original thought experiment is used to provide a pure example of quantifying and comparing arbitrary levels of suffering as a test to see whether we support such a type of utilitarian consequentialism.
By comparing torture to torture, you are changing the scenario to test a slightly weaker version of the original type of utilitarian consequentialism where you do quantify and compare arbitrary changes to absolute levels of suffering with arbitrary absolute levels of suffering but not necessarily allowing the two instances of absolute levels of suffering to be arbitrary with respect to each other.
If anyone could rewrite this comment to be comprehensible I would appreciate it.
Color me irrational, but in the problem as stated (a dust speck is a minor inconvenience, with zero chance of other consequences, unlike what some commenters suggest), there is no number of specks large enough to outweigh lasting torture (which ought to be properly defined, of course).
After digging through my inner utilities, the reason for my "obvious" choice is that everyone goes through minor annoyances all the time, and another speck of dust would be lost in the noise.
In a world where a speck of dust in the eye is a BIG DEAL, because the life is otherwise so PERFECT, even one speck is noticed and not quickly forgotten, such occurrences can be accumulated and compared with torture. However, this was not specified in the original problem, so I assume that people live through the calamities of the speck of dust magnitude all the time, and adding one more changes nothing.
Eliezer's question for you is "would you give one penny to prevent the 3^^^3 dust specks?"
I think the purpose of this article is to point to some intuitive failures of a simple linear utility function. In other words, probably everyone who reads it agrees with you. The real challenge is in creating a utility function that wouldn't output the wrong answer on corner cases like this.
No. No, that is not the purpose of the article.
Sorry I've read that and still don't know what it is that I've got wrong. Does this article not indicate a problem with simple linear utility functions, or is that not its purpose?
Eliezer disagrees
His point of view is
whereas I and many others appeal to zero-aggregation, which indeed reduces any finite number (and hence the limit when this aggregation is taken to infinity) to zero.
The distinction is not that of rationality vs irrationality (e.g. scope insensitivity), but of the problem setup.
If you can explain zero aggregation in more detail, or point me to a reference, that would be appreciated, since I haven't seen any full discussion of it.
The wrong answer is the people who prefer the specks, because that's the answer which, if a trillion people answered that way, would condemn whole universes to blindness (instead of a mere trillion beings to torture).
Only if you assume that the dust speck decisions must be made in utter ignorance of the (trillion-1) other decisions. If the ignorance is less than utter, a nonlinear utility function that accepts the one dust speck will stop making the decision in favor of dust specks before universes go blind.
For example, since I know how Texas will vote for President next year (it will give its Electoral College votes to the Republican), I can instead use my vote to signal which minor-party candidate strikes me as the most attractive, thus promoting his party relative to the others, without having to worry whether my vote will elect him or cost my preferred candidate the election. Obviously, if everyone else in Texas did the same, some minor party candidate would win, but that doesn't matter, because it isn't going to happen.
Adding multiple dust specks to the same people definitely removes the linear character of the dust speck harm-- if you take the number of dust specks necessary to make someone blind and spread them out to a lot more people you drastically reduce the total harm. So that is not an appropriate way of reformulating the question. You are correct that the specks are the "wrong answer" as far as the author is concerned.
Did the people choosing "specks" ask whether the persons in question would have suffered other dust specks (or sneezes, hiccups, stubbed toes, etc) immediately previous by potentially other agents deciding as they did, when they chose "specks"?
Most people didn't, I suppose -- they were asked:
Which isn't the same as asking what people would do if they were given the power to choose one or the other. And even if people were asked the latter, it is plausible that they would not assume the existence of a trillion other agents making the same decision over the same set of people. That's a rather non-obvious addition to a thought experiment which is already foreign to everyday experience.
In any case it's just not the point of the thought experiment. Take the least convenient possible world: do you still choose torture if you know for sure there are no other agents choosing as you are over the same set of people?
Yes. The consideration of what the world would look like if everyone chose the same as me is a useful intuition pumper, but it just illustrates the ethics of the situation; it doesn't truly modify them.
Any choice isn't really just about that particular choice, it's about the mechanism you use to arrive at that choice. If people believe that it doesn't matter how many people they each inflict tiny disutilities on, the world ends up worse off.
The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding additional details to the hypothetical is not a useful intuition pump since it changes the entire character of the question.
For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.
Easy. Everyone should have the same answer.
But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.
This is exactly what you're doing with your intuition pump except the value of eating additional chocolate inverts at a certain point whereas dust specks in your eye get exponentially worse at a certain point. In both cases the utility function is not linear and thus distorts the problem.
And tell me, in a universe where a trillion agents individually decide that adding a dust speck to the lives of 3^^^3 people is in your words "NOT A BIG DEAL", and the end result is that you personally end up with a trillion specks of dust (each of them individually NOT A BIG DEAL), which leave you (and entire multiverses of beings) effectively blind -- are they collectively still not a big deal then?
If it will be a big deal in such a scenario, then can you tell me which ones of the above trillion agents should have preferred to go with torturing a single person instead, and how they would be able to modify their decision theory to serve that purpose, if they individually must choose the specks but collectively must choose the torture (lest they leave entire multiverses and omniverses entirely blind)?
If you have reason to suspect a trillion people are making the same decision over the same set of people the calculation changes since dust specks in the same eye do not scale linearly.
I stipulated "noticed and not quickly forgotten" would be my condition for considering the other choice. Certainly being buried under a mountain of sand would qualify as noticeable by the unfortunate recipient.
But each individual dust speck wouldn't be noticeable, and that's all each individual agent decides to add: an individual dust speck to the life of each such victim.
So, again, what decision theory can somehow dismiss the individual effect as you would have it do, and yet take into account the collective effect?
My personal decision theory has no problems dismissing noise-level influences, because they do not matter.
You keep trying to replace the original problem with your own: "how many sand specks constitute a heap?" This is not at issue here, as no heap is ever formed for any single one of the 3^^^3 people.
That's not one of the guarantees you're given, that a trillion other agents won't be given similar choices. You're not given the guarantee that your dilemma between minute disutility for astronomical numbers, and a single huge disutility will be the only such dilemma anyone will ever have in the history of the universe, and you don't have the guarantee that the decisions of a trillion different agents won't pile up.
Well, it looks like we found the root of our disagreement: I take the original problem literally, one blink and THAT'S IT, while you say "you don't have the guarantee that the decisions of a trillion different agents won't pile up".
My version has an obvious solution (no torture), while yours has to be analyzed in detail for every possible potential pile up, and the impact has to be carefully calculated based on its probability, the number of people involved, and any other conceivable and inconceivable (i.e. at the probability level of 1/3^^^3) factors.
Until and unless there is a compelling evidence of an inevitable pile-up, I pick the no-torture solution. Feel free to prove that in a large chunk (>50%?) of all the impossible possible worlds the pile-up happens, and I will be happy to reevaluate my answer.
If Omega tells you that he will give either 1¢ each to 3^^^3 random people or $100,000,000,000.00 to the SIAI, and that you get to choose which course of action he should take, what would you do? That's a giant amount of distributed utility vs a (relatively) modest amount of concentrated utility.
I suspect that part of the exercise is not to outsmart yourself.
Let me note for a sec some not-true-objections: (a) A single cent coin is more of a disutility for me, considering value vs space it takes in my wallet. (b) Adding money to the economy doesn't automatically increase the value anyone can use. (c) Bad and stupid people having more money would be actually of negative utility, as they'd give the money to bad and stupid causes. (d) Perhaps FAI is the one scenario which truly outweighs even 3^^^3 utilons.
Now for the true reason: I'd choose the money going to SIAI, but that'd be strictly selfish/tribal thinking, because I live in the planet which SIAI has some chance of improving, and so the true calculation would be about 7 billion people getting a coin each, not 3^^^3 people getting a coin each. If my utility function was truly universal in scope, the 3^^^3 cents (barring not-true objections noted above) would be the correct choice.
An interesting related question would be: what would people in a big population Q choose if given the alternatives of extreme pain with probability p=1/Q, or tiny pain with probability p=1? In the framework of expected utility theory you'd have to include not only the sizes of the pains and the sizes of the populations but also the risk aversion of the person asked. So it's not only about adding up small utilities.
Some considerations:
A dust speck takes a second to remove from your eye. But it is sufficiently painful, unpleasant, or distracting that you will take that second to remove it, forsaking all other actions or thoughts for that one second. If a typical human today can expect to live for 75 years, then one second is a one-in-2.3-billion part of a life. And that part of that life is indeed taken away from that person, since they surely are not pursuing any other end for the second it takes to remove that dust speck. If all moments of life were considered equal, then 2.3 billion dust specks would be equal to one life spent entirely dealing with constant — but instant, which is to say, memoryless — moments of unpleasant distraction.
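As a sanity check on the one-in-2.3-billion figure, a few lines of Python (assuming a 75-year lifespan and an average year length) give roughly the same number:

```python
# Sanity check: how many seconds are in a 75-year life?
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ≈ 31,557,600
LIFETIME_YEARS = 75

seconds_in_a_life = LIFETIME_YEARS * SECONDS_PER_YEAR
print(f"{seconds_in_a_life:.2e}")  # ≈ 2.37e9, i.e. roughly 2.3 billion seconds
```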
One of the things that is distracting about the word "torture" is that in our world, torture is something that is inflicted by some person. Someone in agonizing pain from, say, cancer, is not literally being tortured; that is, no agent chose to put that person in that situation. Human values consider the badness of an agent's intentional, malicious action to be worse than the equivalent consequence caused by non-agent phenomena. Torture implies a torturer.
It seems to me that one distinction between suffering and pain is that suffering includes a term for the knowledge that I am being diminished by what is happening to me: it is not merely negative utility, but undermines my ability to seek utility. Torture — actual torture — has further negative consequences after the torture itself is over: in diminution of the victim's physical and psychological health, alteration of their values and other aspects of their psyche. To ask me to envision "50 years of torture" followed by no further negative consequence is to ask me to envision something so contrary to fact as to become morally misleading in and of itself.
So rather than "torture vs. dust specks", if we say "fifty years of constant, memoryless, unpleasant distraction vs. 3^^^3 dust specks", then I would certainly favor DISTRACTION over SPECKS.
I think concentrating specks in one person over the course of her life increases the magnitude of the harm non-linearly.
Yes, it does. But not to the ratio of 3^^^3 over 2.3 billion.
Sorry I'm late. Anyway, this seems a good place to post my two (not quite) corollaries to the original post:
Corollary 1: You can choose either a or b: a) All currently living humans, including you, will be tortured with superhuman proficiency for a billion years, with certainty. b) There is a 1-in-1,000,000 risk (otherwise nothing happens) that 3^^^3 animals get dust specks in their eyes. These animals have mental attributes that make them on average worth approximately 1/10^12 as much as a human. Further, the dust specks are so small that only those with especially sensitive eyes (about 1 in a million) can even notice them.
Not-a-corollary 2: Choices are as follows: a) nothing happens; b) 3^^^3 humans get tortured for 3^^^3 years, and there's a 1/3^^^3 chance a friendly AI is released into our universe and turns out to be able to travel to any number of other universes, persisting in the multiverse and creating Fun for eternity.
Alternative phrasing of the problem: do you prefer a certain chance of having a dust speck in your eye, or a one-in-3^^^3 chance of being tortured for 50 years?
When you consider that we take action to avoid minor discomforts, but don't always take action to avoid small risks of violence or rape etc., we make choices like that pretty often, with higher chances of bad things happening.
Wait. Which side of the rephrasing corresponds to which side of the original?
Certain chance of dust speck = 3^^^3 people get dust specks;
One-in-3^^^3 chance of torture = one person gets tortured for 50 years.
(Just consider a population of 3^^^3, and choose between them all getting dust specks, or one of them getting tortured. If I was in that population, I'd vote for the torture.)
This alternate phrasing (considering a population of 3^^^3 and choosing all dust specks vs one tortured) is actually quite a different problem. Since I care much more about my utility than the utility of a random person, then I feel a stronger pull towards giving everyone an extra dust speck as compared to the original phrasing.
I think a more accurate rephrasing would be: You will live 3^^^3 consecutive lives (via reincarnation of course). You can choose to get an extra dust speck in your eye in each lifetime, or be tortured in a single random lifetime.
I'm not sure how the population-based phrasing changes things. Note that I didn't specify whether the decider is part of that population.
And I don't think it even matters whether "I" am part of the population: if I prefer A to B for myself, I should also prefer A to B for others, regardless of how differently I weight my welfare vs. their welfare.
You're right, for some reason I thought the decider was part of the population.
I've also updated towards choosing torture if I were part of that population.
Perhaps the answer is that there are multiple hierarchies of [dis]utility, for instance: n dust specks (where n is less than enough to permanently damage the eye or equate to a minimal pain unit) is hierarchy 1, a slap in the face is hierarchy 3, torture is hierarchy 50 (these numbers are just an arbitrary example) and the [dis]utility at hierarchy x+1 is infinitely worse than the [dis]utility at hierarchy x. Adding dust specks to more people won't increase the hierarchy, but adding more dust specks to the same person eventually will.
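One way to make this hierarchy idea concrete is lexicographic comparison, under which no count of harms at level x ever outweighs a single harm at level x+1. A hypothetical Python sketch (the levels and counts are illustrative, like the numbers in the comment above):

```python
MAX_LEVEL = 50  # illustrative: the torture hierarchy from the example above

def disutility_key(harms):
    """harms: dict {hierarchy_level: count}. Returns a tuple ordered from
    the highest level down, so Python's built-in tuple comparison becomes
    lexicographic: any harm at a higher level dominates any count of harms
    at lower levels."""
    return tuple(harms.get(level, 0) for level in range(MAX_LEVEL, 0, -1))

specks = {1: 10**100}   # an enormous (stand-in) count of level-1 harms
torture = {50: 1}       # a single level-50 harm

assert disutility_key(torture) > disutility_key(specks)
```

Under this ordering, adding specks to more people never escalates past level 1, matching the comment; capturing the claim that enough specks to one person eventually climb the hierarchy would need an extra promotion rule.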
I just noticed this argument, I hope I'm not too late in expressing my view.
Premise: I want to live in the universe with the least amount of pain.
And now for some calculations. For the sake of quantification, let's assume that the single tortured person will receive 1 whiplash per second, continuously, for 50 years. Let's also assume that the pain of 1 whiplash is equivalent to 1 "pain unit". Thus, if I chose to torture that person, I would add 3,600 pain units per hour to the total amount of pain in the universe. In 1 day, the amount of pain in the universe would increase by 3600×24 = 86,400 pain units. In 1 year, by approximately 86,400×365 + 3600×6 = 31,557,600 pain units. In 50 years, approximately 31,557,600×50 = 1,577,880,000 pain units. And now, let's examine the specks. They were described as "barely enough to make you notice before you blink and wipe away the dust speck". In other words, while they can be felt, the sensation is insufficient to trigger the nociceptors. This means that each speck increases the level of pain in the universe by 0 pain units. So, if 3^^^3 people each received a dust speck in one of their eyes, the amount of pain in the universe would increase by exactly 0×3^^^3 = 0 pain units! This is why I would definitely choose SPECKS.
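Redoing that arithmetic in a few lines (one whiplash, i.e. one pain unit, per second; the extra six hours per year approximate leap days):

```python
# One whiplash (= 1 pain unit) per second, continuously.
per_hour = 3600
per_day = per_hour * 24                  # 86,400 pain units per day
per_year = per_day * 365 + per_hour * 6  # 31,557,600 (6 extra hours ≈ leap days)
fifty_years = per_year * 50              # 1,577,880,000 pain units

print(per_day, per_year, fifty_years)
```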
One way to think about this is to focus on how small one person is compared to 3^^^3 people. You're unlikely to notice the dust speck each person feels, but you're much, much less likely to notice the one person being tortured against a background of 3^^^3 people. You could spend a trillion years searching at a rate of one galaxy per Planck time and you still wouldn't have any realistic chance of finding the person being tortured.
Of course, you noticed the person being tortured because they were mentioned in only a few paragraphs of text. It makes them more noticeable. It doesn't make them more important. Every individual is important. All 3^^^3 of them.
My utility function says SPECKS. I thought it was because it was rounding the badness of a dust speck down to zero.
But if I modify the problem to be 3^^^3 specks split amongst a million people and delivered to their eyes at a rate of one per second for the rest of their lives, it says TORTURE.
If the badness of specks add up when applied to a single person, then a single dust speck must have non-zero badness. Obviously, there's a bug in my utility function.
If I drink 10 liters of water in an hour, I will die from water intoxication, which is bad. But this doesn't mean that drinking water is always bad - on the contrary, I think we'll agree that drinking some water every once in a while is good.
Utility functions don't have to be linear - or even monotonic - over repeated actions.
With that said, I agree with your conclusion that a single dust speck has non-zero (in particular, positive) badness.
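The water example can be made concrete with a toy utility curve that is non-monotonic in quantity; the numbers are entirely made up for illustration:

```python
# Toy illustration (assumed numbers): utility of drinking n liters of water
# in an hour is non-monotonic — some water is good, far too much is deadly.
def water_utility(liters):
    if liters <= 2:
        return liters * 10        # hydration helps
    if liters < 10:
        return 20 - (liters - 2)  # diminishing, then flat-to-negative returns
    return -1000                  # water intoxication

assert water_utility(1) > water_utility(0)    # some water beats none
assert water_utility(10) < water_utility(1)   # 10 liters is far worse than 1
```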
You know what? You are absolutely right.
If the background rate at which dust specks enter eyes is, say, once per day, then an additional dust speck is barely even noticeable. The 3^^^3 people probably wouldn't even be able to tell that they got an "extra" dust speck, even if they were keeping an Excel spreadsheet, making entries every time they got a dust speck in their eye, and running relevant statistics on it. I think I just switched back to SPECKS. If a person can't be sure that something even happened to them, my utility function is rounding it off to zero.
This may be already obvious to you, but such a utility function is incoherent (as made vivid by examples like the self-torturer).
I expect that more than one of my brain modules are trying to judge between incompatible conclusions, and selectively giving attention to the inputs of the problem.
My thinking was similar to yours -- it feels less like I'm applying scope insensitivity and more like I'm rounding the disutility of specks down due to their ubiquity, or their severity relative to torture, or the fact that the effects are so dispersed. If one situation goes unnoticed, lost in the background noise, while another irreparably damages someone's mind, then that should have some impact on the utility function. My intuition tells me that this justifies rounding the impact of a speck down to zero, that the difference is a difference of kind, not of degree, that I should treat these as fundamentally different. At the same time, like Vincent, I'm inclined to assign non-zero disutility value to a speck.
One brain, two modules, two incompatible judgements. I'm willing to entertain the possibility that this is a bug. But I'm not ready yet to declare one module the victor.