I still have trouble seeing where people are coming from on this. My moral judgment software does not accept 3^^^3 dust specks as an input. And I don't have instructions for dealing with such cases by assigning a dust speck a value of -1 util and torture some very low value that is still greater than -3^^^3 utils. I recognize my brain is just not equipped to deal with such numbers, and I am comfortable adjusting my empirical beliefs involving incomprehensibly large numbers in order to compensate for bias. But I am not comfortable adjusting my moral judgments in this way -- because while I have a model of an ideally rational agent, I do not have a model of an ideally moral agent, and I am deeply skeptical that one exists. In other words, I recognize my 'utility function' is buggy, but my 'utility function' says I should keep the bugs, since otherwise I might no longer act in the buggy way that constitutes ethical behavior.
The claim that the answer is "obvious" is troubling.
Here's a good way of looking at the problem.
Presumably, there's going to be some variation in how the people are feeling. Given 3^^^3 people, this means I can find someone at pretty much any given level of pleasure or pain.
Suppose I find someone, Bob, with the same baseline happiness as the girl we're suggesting torturing, Alice. I put a speck of dust in his eye. I then find someone with a nigh-infinitesimally worse baseline, Charlie, and do it again. I keep this up until I get to a guy, Zack, who, after getting the dust speck in his eye, is at the same level of happiness Alice would be at if she were tortured.
To put numbers on this:
Alice and Bob have a base pain of 0, Charlie has 1, Dianne has 2, ... Zack has 999,999,999,999. I then add one unit of pain to each person in the chain from Bob to Zack. Now Alice has 0, Bob has 1, Charlie has 2, ... Yaana has 999,999,999,999, Zack has 1,000,000,000,000. I could instead torture one person. Alice has 1,000,000,000,000, Bob has 0, Charlie has 1, ... Zack has 999,999,999,999. In other words, Bob has 0, Charlie has 1, Dianne has 2, ... Zack has 999,999,999,999, Alice has 1,000,000,000,000.
It's the same numbers both ways -- just different people. The only way you could decide which is better is if you care more or less than average about Alice.
Of course, this is just using 1,000,000,000,000 of 3^^^3 people. Add in another trillion, and now it's like torturing two people. Add in another trillion, and it's worse still. You get the idea.
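A minimal sketch of this bookkeeping in code, with a chain of 1,000 people standing in for the trillion (the smaller size is the only change; the structure of the argument is the same):

```python
# Toy check of the "same numbers, different people" claim above.
N = 1_000  # stand-in for 1,000,000,000,000

alice_base = 0
chain_base = list(range(N))  # Bob = 0, Charlie = 1, ..., Zack = N - 1

# SPECKS: Alice untouched, everyone in the chain gets one extra unit of pain.
specks = [alice_base] + [p + 1 for p in chain_base]

# TORTURE: Alice is pushed to pain N, the chain is untouched.
torture = [alice_base + N] + chain_base

# Both outcomes contain exactly the same multiset of pain levels;
# only the names attached to each level differ.
assert sorted(specks) == sorted(torture)
```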
Another way to reach the conclusion that dust specks are worse is by transitivity. Consider something that is slightly worse than getting a dust speck in your eye. For instance, maybe hearing the annoying sound of static on television is just a bit worse, as long as it's relatively brief and low volume. Now,
1a. Which is worse: everyone on Earth gets a dust speck in their eye, or one person hears a second of the annoying sound of static on a television with the volume set at a fairly low level? [Presumably you think that the dust specks are worse.]
1b. Whic...
Color me irrational, but in the problem as stated (a dust speck is a minor inconvenience, with zero chance of other consequences, unlike what some commenters suggest), there is no number of specks large enough to outweigh lasting torture (which ought to be properly defined, of course).
After digging through my inner utilities, the reason for my "obvious" choice is that everyone goes through minor annoyances all the time, and another speck of dust would be lost in the noise.
In a world where a speck of dust in the eye is a BIG DEAL, because the life...
The point of the article is to illustrate scope insensitivity in the human utility function. Turning the problem into a collective action problem or an acausal decision theory problem by adding additional details to the hypothetical is not a useful intuition pump since it changes the entire character of the question.
For example, consider the following choice: You can give a gram of chocolate to 3^^^3 children who have never had chocolate before. Or you can torture someone for 50 years.
Easy. Everyone should have the same answer.
But wait! You forgot to consider that trillions of other people were being given the same choice! Now 3^^^3 children have diabetes.
This is exactly what you're doing with your intuition pump, except that the value of eating additional chocolate inverts at a certain point, whereas dust specks in your eye get exponentially worse at a certain point. In both cases the utility function is not linear and thus distorts the problem.
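To make the non-linearity point concrete, here is a toy model (the threshold and slope are my own arbitrary numbers, not anything from the post): per-child utility of chocolate rises for a while and then inverts, so the same per-donor act flips sign once trillions of donors are stacked on top of each other.

```python
def chocolate_utility(grams: float) -> float:
    """Toy per-child utility: good up to 50 g, each extra gram then costs 0.5."""
    return min(grams, 50.0) - 0.5 * max(grams - 50.0, 0.0)

print(chocolate_utility(1))     # one donor: +1 per child, clearly good
print(chocolate_utility(1e12))  # a trillion donors: hugely negative per child
```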
Sorry I'm late; anyway, this seems a good place to post my two (not quite) corollaries to the original post:
Corollary 1: You can choose either a or b: a) All currently alive humans, including you, will be tortured with superhuman proficiency for a billion years, with certainty. b) There is a 1-in-1,000,000 risk (otherwise nothing happens) that 3^^^3 animals get dust specks in their eyes. These animals have mental attributes that make them on average worth approximately 1/10^12 as much as a human. Further, the dust specks are so small only those with especia...
Some considerations:
A dust speck takes a second to remove from your eye. But it is sufficiently painful, unpleasant, or distracting that you will take that second to remove it, forsaking all other actions or thoughts for that one second. If a typical human today can expect to live for 75 years, then one second is a one-in-2.3-billion part of a life. And that part of that life is indeed taken away from that person, since they surely are not pursuing any other end for the second it takes to remove that dust speck. If all moments of life were co...
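A quick check of the one-second-out-of-a-lifetime figure, under the comment's own assumptions (75-year life, one second fully consumed per speck):

```python
# Seconds in a 75-year life; one speck eats one of them.
seconds_per_life = 75 * 365.25 * 24 * 60 * 60
print(f"{seconds_per_life:,.0f} seconds per life")        # ~2,366,820,000
print(f"one speck ~ 1/{seconds_per_life:.2e} of a life")  # roughly the one-in-2.3-billion figure above
```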
Alternative phrasing of the problem: do you prefer the certainty of a dust speck in your eye, or a one-in-3^^^3 chance of being tortured for 50 years?
When you consider that we take action to avoid minor discomforts, but that we don't always take action to avoid small risks of violence or rape etc., we make choices like that pretty often, and with higher chances of the bad things happening.
An interesting related question would be: what would people in a big population Q choose if given the alternatives of extreme pain with probability p = 1/Q or tiny pain with probability p = 1? In the framework of expected utility theory you'd have to include not only the sizes of the pains and the size of the population but also the risk aversion of the person asked. So it's not only about adding up small utilities.
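A minimal sketch of that comparison, with toy numbers of my own for the pain sizes, the population, and the risk-aversion exponent (none of these come from the original post):

```python
Q = 10**9             # population size
tiny_pain = 1.0       # certain under option B
extreme_pain = 10**7  # one-in-Q chance under option A
gamma = 1.0           # 1.0 = risk neutral; > 1 weights large pains extra

def expected_disutility(pain: float, probability: float, gamma: float) -> float:
    """Probability-weighted disutility, with the pain raised to gamma."""
    return probability * pain ** gamma

print(expected_disutility(extreme_pain, 1 / Q, gamma))  # option A: 0.01 at gamma = 1
print(expected_disutility(tiny_pain, 1.0, gamma))       # option B: 1.0 at gamma = 1
# At gamma = 1 this is just comparing expected pains; at gamma around 1.3 the
# rare extreme outcome already outweighs the certain tiny one, which is the
# risk-aversion term the comment says has to be included.
```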
Perhaps the answer is that there are multiple hierarchies of [dis]utility. For instance: n dust specks (where n is less than enough to permanently damage the eye or equate to a minimal pain unit) is hierarchy 1, a slap in the face is hierarchy 3, torture is hierarchy 50 (these numbers are just an arbitrary example), and the [dis]utility at hierarchy x+1 is infinitely worse than the [dis]utility at hierarchy x. Adding dust specks to more people won't increase the hierarchy, but adding more dust specks to the same person eventually will.
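One way to formalize that idea is a lexicographic comparison: represent a harm as a (hierarchy, amount) pair and compare the hierarchy first, so no amount at a lower hierarchy ever outweighs anything at a higher one. A minimal sketch, reusing the comment's arbitrary hierarchy numbers:

```python
from typing import NamedTuple

class Harm(NamedTuple):
    hierarchy: int   # 1 = dust specks, 3 = slap, 50 = torture (arbitrary, as above)
    amount: float    # how much harm at that hierarchy

def worse(a: Harm, b: Harm) -> bool:
    """True if a is worse than b; hierarchy dominates, amount only breaks ties."""
    return (a.hierarchy, a.amount) > (b.hierarchy, b.amount)

specks = Harm(hierarchy=1, amount=3e3)   # any finite stand-in for 3^^^3 specks
torture = Harm(hierarchy=50, amount=1)
print(worse(torture, specks))            # True at any speck count
```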
I just noticed this argument, I hope I'm not too late in expressing my view.
Premise: I want to live in the universe with the least amount of pain.
And now for some calculations. For the sake of quantification, let's assume that the single tortured person will receive 1 whiplash per second, continuously, for 50 years. Let's also assume that the pain of 1 whiplash is equivalent to 1 "pain unit". Thus, if I chose to torture that person, I would add 3,600 "pain units" per hour to the total amount of pain in the universe. In 1 day, the am...
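The comment is cut off, but the arithmetic it sets up is easy to carry forward under the stated assumptions (1 whiplash = 1 pain unit, once per second for 50 years); the figures below only extend that arithmetic, not the commenter's conclusion:

```python
per_hour = 3600
per_day = per_hour * 24          # 86,400 pain units per day
per_year = per_day * 365.25      # ~31.6 million per year
fifty_years = per_year * 50      # ~1.58 billion pain units in total
print(f"{per_day:,} per day; {fifty_years:,.0f} over 50 years")
# How a single dust speck converts into these units is the open question
# the rest of the thread is arguing about.
```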
One way to think about this is to focus on how small one person is compared to 3^^^3 people. You're unlikely to notice the dust speck each person feels, but you're much, much less likely to notice the one person being tortured against a background of 3^^^3 people. You could spend a trillion years searching at a rate of one galaxy per Planck time and still have no realistic chance of finding the person being tortured.
Of course, you noticed the person being tortured because they were mentioned in only a few paragraphs of text. It makes them more noticeable. It doesn't make them more important. Every individual is important. All 3^^^3 of them.
If Omega tells you that he will give either 1¢ each to 3^^^3 random people or $100,000,000,000.00 to the SIAI, and that you get to choose which course of action he should take, what would you do? That's a giant amount of distributed utility vs a (relatively) modest amount of concentrated utility.
I suspect that part of the exercise is not to outsmart yourself.
My utility function says SPECKS. I thought it was because it was rounding the badness of a dust speck down to zero.
But if I modify the problem to be 3^^^3 specks split amongst a million people and delivered to their eyes at a rate of one per second for the rest of their lives, it says TORTURE.
If the badness of specks add up when applied to a single person, then a single dust speck must have non-zero badness. Obviously, there's a bug in my utility function.
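A minimal sketch of that bug, with placeholder numbers of my own: if a single speck really had zero badness, the concentrated-speck variant would also sum to zero, contradicting the TORTURE answer; so the per-speck badness must be non-zero, and then the distributed case stops summing to zero as well.

```python
speck_badness = 0.0       # the "round it down to zero" intuition
torture_badness = 1e9     # arbitrary placeholder

def total_badness(per_speck: float, count: float) -> float:
    return per_speck * count

# 3^^^3 specks delivered to one person, approximated by a huge finite count:
concentrated = total_badness(speck_badness, 1e100)
print(concentrated < torture_badness)  # True, but only because 0 * anything = 0
```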
Today's post, Torture vs. Dust Specks, was originally published on 30 October 2007. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Motivated Stopping and Motivated Continuation, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.