I was reading the original comment thread on Torture vs. Dust Specks, and noticed Eliezer saying he wouldn't pay a penny to avoid a single dust speck - which confused me, until I saw that the original specification of the problem says the dust speck "floated into your eye and irritated it just a little, for a fraction of a second, barely enough to make you notice before you blink and wipe away the dust speck." I guess I blanked that out when I first read the post. My default visualization when I imagine "dust speck in my eye" is something substantially more annoying than that.

This leads me to wonder if people would have responded differently if instead of going out of his way to make the alternative to torture involve something as trivial-sounding as possible, Eliezer had gone for some merely minor mishap - say, getting shampoo in your eye. After all, lots of us have gotten shampoo in our eyes at least once (maybe when we were kids), and it's easy to imagine paying $2.99 for a bottle of won't-irritate-your-eyes shampoo over an otherwise identical $2.98 bottle that will hurt if you get it in your eyes (or your kid's eyes, if you're totally confident that, as an adult, you'll be able to keep shampoo out of your eyes).

From there, it's easy to argue that if you're honest with yourself you wouldn't pay $(3^^^3/100) - one cent per person, times 3^^^3 people - to save one person from being tortured for 50 years, so you should choose one person getting tortured for 50 years over 3^^^3 people getting shampoo (the stingy kind) in their eyes. I suppose, however, that might not change your answer to torture vs. specks if you think there's a qualitative difference between the speck (as originally specified by Eliezer) and getting shampoo in your eye.

43 comments

I really don't think we should talk about or link to that article anymore. Patching it won't change the fact that we've anchored the discussion to dust specks. If your goal is to persuade more people to shut up and multiply (and thereby do a little to decrease the actual amount of expected torture and suffering in the real world), surely there are ways to do that that aren't quite so saturated with mind-kill. Just write a brand new post, modeled after Eliezer's — 'Cancer or hiccups?', say — that avoids...

  1. ... strange and poorly understood examples. (I'm still not sure I get what this tiny 'dust speck' experience is supposed to feel like.)
  2. ... examples that emphasize agency (and therefore encourage deontological responses), as opposed to worldly misfortunes.
  3. ... examples that are not just emotionally volatile horrors, but are actively debated hot-button political issues.
  4. ... numbers that are difficult not just for laypeople to intuitively grasp, but even to notationally unpack.

Such elements aren't even useful for neutral illustration. This is among the most important rhetorical battles the LW and EA communities have ever addressed, and among the most challenging. Why go out of our way to handicap ourselves? Just use an unrelated example.

I disagree. I think Chris made the example more clear (using shampoo) and the argument more convincing for me ("you wouldn't pay $(3^^^3/100).")

"you wouldn't pay $(3^^^3/100)."

I think that particular number makes the argument harder to understand, since I'm not sure what it would even mean to pay such an amount. Should I postulate that we have colonized the entire observable universe and I'm the supreme ruler of the universe, thereby allowing for the possibility of such a sum even existing, let alone me having the opportunity to pay it?

Should I postulate that we have colonized the entire observable universe and I'm the supreme ruler of the universe, thereby allowing for the possibility of such a sum even existing...

Not to beat a dead horse, but $(3^^^3/100) is way way way way way way way way way more money than you could spend if you merely colonized the universe with trillionaires the size of quarks.
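
To make the notation concrete, here is a toy recursive definition of Knuth's up-arrows (the function name is just for illustration); the two-arrow case is already in the trillions, and the three-arrow case is beyond evaluation entirely:

def up_arrow(a, n, b):
    # Knuth's up-arrow notation: a followed by n up arrows, then b.
    # One arrow is ordinary exponentiation; each extra arrow iterates the level below.
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))   # 3^3 = 27
print(up_arrow(3, 2, 3))   # 3^^3 = 3^(3^3) = 7,625,597,484,987
# up_arrow(3, 3, 3) is 3^^^3: a power tower of 3s about 7.6 trillion levels high,
# far past anything this function (or the universe) could evaluate - let alone
# $(3^^^3/100) of anything.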

I thought of it like this: Well, I wouldn't even pay one trillion dollars, so surely I wouldn't pay $(3^^^3/100). Given that paying $(3^^^3/100) is the logical consequence of choosing "torture" in "torture vs. shampoo", I clearly must not pick "torture".

Yeah, I did arrive at something similar after a moment's thought, but an example that the reader needs to explicitly transform into a different one ("okay, that number makes no sense, but the same logic applies to smaller numbers that do make sense") isn't a very good example.

I think I tend to do things like that automatically, so it wasn't a problem for me. But I can see why that would be a problem for other people who think differently, so I agree with you.

What number would you recommend?

How do you disagree? I agree on both of those counts.

I'm suggesting 'Shampoo in eyes vs. being slowly devoured alive by ants' would be even more convincing to most people, especially if you used a dollar figure whose notation most people understand.

On second thought, I don't disagree.

These are good points, though as a counter-point to your original post, torture vs. specks is one of the things that comes up on Google when you search for Eliezer's name, so we may be stuck with it.

Going with the thought anyway... aside from getting people to not spend a million dollars to save one life (i.e. making sure their million dollars saves at least a few hundred lives), where are other good problem areas to focus on for the sake of practical improvements in people's ability to shut up and multiply? "Yes, it really is OK to accept an increased risk of terrorist attack for the sake of making air travel more convenient"?

Actually, given how crazy the US went after 9/11, I'm not sure that's the best example. A little inconvenience in our air travel is a reasonable price to pay for avoiding another Iraq war. This doesn't totally ruin the example, because there's some level of inconvenience we would not accept in order to avoid another Iraq, but that level is high enough to not make the example work so well.

Hmmm... better examples?

This reminds me of the Piraha ( http://en.wikipedia.org/wiki/Pirah%C3%A3_people ), who are so well-adapted to life in the jungle (not only by their knowledge and experience but by their language and culture) that they can't even grasp the concept of a nuisance due to e.g. bugs. For them, bugs are such a normal part of life that they can't even express this. Note that their language lacks abstract concepts but features rich ways to express concrete experiences and desires. So for them the dust speck/shampoo/bug bite cost is not only zero but even unknowable. And on the other hand, the future is of no consequence for them either.

Why do I mention this? Because in a way we (or most of us) are in their position with respect to our knowledge relative to a superintelligence or a very distant future. And because it may be that the cost function is not linear over such wide ranges. Linear cost assumes an unbounded universe - and 3^^^3 is far beyond the limits of our universe.


Not to beat on a dead horse, but these facts about the Piraha are not very well established. I'd link to my other comment regarding the Piraha, but currently I'm on my phone. In any case, note that all observation preceding the creation of a grade school for the Piraha was done by the same anthropologist.

My point rests less on the semantic structure of the Piraha language (which may or may not be as Everett describes) than on the overall picture of that tribe. I chose it because it is well-known and extreme.

Surely you will not dispute the general statements about that tribe (which has been studied and documented by other teams too). I don't think those social, cultural, and craft-related traits could be hidden from Everett for the years and years he lived there with his family. He might exaggerate them, though.


That's a fair point. You're right.

I actually think that there is a fundamental problem with the initial question that makes the answer different from the one Eliezer thinks it is.

If "shut up and multiply' was always the right answer, if you could break everything down to utilitons and go with whatever provides the most utility for the most people, then the logical thing to do would be to create a trillion intelligent beings in the solar system who do nothing but enjoy pleasure all the time. But he's made it clear that he doesn't believe that, that he believes that humans have a wide variety of values that are irreducible, that are fundamentally different, and that if you give up some of those values to satisfy others, you lose something important, and we all end up worse off. When it comes to fundamentally different values, like "freedom", that you can't just say "X freedom is worth Y pleasure, so if you can create Y+1 pleasure you should give up on the concept of freedom", not for any value of X and Y.

In this case, I would argue that the value "nobody should suffer from torture if it can be avoided" is a fundamentally different value from "it's nice to avoid minor annoyances when possible", and that you can't simply break them down into utilons and "do the math" like he suggests.

You're conflating "utility" and "pleasure". The aggregation function to use for multiple people is a question of some concern, but I think most of us would agree that maximizing utility is more complex than just wireheading.

Yes, that was my exact point.

Is there a positive utility value to pleasure? Obviously there is. But can you just "shut up and multiply" that, figure out that you get more utility from wireheading than from anything else, and then decide that that's the right choice? Obviously not, because human value is complex, and you can't just give up one value to create utility somewhere else, not even for arbitrarily large amounts of pleasure.

I am arguing that the same is true for things with negative utility. Is there a negative utility value to a minor annoyance? Sure. But that doesn't mean that we should give up other values, like "torturing people is wrong" in order to avoid annoyances, not even for arbitrarily large values of annoyances.

It seems to me we make value tradeoffs all the time, based on how much we value different things at that time.

I value living. I also value happiness. Given a choice between a shorter, happier life and a longer, less-happy life, I would choose one or the other. Either way, I'm trading one value off for another. You seem to be saying that I can't do that. I don't see how that claim even makes sense. If I can't choose one or the other, what can I do?

Now, if you want to say that you happen to value avoiding torture in such a way that you won't trade any amount of minor-annoyance-avoiding for any amount of torture... that your preferences are lexicographic in this regard... well, then yes, your values are significantly different from what Eliezer is talking about, and you (specifically you) can't "do the math".

But it sounds like you want to say that everyone is like that. What makes you think that?

I think that most people's values are set up like that, which is why most people have such a strong negative instinctive reaction to the whole "dust motes vs torture" question. Perhaps not, though; I obviously can't speak for everyone.

If you had to choose between being tortured horribly for the next 50 years, or having a minor annoyance happen to you once a year for the next (insert arbitrarily large number here) years, which would you pick?

Given that choice, I'd pick having a minor annoyance happen to me once a year for the next (insert arbitrarily large number here) years.

Are you so sure about that? If the number is large enough, it's easily conceivable that at some point in your effectively infinite lifespan, you will come across a situation where that minor annoyance changes what might have been a perfectly good 50 years into 50 years of hell. So either you've wound up with mild scale insensitivity, a significant discount rate, or a rejection of additivity of negends.

I agree with you entirely.

I'm not saying I'd be better off having picked it. For the vast majority of numbers, I absolutely would not be. [EDIT: Well, assuming no knock-on effects from the torture, which EY's initial formulation assumed.]

I'm saying it's probably what I would, in fact, pick, if I were somehow in the epistemic state of being offered that choice. Yes, scale insensitivity and discounting play a role here, as does my confidence that I'm actually being offered an arbitrarily large number of annual minor annoyances (and the associated years of life).

Of course, it depends somewhat on the framing of the question. For example, if you tortured me for half an hour and said "OK, I can either keep doing that for the next 50 years, or I can stop doing that and annually annoy you mildly for the rest of your immortal life," I would definitely choose the latter. (Really, I'd probably agree to anything that included stopping the torture and didn't violate some sacred value of mine, and quite likely I'd agree to most things that did violate my sacred values. Pain is like that.)

Yosarian2 keeps framing the question in terms of "what would you choose?" rather than "what would leave you better off?", and then responding to selections of torture (which make sense in EY's framing) with incredulity that anyone would actually choose torture.

At some point, fighting over the framing of the problem isn't worth my time: if they insist on asking a (relatively trivial) epistemic question about my choices, and insist on ignoring the (more interesting) question of what would leave me better off, at some point I just decide to answer the question they asked and be done with it.

This is similar to my response to many trolley questions: faced with that choice, what I would actually do is probably hesitate ineffectually, allowing the 5 people to die. But the more interesting question is what I believe I ought to do.

Well... ideally, what you choose is what would leave you better off, and is chosen with this in mind. What you do ought to be what you ought to do, and what you ought to do ought to be what you do. Anything out of line with this either damages you unnecessarily or acknowledges that the you that might have desired the ought-choice is dead.

Yes, ideally, I would choose what would leave me better off, and what I do ought to be what I ought to do, and what I ought to do ought to be what I do. Also, I ought to do what I ought to do, and various other formulations of this thought. And yes, not doing what I ought to do has negative consequences relative to doing what I ought to do, which is precisely what makes what I ought to do what I ought to do in the first place.

the you that might have desired the ought-choice is dead.

This, on the other hand, makes no sense to me at all.

Basically: if you're not doing what you think you should be doing, you're either screwing yourself or you're not who you think you are.

Ah, I see. I think. I wouldn't call that being dead, personally, but I can see why you do. I think.

I'm also the sort of person who believes he has been dead for billions of years. Basically - if someone exists at some point and does not at another, they have died. We change over time; we throw off a chain of subtly different dead selves.

Right, that's what I figured you were using "dead" to mean.

(nods) Right. I think most people would.

What that says to me is that most of us would not trade any amount of "occasional minor annoyances" for any amount of "horrific torture". The two things might both be "bad", but that doesn't mean that X annoyances=Y torture, any more than (for positive values) X pleasure=Y freedom. (The initial problem helped disguise that by implying that the torture was going to happen to someone else, but in utilitarian terms that shouldn't matter.)

When you can point to situations where "shutting up and multiplying" doesn't seem to fit our actual values, then perhaps the simplistic kind of utilitarian moral system that allows you to do that kind of math is simply not a good description of our actual value system, at least not in all cases.

So, consider the general case of an ordered pair (X,Y) such that given a choice between X right now, and Y once a year for the next (insert arbitrarily large number here) years, most people would probably choose Y.

Where (X="50 years of horrific torture", Y= "a minor annoyance"), your reasoning leads you to conclude that most of us would not accept any amount of X in exchange for any amount of Y.

Where (X="spending fifty thousand dollars", Y="spending a dollar"), would you similarly conclude that most of us would not accept any amount of fifty thousand dollars in exchange for any amount of dollars? I hope not, because that's clearly false.

I conclude that your reasoning is not quite right.

All of that said, I agree that a simple utilitarian moral system doesn't properly describe our actual value system in all cases.

The difference is that "dollars" and "fifty thousand dollars" are obviously equivalent and interchangeable units. 50,000 dollars equals one "fifty thousand dollars", obviously.

I don't think that any amount of "occasional minor annoyances" are equivalent or interchangeable with "a long period of horrific torture". They aren't even in the same category, IMHO.

So, consider the general case of an ordered pair (X,Y) such that given a choice between X right now, and Y once a year for the next (insert arbitrarily large number here) years, most people would probably choose Y.

I think the time order here (torture now or annoyances later) may be another factor that is distracting us from the point, so let's drop that.

Let's say that you know, for a fact, that you will live for the next billion years. Now say that you have to choose between either having a very minor annoyance once a year for the next billion years, or instead having 50 years of horrific torture happen to you at some random point in the future within the next billion years. Personally, I would still choose the annoyances, rather than put my future-self through that horrific torture.

The difference is that "dollars" and "fifty thousand dollars" are obviously equivalent and interchangeable units.

Yes, I agree that the equivalence is far more obvious in this example.

I don't think that any amount of "occasional minor annoyances" are equivalent or interchangeable with "a long period of horrific torture".

You are welcome to believe that. It doesn't follow from the premise you seemed earlier to be concluding it from, though. If you're simply asserting it, that's fine.

And, yes, given the choice you describe, I would probably make the same choice.

Really? Would you pay $10 billion to save one person from being tortured, assuming a remotely normal range of other things to spend the money on?

We're comparing two values specifically here; torture vs minor annoyances. In that case, then yes, I would rather spend X money preventing someone from being tortured than spend the same X money protecting people from getting dust in their eye. The values, IMHO, are on fundamentally different levels.

Now, if it was "torture vs. saving lives", or something like that, then maybe you could do the math.

It's basically the same argument as the classic "how many people have to enjoy watching a gladiatorial combat on TV in order to morally justify forcing two people to fight to the death?" I don't think you can do that math, because the values (enjoying watching a TV show vs. human life) are on fundamentally different levels.

If "shut up and multiply' was always the right answer, if you could break everything down to utilitons and go with whatever provides the most utility for the most people, then the logical thing to do would be to create a trillion intelligent beings in the solar system who do nothing but enjoy pleasure all the time

Yes, that's the right answer if you regard the notion of one intelligent being who does nothing but wallow in pleasure as a net gain, and you agree that there aren't diminishing returns on creating such beings, and this doesn't divert resources from any more important values.

But if you find the notion of even one such being repulsive or neutral, then you're "shutting up and multiplying" over a non-positive number. In decision theory, utility quantifies how much you personally prefer certain scenarios, not pleasure.

I think we're saying the same thing. Pleasure undoubtedly has some positive utility (or else we wouldn't seek it out). But, like you said, you are diverting resources from "more important values". That was, in a sense, the whole point of Eliezer's story about the "Superhappies".

So, by the same token, if we think "nobody should be tortured" is a more important value than "avoiding small amounts of annoyance", then we should not sacrifice one for the other, not even for very large values of "avoiding annoyances".

The only difference is that it's more obvious when we're talking about positive values (like pleasure) than when we're talking about negative values (like avoiding someone being tortured).

The thing is, you should be able to trade off between your values, and that requires a currency of some sort, even if it's only implicit.


On rereview of torture vs. specks/shampoo, I think I may see something noteworthy, which is that there are multiple separate problems here and I haven't been considering all of them.

Problem 1: Given a large enough stake on the other end, a seemingly sacred value (not torturing people) isn't sacred.

Example: the last time I did the math on this, I think I calculated that the trade point was roughly around the quintillions range. That was roughly the point where I thought it might be better to just torture the one person than have quintillions of people suffer that inconvenience, because the inconvenience, multiplied by 1 quintillion (10^18), was approximately as bad as the torture, when I tried to measure both. (The specific number isn't as critical, just the rough order of magnitude, and I want to note that was assuming near 100% certainty that the threat was real.)
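
A back-of-the-envelope version of that kind of arithmetic, with entirely made-up weights, just to show the shape of the calculation:

seconds_of_torture = 50 * 365.25 * 24 * 3600   # about 1.6e9 seconds in 50 years
torture_weight = 1_000_000                     # assume one second of torture is a million
                                               # times worse than a one-second eye sting
sting_weight = 1                               # one brief shampoo sting

break_even_people = seconds_of_torture * torture_weight / sting_weight
print(f"{break_even_people:.1e}")              # ~1.6e15 with these made-up weights;
                                               # different weights move the crossover,
                                               # which is why only the order of magnitude matters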

I think Problem 1 is the standard way to evaluate this. But there is also this:

Problem 2: Given a large enough stake on the other end, you need to reevaluate what is going on because you can't handle threats of that caliber as threats.

Ergo, if you try to actually establish what 3^^^3 trivial inconveniences even means, you'll generally fail miserably. You might end up saying: "You could turn all possible histories of all possible universal branches into shampoo from the beginning of time until the projected end of time, and you STILL wouldn't have enough shampoo to actually do that, so what does that threat even mean?"

So to evaluate that, you need to temporarily make a variety of changes to how you process things, just to resolve how a threat of that level even makes sense to determine if you comply.

Problem 2 comes up sometimes, but there is also:

Problem 3: Given a sufficiently large threat, the threat itself is actually an attack, not just a threat.

For instance, someone could run the following code to print a threat to a terminal:

10: Print: "If you don't press this button to torture this person for 50 years, I'm going to give the following number of people a trivial inconvenience:"

20: Print: "3, large number of Knuth's up arrows 3, where the large number of Knuth's up arrow can be defined as:"

30: Wait 1 second.

40: If person hasn't become unable to push button and the person hasn't pushed the button, Goto 20

50: Else: Do stuff.

And some particularly simple threat evaluation code will see that threat and just hang, waiting for the threat to finish printing before deciding whether or not to press the button.
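
A minimal runnable sketch of that situation (the names and the thousand-line cutoff are placeholders, not anyone's actual code): the naive evaluator below hangs exactly as described, while the bounded one treats a threat it can never finish reading as Problem 3, an attack in itself.

def threat_lines():
    # The pseudocode above as a generator: after the opening line, the "Goto 20"
    # loop keeps "defining" the number of up arrows forever (the one-second wait
    # between lines is omitted here).
    yield ("If you don't press this button to torture this person for 50 years, "
           "I'm going to give the following number of people a trivial inconvenience:")
    while True:
        yield ("3, <large number of Knuth's up arrows> 3, "
               "where the large number of up arrows can be defined as:")

def naive_evaluate(lines):
    # Insists on reading the whole threat before deciding. On the stream above,
    # this never returns; it just hangs, waiting for the threat to finish.
    full_text = "\n".join(lines)
    return "decide using " + full_text

def bounded_evaluate(lines, max_lines=1000):
    # Gives up after max_lines and treats a threat it cannot finish reading
    # as malicious/broken input rather than as information about the world.
    for i, _ in enumerate(lines):
        if i >= max_lines:
            return "treat the threat itself as an attack"
    return "evaluate the threat normally"

print(bounded_evaluate(threat_lines()))   # returns promptly; naive_evaluate(threat_lines()) would not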

So we have:

A: Threats too small to cause you to act. (Example, 1 person gets shampoo in their eyes)

B: Threats large enough to cause you to act. (Example, a plausible chance that 10^20 people get shampoo in their eyes)

C: Threats so large they do not appear to be possible based on how you understand reality (Example, 3^^^3 people get shampoo in their eyes), so you have to potentially reevaluate everything to process the threat.

D: Threats so large that the threat itself should actually be treated as malicious/broken, because you will never finish resolving the threat size without just treating it as infinite.

So in addition to considering whether a threat is Threat A or B (Problem 1), it seems like I would also need to consider whether it is C or D (Problems 2 and 3).

Is that accurate?

Indeed, I make this same point every time I notice SPECKS being discussed: the example of a trivial pain is really badly chosen.

I think Eliezer was going for something even more trivial than getting shampoo in your eyes. I think I would pay 1 cent to avoid getting shampoo in my eyes.

Yes, noticing that was explicitly my reason for making this post.

One thing I would note is that one motivation I have for not spending that much money is that I could spend it on, oh, donating to AMF or MIRI. That option wasn't in the original, though.