EDIT: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong. Simply that the case for existing people is stronger than for potential people.

Consider the following choices:

  • You must choose between torturing someone for 50 years, or torturing 3^^^3 people for a millisecond each (yes, it's a more symmetric variant on the dust-specks vs torture problem).
  • You must choose between creating someone who will be tortured for 50 years, or creating 3^^^3 people who will each be tortured for a millisecond.

Some people might feel that these two choices are the same. There are some key differences between them, however - and not only because the second choice seems more underspecified than the first. The difference I'll focus on here is the effect of aggregation - of facing the same choice again and again and again. And again...

There are roughly 1.6 billion seconds in 50 years (hence 1.6 trillion milliseconds in 50 years). Assume a fixed population of 3^^^3 people, and assume that you will face the first choice 1.6 trillion times (in each case, the person to be tortured is assigned randomly and independently). Then choosing "50 years" each time results in 1.6 trillion people getting tortured for 50 years (the chance of any person being chosen twice is of the order of (1.6 trillion)²/3^^^3 - closer to zero than most people can imagine). Choosing "a millisecond" each time results in 3^^^3 people, each getting tortured for (slightly more than) 50 years in total.
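
To make the bookkeeping explicit, here is a rough Python sketch (the variable names and rounding are mine, purely for illustration):

```python
# Back-of-the-envelope sketch of the repeated (existing-people) choice.
# 3^^^3 is far too large to represent, so we only track per-person
# torture and counts of distinct victims.

SECONDS_PER_YEAR = 365.25 * 24 * 3600         # ~3.16e7 seconds
print(50 * SECONDS_PER_YEAR)                  # ~1.58e9: the "1.6 billion seconds"

repeats = 1.6e12                              # rounds of the choice, ~ms in 50 years

# Option "50 years": a fresh victim is drawn each round; with 3^^^3
# candidates, collisions are vanishingly unlikely, so we get
# ~1.6 trillion distinct people tortured for 50 years each.
fifty_year_victims = repeats

# Option "a millisecond": each round tortures all 3^^^3 people for 1 ms,
# so every person accumulates `repeats` milliseconds in total.
years_each = repeats / 1000 / SECONDS_PER_YEAR   # ~50.7 years

print(f"'50 years' policy: ~{fifty_year_victims:.1e} people, 50 years each")
print(f"'millisecond' policy: 3^^^3 people, ~{years_each:.1f} years each")
```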

The choice there is clear: pick "50 years". Now, you could argue that your decision should change based on how often you (or people like you) expect to face the same choice, and on whether a fixed population of size 3^^^3 can be assumed - but there is a strong intuitive case to be made that the 50 years of torture is the way to go.

Now compare with the second choice. Choosing "50 years" 1.6 trillion times results in the creation of 1.6 trillion people who get tortured for 50 years. The "a millisecond" choice results in 1.6 trillion times 3^^^3 people being created, each tortured for a millisecond. Depending on what the rest of these people's lives are like, many people (including me) would feel the "a millisecond" option is much better.
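
The same sketch, adapted to the creation case (again, names are illustrative), shows why repetition adds nothing here: each round torments freshly created people, so no one ever accumulates more than a millisecond:

```python
# Bookkeeping for the creation (potential-people) choice: each round
# creates brand-new people, so the milliseconds never pile up on anyone.

repeats = 1.6e12

# Option "50 years": one new 50-year victim per round.
created_fifty_year_victims = repeats      # ~1.6e12 people, 50 years each

# Option "a millisecond": 3^^^3 new people per round, 1 ms each.
# Total people created: repeats * 3^^^3, but per-person torture is a
# constant 1 ms -- repetition doesn't aggregate it into anything worse.
ms_each = 1
print(f"per-person torture stays at {ms_each} ms after {repeats:.1e} rounds")
```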

As far as I can tell (please do post suggestions), there is no way of aggregating impacts on potential people you are creating, in the same way that you can aggregate impacts on existing people (of course, you can first create potential people, then add impacts to them - or add impacts that will affect them when they get created - but this isn't the same thing). Thus the two situations seem justifiably different, and there is no strong reason to assign the intuitions of the first case to the second.

Comments

I think they're very different, and I don't think this is due to aggregation.

In the first case, the major difference between the options is the suffering caused. In the second case, the major difference between the two options is the lives created. The suffering caused seems a very small side-effect in comparison.

In order to drive an intuition that the two cases are the same, you'd have to think that it was exactly neutral to create a person (before the millisecond of torture). This strikes me as highly implausible on any kind of consequentialist position. Even non-consequentialist positions have to have some view of what's appropriate here -- and while you can reasonably hold that creating the people is incommensurable in value with not doing so, it's similarly implausible to hold that the two are exactly as good as each other.

In the second case, the major difference between the two options is the lives created. The suffering caused seems a very small side-effect in comparison.

This is exactly what I was thinking. In the second case (in which people are created), are we to assume that 3^^^3 people are created either way? Also, will they exist only so long as they are being tortured, or will they exist across lifespans which include time during which they are not being tortured?

Thanks for strengthening my case ^_^

I was trying to demonstrate that the argument for total utilitarianism among existing populations was stronger than for potential populations. I could have mentioned this aspect - I vaguely referred to it in "and not only because the second choice seems more underspecified than the first." But I thought that would be more contentious and debatable, and so focused on the clearest distinction I saw.

Sorry, I don't see the force of your argument here. Because my intuition about the scenarios is dominated by the effect of creating people, which we certainly wouldn't expect to be zero for total utilitarianism, I can't see whether there should be any distinction for aggregation.

Would you be happy changing the second scenario so that we create 3^^^3 people in either case, as AABoyles suggested? If we did that total utilitarianism would say that we should treat it the same as the first case (but my intuition also says this). Or if not that, can you construct another example to factor out the life-creation aspect which is driving most of the replies?

? I don't see your point. You can use aggregation as an argument to be more utilitarian when not creating extra people. But you can't use it when creating lives, as you point out. So the argument is unavailable in this context.

That's the whole point of the post, which I seem to have failed to make clear. Aggregation arguments are available for already created lives, not for the new creation of them.

Yeah, it definitely seems like we're talking past each other here. I think I don't understand what you mean by "aggregation" -- I have a different impression from this comment than from the opening post. Perhaps you can clarify that?

Not sure if this is relevant: From a utilitarian point of view I think you can aggregate when creating lives, but of course the counterfactuals you'll use will change (as mostly what you're trying to work out is how good creating a life is).

Let me try and be careful and clear here.

What I meant by "aggregation" is that when we have to choose between X and Y once, we may have unclear intuitions, but if we have to choose between X and Y multiple times (given certain conditions), the choice is clear (and is Y, for example).

There are two intuitive examples of this. The first is when X causes a definite harm and Y causes a probability of harm, as in http://lesswrong.com/lw/1d5/expected_utility_without_the_independence_axiom/ . The second is the example I gave here, where X causes harm to a small group while Y causes smaller harm to a larger group.

Now, the "certain conditions" can be restrictive (here, the choice is applied repeatedly to a fixed population). I see these aggregation arguments as providing at least some intuitive weight to the idea that Y>X even in the one-shot case. However, as far as I can tell, this aggregation argument (or anything similar) is not available for creating populations. Or do you see an analogous idea?
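
Here is a toy simulation of the first kind of aggregation (the losses and probabilities are invented purely for illustration): X inflicts a sure loss, Y a small chance of a larger one, and repetition makes the comparison unambiguous:

```python
# Toy model: X costs 1 per round for sure; Y costs 100 with
# probability 1/200 (expected loss 0.5 per round). One-shot intuitions
# can differ, but over many rounds Y's total is reliably about half X's.

import random

random.seed(0)
rounds = 100_000

total_x = rounds * 1                                   # sure loss every round
total_y = sum(100 for _ in range(rounds) if random.random() < 1 / 200)

print(f"X total loss: {total_x}")                      # 100000
print(f"Y total loss: {total_y}")                      # ~50000, tightly concentrated
```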

I'm not sure how much it matters, but I'd guess that some people who choose "dust specks" over "torture" would change their position if it were "1ms of torture" rather than "a dust speck in the eye". (Or would, perhaps reasonably, refuse to answer on the grounds that "1ms of torture" is not meaningful given how the human nervous system works.)

Perhaps more to the point, if you're creating people then only a tiny fraction of the result will consist of the 1ms of torture they each get, and surely this is going to swamp the effects you're interested in; this isn't a problem of aggregation as such at all, is it?

this isn't a problem of aggregation as such at all, is it?

But it is. You can choose lower/higher numbers if you want. But the central point is that one problem aggregates (or can aggregate) in a particular way, while the other doesn't.

For the reasons that I, owencb, Unknowns, and peter_hurford have all given, there is a very big difference between the scenarios that, at least on the face of it, has nothing to do with aggregation.

Differences related specifically to aggregation may also be relevant, but I don't think this can be the right example to illustrate them, because what it mostly illustrates is that for most of us a whole human life has a lot more moral weight than one millisecond of torture (assuming, again, that "one millisecond of torture" actually denotes anything meaningful).

You might want to consider either finding a different example, or explaining why it's a good example after all in some more convincing way than just saying "But it is".

See my edit: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong.

A key difference is that when you're creating people, you're creating all their experiences in addition to the micro-torture, so if we expect those new lives to be good on balance, that's a net gain rather than a loss. So you'd prefer to create the 3^^^3 people with micro-torture.

However, when you're not creating anyone, you're just adding micro-torture, and that's definitely a net loss. It turns out that the sum of the micro-losses is larger than the single large loss of 50 years.

Thus, the asymmetry.

From my edit: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong. Simply that the case for existing people is stronger than for potential people.

This may be an unrelated question, but I've seen a lot of similar exercises here—is the general implication that:

1 person tortured for 1 year = 365 people tortured for 1 day = 8760 people tortured for 1 hour = 525600 people tortured for 1 minute?

Agreeing with shminux above, elaborating a little... there's a general agreement that marginal utility changes aren't linear with changes in the thing being measured. How much I value a hundred dollars depends on how much money I have; how much I antivalue a minute of torture depends on how long I've already been tortured.

So I expect that very few people here will claim that 1 person getting a million dollars has the same aggregate utility as a million people getting a dollar each, or that 1 person tortured for a year has the same aggregate antiutility as half a million people tortured for a minute.

One reason the Torture vs Dust Specks story uses such huge numbers is to avoid having to worry about that.
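
To make the nonlinearity concrete, here is a minimal sketch; the convex disutility function below is an arbitrary stand-in for illustration, not a claim about anyone's actual values:

```python
# If disutility grows super-linearly in duration (later minutes are
# worse than earlier ones), one person-year outweighs 525,600 separate
# person-minutes, even though the total duration is identical.

MINUTES_PER_YEAR = 525_600

def disutility(minutes: float) -> float:
    # Convex in duration: an arbitrary illustrative choice of exponent.
    return minutes ** 1.5

one_long = disutility(MINUTES_PER_YEAR)          # one person, one year
many_short = MINUTES_PER_YEAR * disutility(1)    # 525,600 people, 1 minute each

print(one_long > many_short)   # True: aggregate disutility isn't duration-linear
```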

It's not the exact numbers that matter, it's the (transitivity) assumption that they exist. Whether 1 person tortured for 1 year = 525600 people tortured for 1 minute or 10000000000 people tortured for 1 minute is immaterial to the conclusion.

The other comments are correct. This does not mean that the goodness or badness involved in creating people does not add together. It simply means that when you create someone, you need to take into account the fact that you have created a person, not only the torture, while in the first case you have to take into account only the torture, since the people are already there.

As I said in my edit: the purpose of this post is simply to show that there is a difference between certain reasoning for already existing and potential people. I don't argue that aggregation is the only difference, nor (in this post) that total utilitarianism for potential people is wrong. Simply that the case for existing people is stronger than for potential people.


As far as I can tell (please do post suggestions), there is no way of aggregating impacts on potential people you are creating.

Never-created humans have no experiences. Created humans have some experiences. Suffering is an experience. Created humans suffer more than never-created humans - infinitely more so. Whether the experiences of not-suffering outweigh the suffering in created humans is a different topic.

Dust speckers will say that you should choose differently when you know that you will have to choose 1.6 trillion times.

Yes, that's the point. But it doesn't seem there's any pressure to choose differently when creating people.