One of our most controversial posts ever was "Torture vs. Dust Specks". Though I can't seem to find the reference, one of the more interesting uses of this dilemma was by a professor whose student said "I'm a utilitarian consequentialist", and the professor said "No you're not" and told them about SPECKS vs. TORTURE, and then the student - to the professor's surprise - chose TORTURE. (Yay student!)
In the spirit of always making these things worse, let me offer a dilemma that might have been more likely to unconvince the student - at least, as a consequentialist, I find the inevitable conclusion much harder to swallow.
I'll start by briefly introducing Parfit's Repugnant Conclusion, sort of a little brother to the main dilemma. Parfit starts with a world full of a million happy people - people with plenty of resources apiece. Next, Parfit says, let's introduce one more person who leads a life barely worth living - but since their life is worth living, adding this person must be a good thing. Now we redistribute the world's resources, making it fairer, which is also a good thing. Then we introduce another person, and another, until finally we've gone to a billion people whose lives are barely at subsistence level. And since (Parfit says) it's obviously better to have a million happy people than a billion people at subsistence level, we've gone in a circle and revealed inconsistent preferences.
My own analysis of the Repugnant Conclusion is that its apparent force comes from equivocating between senses of barely worth living. In order to voluntarily create a new person, what we need is a life that is worth celebrating or worth birthing, one that contains more good than ill and more happiness than sorrow - otherwise we should reject the step where we choose to birth that person. Once someone is alive, on the other hand, we're obliged to take care of them in a way that we wouldn't be obliged to create them in the first place - and they may choose not to commit suicide, even if their life contains more sorrow than happiness. If we would be saddened to hear the news that such a person existed, we shouldn't kill them, but we should not voluntarily create such a person in an otherwise happy world. So each time we voluntarily add another person to Parfit's world, we have a little celebration and say with honest joy "Whoopee!", not, "Damn, now it's too late to uncreate them."
And then the rest of the Repugnant Conclusion - that it's better to have a billion lives slightly worth celebrating, than a million lives very worth celebrating - is just "repugnant" because of standard scope insensitivity. The brain fails to multiply a billion small birth celebrations to end up with a larger total celebration of life than a million big celebrations. Alternatively, average utilitarians - I suspect I am one - may just reject the very first step, in which the average quality of life goes down.
But now we introduce the Repugnant Conclusion's big sister, the Lifespan Dilemma, which - at least in my own opinion - seems much worse.
To start with, suppose you have a 20% chance of dying in an hour, and an 80% chance of living for 10^10,000,000,000 years -
Now I know what you're thinking, of course. You're thinking, "Well, 10^(10^10) years may sound like a long time, unimaginably vaster than the 10^10 years the universe has lasted so far, but it isn't much, really. I mean, most finite numbers are very much larger than that. The realms of math are infinite, the realms of novelty and knowledge are infinite, and Fun Theory argues that we'll never run out of fun. If I live for 10^10,000,000,000 years and then die, then when I draw my last metaphorical breath - not that I'd still have anything like a human body after that amount of time, of course - I'll go out raging against the night, for a life so short compared to all the experiences I wish I could have had. You can't compare that to real immortality. As Greg Egan put it, immortality isn't living for a very long time and then dying. Immortality is just not dying, ever."
Well, I can't offer you real immortality - not in this dilemma, anyway. However, on behalf of my patron, Omega, who I believe is sometimes also known as Nyarlathotep, I'd like to make you a little offer.
If you pay me just one penny, I'll replace your 80% chance of living for 10^(10^10) years, with a 79.99992% chance of living 10^(10^(10^10)) years. That's 99.9999% of 80%, so I'm just shaving a tiny fraction 10^-6 off your probability of survival, and in exchange, if you do survive, you'll survive - not ten times as long, my friend, but ten to the power of as long. And it goes without saying that you won't run out of memory (RAM) or other physical resources during that time. If you feel that the notion of "years" is ambiguous, let's just measure your lifespan in computing operations instead of years. Really there's not much of a difference when you're dealing with numbers like 10^(10^10,000,000,000).
My friend - can I call you friend? - let me take a few moments to dwell on what a wonderful bargain I'm offering you. Exponentiation is a rare thing in gambles. Usually, you put $1,000 at risk for a chance at making $1,500, or some multiplicative factor like that. But when you exponentiate, you pay linearly and buy whole factors of 10 - buy them in wholesale quantities, my friend! We're talking here about 10^10,000,000,000 factors of 10! If you could use $1,000 to buy a 99.9999% chance of making $10,000 - gaining a single factor of ten - why, that would be the greatest investment bargain in history, too good to be true, but the deal that Omega is offering you is far beyond that! If you started with $1, it takes a mere eight factors of ten to increase your wealth to $100,000,000. Three more factors of ten and you'd be the wealthiest person on Earth. Five more factors of ten beyond that and you'd own the Earth outright. How old is the universe? Ten factors-of-ten years. Just ten! How many quarks in the whole visible universe? Around eighty factors of ten, as far as anyone knows. And we're offering you here - why, not even ten billion factors of ten. Ten billion factors of ten is just what you started with! No, this is ten to the ten billionth power factors of ten.
Now, you may say that your utility isn't linear in lifespan, just like it isn't linear in money. But even if your utility is logarithmic in lifespan - a pessimistic assumption, surely; doesn't money decrease in value faster than life? - why, just the logarithm goes from 10,000,000,000 to 10^10,000,000,000.
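(If it helps to see the bookkeeping laid out, here is a minimal sketch of that last claim - assuming, purely for illustration, a logarithmic utility over lifespan, and comparing expected utilities in log space because the post-deal numbers won't fit in a float:)

```python
import math

# Omega's first offer, assuming (purely for illustration) a logarithmic
# utility over lifespan: U(L) = log10(L years).
# Before: 80% chance of living 10^(10^10) years.
# After:  79.99992% chance of living 10^(10^(10^10)) years.

p_before = 0.80
p_after  = 0.80 * 0.999999        # = 0.7999992

U_before = 10.0 ** 10             # log10(10^(10^10)) = 10^10
# U_after would be 10^(10^10), far too large for a float, so compare the
# expected utilities in log space: log10(p * U) = log10(p) + log10(U).
log10_EU_before = math.log10(p_before) + math.log10(U_before)   # ~9.903
log10_EU_after  = math.log10(p_after) + U_before                # ~10^10

print(f"log10 of expected utility, before: {log10_EU_before:.3f}")
print(f"log10 of expected utility, after:  {log10_EU_after:.3f}")
# Even under this heavily discounted valuation of lifespan, the deal raises
# log10(expected utility) by roughly 10^10 - 10; a linear utility in
# lifespan makes the comparison more lopsided still.
```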
From a fun-theoretic standpoint, exponentiating seems like something that really should let you have Significantly More Fun. If you can afford to simulate a mind a quadrillion bits large, then you merely need 2^(1,000,000,000,000,000) times as much computing power - a quadrillion factors of 2 - to simulate all possible minds with a quadrillion binary degrees of freedom so defined. Exponentiation lets you completely explore the whole space of which you were previously a single point - and that's just if you use it for brute force. So going from a lifespan of 10^(10^10) to 10^(10^(10^10)) seems like it ought to be a significant improvement, from a fun-theoretic standpoint.
And Omega is offering you this special deal, not for a dollar, not for a dime, but one penny! That's right! Act now! Pay a penny and go from a 20% probability of dying in an hour and an 80% probability of living 10^10,000,000,000 years, to a 20.00008% probability of dying in an hour and a 79.99992% probability of living 10^(10^10,000,000,000) years! That's far more factors of ten in your lifespan than the number of quarks in the visible universe raised to the millionth power!
Is that a penny, friend? - thank you, thank you. But wait! There's another special offer, and you won't even have to pay a penny for this one - this one is free! That's right, I'm offering to exponentiate your lifespan again, to 10^(10^(10^10,000,000,000)) years! Now, I'll have to multiply your probability of survival by 99.9999% again, but really, what's that compared to the nigh-incomprehensible increase in your expected lifespan?
Is that an avaricious light I see in your eyes? Then go for it! Take the deal! It's free!
(Some time later.)
My friend, I really don't understand your grumbles. At every step of the way, you seemed eager to take the deal. It's hardly my fault that you've ended up with... let's see... a probability of 1/10^1000 of living 10^^(2,302,360,800) years, and otherwise dying in an hour. Oh, the ^^? That's just a compact way of expressing tetration, or repeated exponentiation - it's really supposed to be Knuth up-arrows, ↑↑, but I prefer to just write ^^. So 10^^(2,302,360,800) means 10^(10^(10^...^10)) where the exponential tower of tens is 2,302,360,800 layers high.
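(For the curious, the accounting behind that figure is a quick sketch - assuming nothing beyond the 80% starting probability and the 99.9999% haircut per step:)

```python
import math

# How many 99.9999% haircuts does it take to shave an 80% survival
# probability down to 1/10^1000?  Solve 0.8 * 0.999999**n = 10**(-1000).
n = (math.log(0.8) + 1000 * math.log(10)) / (-math.log(0.999999))
print(f"{n:,.0f}")   # about 2,302,360,798 - Omega's 2,302,360,800, give or take rounding

# Each of those steps also stacks one more exponential onto the lifespan,
# which is how the tower ends up roughly 2.3 billion tens high.
```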
But, tell you what - these deals are intended to be permanent, you know, but if you pay me another penny, I'll trade you your current gamble for an 80% probability of living 10^10,000,000,000 years.
Why, thanks! I'm glad you've given me your two cents on the subject.
Hey, don't make that face! You've learned something about your own preferences, and that's the most valuable sort of information there is!
Anyway, I've just received telepathic word from Omega that I'm to offer you another bargain - hey! Don't run away until you've at least heard me out!
Okay, I know you're feeling sore. How's this to make up for it? Right now you've got an 80% probability of living 10^10,000,000,000 years. But right now - for free - I'll replace that with an 80% probability (that's right, 80%) of living 10^^10 years, that's 10^10^10^10^10^10^10^10^10,000,000,000 years.
See? I thought that'd wipe the frown from your face.
So right now you've got an 80% probability of living 10^^10 years. But if you give me a penny, I'll tetrate that sucker! That's right - your lifespan will go to 10^^(10^^10) years! That's an exponential tower (10^^10) tens high! You could write that as 10^^^3, by the way, if you're interested. Oh, and I'm afraid I'll have to multiply your survival probability by 99.99999999%.
What? What do you mean, no? The benefit here is vastly larger than the mere 10^^(2,302,360,800) years you bought previously, and you merely have to send your probability to 79.999999992% instead of 10^-1000 to purchase it! Well, that and the penny, of course. If you turn down this offer, what does it say about that whole road you went down before? Think of how silly you'd look in retrospect! Come now, pettiness aside, this is the real world, wouldn't you rather have a 79.999999992% probability of living 10^^(10^^10) years than an 80% probability of living 10^^10 years? Those arrows suppress a lot of detail, as the saying goes! If you can't have Significantly More Fun with tetration, how can you possibly hope to have fun at all?
Hm? Why yes, that's right, I am going to offer to tetrate the lifespan and fraction the probability yet again... I was thinking of taking you down to a survival probability of 1/(10^^^20), or something like that... oh, don't make that face at me, if you want to refuse the whole garden path you've got to refuse some particular step along the way.
Wait! Come back! I have even faster-growing functions to show you! And I'll take even smaller slices off the probability each time! Come back!
...ahem.
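(A minimal sketch of the up-arrow notation Omega keeps invoking - the recursion below is standard Knuth up-arrow, evaluated only on toy inputs, since the actual quantities in the dialogue are far beyond anything a computer could represent:)

```python
# The ^^ and ^^^ in the dialogue are Knuth's up-arrows.

def up_arrow(a: int, n: int, b: int) -> int:
    """a followed by n up-arrows, then b: n=1 is exponentiation,
    n=2 is tetration (^^), n=3 is pentation (^^^), and so on."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 2, 4))   # 2^^4  = 2^(2^(2^2)) = 65536
print(up_arrow(3, 2, 3))   # 3^^3  = 3^(3^3)     = 7625597484987
print(up_arrow(2, 3, 3))   # 2^^^3 = 2^^(2^^2)   = 2^^4 = 65536
```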
While I feel that the Repugnant Conclusion has an obvious answer, and that SPECKS vs. TORTURE has an obvious answer, the Lifespan Dilemma actually confuses me - the more I demand answers of my mind, the stranger my intuitive responses get. How are yours?
Based on an argument by Wei Dai. Dai proposed a reductio of unbounded utility functions by (correctly) pointing out that an unbounded utility on lifespan implies willingness to trade an 80% probability of living some large number of years for a 1/(3^^^3) probability of living some sufficiently longer lifespan. I looked at this and realized that there existed an obvious garden path, which meant that denying the conclusion would create a preference reversal. Note also the relation to the St. Petersburg Paradox, although the Lifespan Dilemma requires only a finite number of steps to get us in trouble.
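To make Dai's point concrete, here is a small sketch using stand-ins - a logarithmic utility over lifespan and a probability of 1/10^1000, since 1/(3^^^3) is not a number any computer can write down:

```python
# Dai's reductio in miniature, with stand-ins: a logarithmic utility
# U(L) = log10(L years) and a long-shot probability of 1/10^1000.
# Question: how long must L be before a 1-in-10^1000 chance of L beats an
# 80% chance of 10^(10^10) years?

EU_baseline = 8 * 10**9      # 0.8 * U(10^(10^10)) = 0.8 * 10^10
inv_prob    = 10**1000       # the long-shot gamble's probability is 1/inv_prob

# Need U(L)/inv_prob > EU_baseline, i.e. log10(L) > EU_baseline * inv_prob.
required_log10_L = EU_baseline * inv_prob
print(len(str(required_log10_L)))   # 1010 digits: any L above about
                                    # 10^(8 * 10^1009) years wins the comparison
# Any unbounded utility function admits such an L; only a bounded one can
# refuse every sufficiently sweetened version of the trade.
```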
I think I've come up with a way to test the theory that the Repugnant Conclusion is repugnant because of scope insensitivity.
First we take the moral principle the RC is derived from, the Impersonal Total Principle (ITP), which states that all that matters is the total amount of utility* in the world; factors like how it is distributed and the personal identities of the people who exist are not morally relevant. Then we apply it to situations that involve small numbers of people, ideally one or two. If the principle ceases to generate unpleasant conclusions in these cases, then it is likely only repugnant because of scope insensitivity. If it continues to generate unpleasant conclusions, then it is likely the moral principle itself is broken, in which case we should discard it and find another one.
As it turns out, there is an unpleasant conclusion that the ITP generates on a small scale involving only two people. And it's not just something I personally find unpleasant; it's something that most people find unpleasant, and which has generated a tremendous amount of literature in the field of moral philosophy. I am talking, of course, about Replaceability. Most people find it totally evil to hold that it is morally acceptable to kill (or otherwise inflict a large disutility upon) someone if doing so allows you to create someone else whose life will contain the same utility as, or slightly more utility than, the remainder of the first person's life would have contained. Prominent philosophers such as Peter Singer have argued against Replaceability.
And I don't think that the utter horribleness of Replaceability is caused by an aversion to violence, as in the Fat Man variant of the Trolley Problem. This is because the intuitions don't seem to change if we "naturalize" the events (that is, we change the scenario so that the same consequences are caused by natural events that are no one's fault). If a natural disaster killed someone, and simultaneously made it possible to replace them, it seems like that would be just as bad as if they were deliberately killed and replaced.
Since the Impersonal Total Principle generates repugnant conclusions in both large and small scale scenarios, the Repugnant Conclusion's repugnance is probably not caused by scope insensitivity. The ITP itself is probably broken, and we should just throw it out.** Maximizing total utility without regard to any other factors is not what morality is all about.
*Please note that when I refer to "utility" I am not referring to von Neumann-Morgenstern utility. I am referring to "what utilitarianism seeks to maximize," i.e., positive experiences, happiness, welfare, E-utility, etc.
**Please note that my rejection of the ITP does not mean I endorse the rival Impersonal Average Principle, which is equally problematic.