Let's imagine a life extension drug has been discovered.  One dose of this drug extends a person's life by 49.99 years.  The drug also has a mild cumulative effect: if it is given to someone who has been dosed with it before, it extends their life by 50 years instead.

Under these constraints, the most efficient way to maximize the amount of life extension this drug can produce is to give every dose to one individual.  If there were one dose available for each of the seven billion people alive on Earth, then giving every person one dose would result in a total of 349,930,000,000 years of life gained.  If one person were given all the doses, a total of 349,999,999,999.99 years of life would be gained.  Sharing the life extension drug equally would therefore result in a net loss of almost 70 million years of life.  If we're concerned about people's reaction to this policy, we could make it a big lottery, where every person on Earth gets a chance to gamble their dose for a chance at all of them.
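To make the arithmetic above easy to check, here is a minimal sketch of the two policies; the world-population figure and the Powerball jackpot odds of roughly 1 in 175 million are assumptions used only for illustration.

```python
# Sketch of the two dose-allocation policies described above.
POPULATION = 7_000_000_000   # assumed world population
FIRST_DOSE = 49.99           # years added by a first dose
REPEAT_DOSE = 50.0           # years added by each later dose

# Policy A: every person receives exactly one dose.
shared_total = POPULATION * FIRST_DOSE

# Policy B: one person receives every dose.
concentrated_total = FIRST_DOSE + (POPULATION - 1) * REPEAT_DOSE

print(f"Shared equally:    {shared_total:,.2f} years")
print(f"All to one person: {concentrated_total:,.2f} years")
print(f"Cost of sharing:   {concentrated_total - shared_total:,.2f} years")

# How the winner-take-all lottery compares with a Powerball-style jackpot
# (jackpot odds of roughly 1 in 175 million are an assumed figure).
print(f"Odds ratio vs. Powerball: {POPULATION / 175_000_000:.0f}x worse")
```

The gap between the two policies is just the 0.01-year difference between a first dose and a repeat dose, multiplied across almost seven billion repeat doses.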

Now, one could make certain moral arguments in favor of sharing the drug.  I'll get to those later.  However, it seems to me that gambling your dose for a chance at all of them isn't rational from a purely self-interested point of view either.  You will not win the lottery.  Your chances of winning this particular lottery are roughly 40 times worse than your chances of winning the Powerball jackpot.  If someone gave me a dose of the drug, and then offered me a chance to gamble in this lottery, I'd accuse them of Pascal's mugging.

Here's an even scarier thought experiment.  Imagine we invent the technology for whole brain emulation (WBE).  Let "x" equal the amount of resources it takes to sustain a WBE through 100 years of life.  Let's imagine that with this particular type of technology, it costs 10x to convert a human into a WBE and 100x to sustain a biological human through the course of their natural life.  Let's have the cost of making additional copies of a WBE, once it has been converted, be close to 0.

Again, under these constraints it seems like the most effective way to maximize the amount of life extension is to convert one person into a WBE, then kill everyone else and use the resources that had been sustaining them to run more copies of that WBE or to extend the lives of existing WBEs.  Again, if we are concerned about people's reaction to this policy, we could make it a lottery.  And again, if I were given a chance to play in this lottery I would turn it down and consider it a form of Pascal's mugging.
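For concreteness, here is a rough sketch of the resource accounting behind that claim.  The 80-year natural lifespan and the assumption that every person's lifetime upkeep can be redirected to WBEs are illustrative assumptions, not part of the original setup.

```python
# Rough resource accounting for the WBE scenario, in units of "x"
# (x = resources to sustain one WBE for 100 years).
POPULATION = 7_000_000_000
WBE_YEARS_PER_X = 100        # one x buys 100 WBE-years of runtime
CONVERT_COST = 10            # cost (in x) to convert a human into a WBE
BIO_LIFE_COST = 100          # cost (in x) to sustain a biological human for life
BIO_LIFESPAN = 80            # assumed natural lifespan in years (illustrative)

# Total resource budget if every person's lifetime upkeep can be redirected.
budget = POPULATION * BIO_LIFE_COST

# Policy A: everyone lives out a normal biological life.
bio_life_years = POPULATION * BIO_LIFESPAN

# Policy B: convert one person, spend everything else on WBE runtime
# (copying is assumed to be nearly free, so runtime can be split across copies).
wbe_life_years = (budget - CONVERT_COST) * WBE_YEARS_PER_X

print(f"Biological policy: {bio_life_years:,} life-years")
print(f"WBE policy:        {wbe_life_years:,} life-years")
```

Under these made-up numbers the WBE policy buys roughly two orders of magnitude more life-years, which is what gives the scenario its bite.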

I'm sure that most readers, like me, would find these policies very objectionable.  However, I have trouble finding objections to them from the perspective of classical utilitarianism.  Indeed, most readers have probably noticed that these scenarios are very similar to Nozick's "utility monster" thought experiment.  Here is a list of possible objections I have been considering:

1. First, let's deal with the unsatisfying practical objections.  In the case of the drug example, it seems likely that a more efficient form of life extension will be developed in the future.  In that case it would be better to give everyone the drug to sustain them until that time.  However, this objection, like most practical ones, is unsatisfying.  Even setting it aside, there seem to be strong moral objections to not sharing the drug.

Another pragmatic objection is that, in the case of the drug scenario, the lucky winner of the lottery might miss their friends and relatives who have died.  And in the WBE scenario it seems like the lottery winner might get lonely being the only person on Earth.  But again, this is unsatisfying.  If the lottery winner were allowed to share their winnings with their immediate social circle, or if they were a sociopathic loner who cared nothing for others, it still seems bad that they end up killing everyone else on Earth.   

2. One could use the classic utilitarian argument in favor of equality: diminishing marginal utility.  However, I don't think this works.  Humans don't seem to experience diminishing returns from lifespan in the same way they do from wealth.  It's absurd to argue that a person who lives to the ripe old age of 60 generates less utility than two people who die at age 30 (all other things being equal).  The reason the DMU argument works when arguing for equality of wealth is that people are limited in their ability to get utility from their wealth, because there is only so much time in the day to spend enjoying it.  Extended lifespan removes this restriction, making a longer-lived person essentially a utility monster.

3. My intuitions about the lottery could be mistaken.  It seems to me that if I were offered the possibility of gambling my dose of the life extension drug with just one other person, I still wouldn't do it.  If I understand probabilities correctly, then gambling for a chance at living either 0 or 99.99 additional years is equivalent to having a certainty of an additional 49.995 years of life, which is better than the certainty of 49.99 years of life I'd have if I didn't make the gamble.  But I still wouldn't do it, partly because I'd be afraid I'd lose and partly because I wouldn't want to kill the person I was gambling with.

So maybe my horror at these scenarios is driven by that same hesitancy.  Maybe I just don't understand the probabilities right.  But even if that is the case, even if it is rational for me to gamble my dose with just one other person, it doesn't seem like the gambling would scale.  I will not win the "lifetime lottery."
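One way to see why the gamble doesn't scale is to track how the win probability and the per-player expected value move as more people join the pot.  Here is a minimal sketch using the dose numbers from the opening scenario.

```python
# How the dose lottery scales: one winner takes all n doses.
FIRST_DOSE, REPEAT_DOSE, CERTAIN = 49.99, 50.0, 49.99

def lottery_stats(n_players):
    jackpot = FIRST_DOSE + (n_players - 1) * REPEAT_DOSE   # years for the winner
    p_win = 1.0 / n_players
    expected = p_win * jackpot                             # expected years per player
    return p_win, expected

for n in (2, 1_000, 1_000_000, 7_000_000_000):
    p, ev = lottery_stats(n)
    print(f"{n:>13,} players: P(win) = {p:.1e}, "
          f"expected gain over keeping your dose = {ev - CERTAIN:.6f} years")
```

The expected gain over simply keeping your dose tops out at 0.01 years, while the chance of actually being the one survivor collapses toward zero.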

4. Finally, we have those moral objections I mentioned earlier.  Utilitarianism is a pretty awesome moral theory under most circumstances.  However, when it is applied to scenarios involving population growth, or scenarios where one individual is vastly better at converting resources into utility than their fellows, it tends to produce very scary results.  If we accept the complexity of value thesis (and I think we should), this suggests that there are other moral values that are not salient in the "special case" of scenarios with no population growth or utility monsters, but become relevant when those features are present.

For instance, it may be that prioritarianism is better than pure utilitarianism, in which case sharing the life extension method might be best because of the benefits it accords the least well off.  Or it may be (in the case of the WBE example) that having a large number of unique, worthwhile lives in the world is valuable because it produces experiences like love, friendship, and diversity.

My tentative guess at the moment is that there probably are some other moral values that make the scenarios I described morally suboptimal, even though they seem to make sense from a utilitarian perspective.  However, I'm interested in what other people think.  Maybe I'm missing something really obvious.

 

EDIT:  To be clear, when I refer to "amount of years added" I am assuming, for simplicity's sake, that all the years added are years that the person whose life is being extended wants to live, and that they contain a large number of positive experiences. I'm not saying that lifespan is exactly equivalent to utility. The problem I am trying to resolve is that the scenarios I've described seem to maximize the number of positive events it is possible for the people in the scenario to experience, even though they involve killing the majority of the people involved.  I'm not sure "positive experiences" is exactly equivalent to "utility" either, but it's likely a much closer match than lifespan.


The problem: simplified measuring system. Any time you privilege exactly one of the things humans care about over all the others you get confusing or weird results.

An example: A drug that makes you live 1000 years longer, but makes you about as intelligent as a cow. I would not take this drug, nor would I expect most people to, but that's baffling if you simply consider it in terms of "years of life gained".

The number that utilitarians try to make go up is a lot more complicated than any one factor.

That's a very good point.  I assumed, for simplicity's sake, that the years added would all be, on average, worthwhile years that the person whose life is being extended would want to live.  So actually, rather than just "amount of years," I should have said "amount of years that contain worthwhile experiences."  It seems to me that that is a pretty close approximation to the number utilitarians are trying to make go up.

I'll edit the OP to make that clearer.


It seems to me that [amount of years that contain worthwhile experiences] is a pretty close approximation to the number utilitarians are trying to make go up.

I'm highly doubtful. I can easily imagine a utilitarian dying to save their child or fighting in a war. If your model of utilitarianism doesn't allow that or concludes that real people in real circumstances are that badly miscalibrated, that's a problem with your model.

That's true, but it can be explained by the belief that losing a war will reduce the number of positive experiences in their lives, and that they would rather their child have positive experiences than have them themselves.  But it is true that there are probably some values that can't be reduced to an experience.  For instance, most people prefer to actually accomplish their goals in life and would not want a faked experience of having accomplished them.

It's absurd to argue that a person who lives to the ripe old age of 60 generates less utility than two people who die at age 30

unless the presence of a second person amplifies the actual utility of both individuals.

Give all the doses to one person. That one person gets hit by a car crossing the street. You've lost -all- your utility.

Similarly, your one EM goes insane.

Your marginal utility calculations don't include -risk-.

The odds of dying in a plane crash are, approximately, one in one million, for a one hour flight. Assuming one flight per month - this is an important person, remember - your actual expected lifetime extension falls short of the nominal value by hundreds of billions of years. The gains from giving it all to one person are about 70 million years. "All the eggs in one basket" results in a massive net loss.

I can't find odds of a healthy mind going insane, but I suspect they're substantially higher than those of dying in a plane crash. Especially in the case of an emulated mind which is outside its "comfort" zone, and -way- outside the lifespan for which that mind has evolved. That is of course assuming an adequately emulated brain.

There are other issues, but this hurdle seems the most obvious.

The odds of dying in a plane crash are, approximately, one in one million, for a one hour flight.

Where did you get that number?

I Googled it and went with the first answer I found, although checking the sources of my source, their numbers are off by a factor of eight; it should be around one in eight million. At one flight per month, that means an average of roughly 670,000 years of flying before dying in a plane crash, rather than the roughly 83,000 years implied by my original figure. The argument remains intact, however; the per-flight risk would need to drop by another five or six orders of magnitude before the expected crash time even matched a life expectancy in the hundreds of billions of years, and further still before it stopped being a dominant consideration.
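As a sanity check on those figures, here is a small sketch of the expected wait before a fatal crash under the one-flight-per-month assumption stated above, using the two per-flight risk estimates quoted in this thread.

```python
# Expected years of monthly flying before a fatal crash,
# assuming independent flights with a fixed per-flight fatality risk.
FLIGHTS_PER_YEAR = 12

def expected_years_until_crash(per_flight_risk):
    # Geometric distribution: expected number of flights until death = 1 / p.
    return (1 / per_flight_risk) / FLIGHTS_PER_YEAR

for risk in (1e-6, 1 / 8_000_000):
    print(f"risk 1 in {1/risk:>11,.0f}: "
          f"~{expected_years_until_crash(risk):,.0f} years of monthly flights")
```

Either way, the expected crash time is a vanishing fraction of a lifespan measured in hundreds of billions of years.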

I look at this and think "diminishing marginal utility" and "negative utility of Death events".

You can of course transform the payoffs so as to restore the situation with real utilons instead of life-years, and then you have the Lifespan Dilemma.  Incidentally, now that I know about the fast-growing hierarchy in some detail, I would offer "Increase the ordinal of the function in the fast-growing hierarchy by 1 each time" as a bargain you should take every time unless your utility function in life-years is just plain bounded.


The Lifespan Dilemma seems very similar to drug use dilemmas in general. For instance, using the generic term "dose" - which could refer to either a lifespan-boosting chemical or a euphoriant drug - they appear to share several premises and points.

Premise 1: The first dose is good.

Premise 2: The next dose is better.

Premise 3: Too many doses will almost certainly hurt/kill you.

Point 1: Precommitting to take a certain arbitrary number of doses can give better results than taking 0, 1 or arbitrarily large doses.

Point 2: Be extremely wary of taking doses from someone you don't trust. They may be trying to pump you for utility.

Also, similar topics appear to come up a lot in Less Wrong debates in general. For instance, the Gandhi and the murder pills dilemma.

I'm not sure if I have any conclusions about this, but it does seem noteworthy.

You're right, this sounds exactly like the Gandhi-murder-pills dilemma. It has all the premises you described, plus the risk of killing innocent people. And it probably has the same solution.

You can of course transform the payoffs so as to restore the situation with real utilons instead of life-years, and then you have the Lifespan Dilemma.

So it seems like I accidentally recreated the Lifespan Dilemma. That makes sense. I skipped the parts of that article that described the LD in detail because I was afraid that if I read about it I would waste huge amounts of my time unsuccessfully trying to solve it and freak out when I was unable to. Which is exactly what happened when I thought up these dilemmas.

blinks

I didn't realize the Lifespan Dilemma was a cognitive hazard. How much freakout are we talking about here?

I thought of this dilemma when I was trying to sleep and found it impossible to sleep afterwards, as I couldn't stop thinking about it. For the rest of the day I had trouble doing anything because I couldn't stop worrying about it.

I think the problem might be that most people seem to feel safe when discussing these sorts of dilemmas: they're thinking about them in Far Mode and just consider them interesting intellectual toys. I used to be like that, but in the past couple of years something has changed. Now when I consider a dilemma I feel like I'm in actual danger; I feel the sort of mental anguish you'd feel if you actually had to make that choice in real life. I feel like I was actually offered the Lifespan Dilemma and really do have to choose whether to accept it or not.

I wouldn't worry about the Lifespan Dilemma affecting most people this way. My family has a history of Obsessive Compulsive Disorder, and I'm starting to suspect that I've developed the purely obsessional variety. In particular, my freakouts match the "religiosity" type of POOCD, except that since I'm an atheist I worry about philosophical and scientific problems rather than religious ones. Other things I've freaked out about include:

-Population ethics

-Metaethics

-That maybe various things I enjoy doing are actually as valueless as paperclipping or cheesecaking.

-That maybe I secretly have simple values and want to be wireheaded, even though I know I don't want to be.

-Malthusian brain emulators

These freakouts are always about some big abstract philosophical issue; they are never about anything in my normal day-to-day life. Generally I obsess about one of these things for a few days until I reach some sort of resolution about it. Then I behave normally for a few weeks until I find something new to freak out over. It's very frustrating because I have a very high happiness set point when I'm not in one of these funks.

Okay, that sounds like it wasn't primarily the fault of the Lifespan Dilemma as such (and it also doesn't sound too far from the amount of sleep I lose when nerdsniped by a fascinating new mathematical concept I can't quite grasp, like Jervell's ordinal notation).


Look. Simple utilitarianism doesn't have to be correct. It looks like a wrong idea to me. Often, when reasoning informally, people confabulate wrong formal-sounding things that loosely match their intuitions, and then declare them normative.

Is a library made up of copies of one book worth the same to you as an ordinary library? Is a library of books by one author worth as much? Does variety ever truly count for nothing? There's no reason why u("AB") should be equal to u("A")+u("B"). People pick + because they are bad at math, or perhaps bad at knowing when they are being bad at math. Edit: When you try to math-ize your morality, poor knowledge of math serves as Orwellian newspeak: it defines the way you think. It is hard to choose the correct function even if there is one, and years of practice on overly simple problems make the wrong functions pop into your head.

The Lifespan Dilemma applies to any unbounded utility function combined with expected utility maximization; it does not require simple utilitarianism.

Interestingly, I discovered the Lifespan Dilemma through this post. While it didn't cause a total breakdown of my ability to do anything else, it did consume an inordinate amount of my thought process.

The question looks like an optimal betting problem: you have a limited resource, and need to get the most return. According to the Kelly Criterion, the optimal fraction of your total bankroll to bet is f* = (p(b+1) - 1)/b, equivalently p - (1-p)/b, where p is the probability of success and b is the net return per unit risked. The interesting thing here is that for very large values of b, the fraction of bankroll to be risked almost exactly equals the probability of winning. Assuming a bankroll of 100 units and a 20 percent chance of success, you should bet the same amount whether b = 1 million or b = 1 trillion: 20 units.
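Here is a minimal sketch of that calculation, using the standard Kelly formula with b as the net return per unit risked.

```python
# Kelly fraction f* = (p*(b+1) - 1) / b, equivalently p - (1-p)/b,
# where p is the win probability and b the net return per unit risked.
def kelly_fraction(p, b):
    return (p * (b + 1) - 1) / b

BANKROLL = 100
p = 0.20
for b in (1e6, 1e12):
    f = kelly_fraction(p, b)
    print(f"b = {b:.0e}: bet {f:.6f} of bankroll = {BANKROLL * f:.4f} units")
```

Both cases come out to essentially 20 units, because for large b the Kelly fraction converges to the win probability p.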

Eager to apply this to the problem at hand, I decided to plug in the numbers. I then realized I didn't know what the bankroll was in this situation. My first thought was that the bankroll was the expected time left: percent chance of success * time if successful. I think this is the mode that leads to the garden path: every time you increase your time of life if successful, it feels like you have more units to bet with, which means you are willing to spend more on longer odds.

Not satisfied, I attempted to re-frame the question in terms of money. Stated like this: I have $100, and in 2 hours I will have either $0 or $1 million, with an 80% chance of winning. I could trade my 80% chance for a 79% chance of winning $1 trillion. So, now that we are in money, where is my bankroll?

I believe that is the trick: in this question, you are already all in. You have already bet 100% of your bankroll for an 80% chance of winning; in 2 hours, you will know the outcome of your bet. For extremely high values of b, you should have bet only 80% of your bankroll, so you are already underwater. Here is the key point: changing the value of b does not change what you should have bet, or even your bet at all; that's locked in. All you can change is the probability, and you can only make it worse. From this perspective, you should accept no offer that lowers your probability of winning.

I consider fairness to be part of my terminal values, and while I consider myself a utilitarian, I reject both total and average utilitarianism. The same way that here at LW we agree that human values are complicated and that you just can't reduce everything to one value (like "happiness"), I think that the way to aggregate utility among everyone is complicated: just taking the average or the sum of all utilities won't work, and we need a compound function involving the average, sum, median, Gini, ... of the individual utilities. And I don't know the real shape of that function any more than I know the shape of my own utility function; I can tell you what it involves, but not exactly how it works.

The more general problem is that utilitarianism by arithmetic keeps giving apparently weird and nonsensical results. I suspect it's an error to say "the apparently weird and nonsensical result must be of vast importance" rather than "perhaps this approach doesn't work very well".

If I understand probabilities correctly, then gambling for a chance at living either 0 or 99.99 additional years is equivalent to having a certainty of an additional 49.995 years of life, which is better than the certainty of 49.99 years of life I'd have if I didn't make the gamble.

Equivalent only in the sense that the expected values are the same. This equivalence in expected values does not translate to any relation in expected utilities. A utilitarian has no reason to treat a 0 or 99.99 lottery as the same as a certain 49.995, and has no reason to conclude that the lottery is better than a certain 49.99.

Evidently I don't understand how this works. I was under the impression that it was irrational to treat certain and expected values differently.

On the other hand, my math is probably wrong. When I did the same calculations with a lottery that had 1-in-136-million odds of winning and a 580 million dollar jackpot, I calculated that buying a lottery ticket had an expected utility of $3. This seems obviously wrong; otherwise everyone would jump at the chance to spend $1 or $2 on a lottery ticket.

Evidently I'm even worse at math than I thought.

I don't think your math is wrong or bad. Rather, your confusion seems to come from conflating expected values with expected utilities. Consider an agent with several available actions. Learning the expected value (of, for example, money gained) from doing each action does not tell us which action the agent prefers. However, learning the expected utility of each action gives a complete specification of the agent's preferences with respect to these actions. The reason expected utility is so much more powerful is simple—the utility function (of the agent) is defined to have this property.

I gave a brief explanation of utility functions in this previous comment.

In your lottery example, the expected value is $3, but the expected utility is unspecified (and is different for each person). Thus we cannot tell if anyone would want to spend any amount of money on this lottery ticket.
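To make the distinction concrete, here is a toy sketch using the lottery figures quoted above; the $50,000 starting wealth, the $1 ticket price, and the logarithmic utility function are illustrative assumptions, not claims about anyone's actual preferences.

```python
import math

# Toy comparison of expected value vs. expected utility for a lottery ticket.
JACKPOT = 580_000_000        # jackpot quoted in the thread
P_WIN = 1 / 136_000_000      # odds quoted in the thread
TICKET = 1                   # assumed ticket price
WEALTH = 50_000              # assumed current wealth

def u(w):
    return math.log(w)       # one possible (risk-averse) utility function

# Expected change in dollars from buying a ticket.
ev = P_WIN * JACKPOT - TICKET

# Expected change in utility from buying a ticket.
eu_buy = P_WIN * u(WEALTH - TICKET + JACKPOT) + (1 - P_WIN) * u(WEALTH - TICKET)
eu_change = eu_buy - u(WEALTH)

print(f"Expected dollar gain:    {ev:+.2f}")
print(f"Expected utility change: {eu_change:+.2e}")
```

Under these made-up numbers the ticket has a positive expected dollar value but a negative expected utility, which is why a positive expected value alone cannot tell you whether anyone should buy it.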

A potential practical worry for this argument: it is unlikely that any such technology will ever yield exactly one dose per person and no more. Most resources are better collected, refined, processed, and utilized when you have groups. Moreover, existential risks tend to increase as the population decreases: a species with only 10 members is more likely to die out than a species with 10 million, ceteris paribus. The pill might extend your life, but if you have an accident, you probably need other people around.

There might be some ideal number here, but offhand I have no way of calculating it. Might be 30 people, might be 30 billion. But it seems like risk issues alone would make you not want to be the only person: we're social apes, after all. We get along better when there are others.

The reason the DMU argument works when arguing for equality of wealth is that people are limited in their ability to get utility from their wealth, because there is only so much time in the day to spend enjoying it.

That is a reason for diminishing marginal utility, not the reason. Maybe it's even so much bigger than all the other reasons that thinking of it as the only reason gives you a pretty good approximation of how much marginal utility you gain from each dollar. But just because this particular reason does not apply to lifespans does not imply that you are not allowed to be risk averse about your lifespan. In general, you do not need an excuse to be allowed to be risk averse; risk aversion is perfectly compatible with expected utility theory. I think thought experiments along the lines of the one you propose make a compelling demonstration that humans are risk averse about almost everything. This is not inconsistent.

In general, you do not need an excuse to be allowed to be risk averse; risk aversion is perfectly compatible with expected utility theory. I think thought experiments along the lines of the one you propose make a compelling demonstration that humans are risk averse about almost everything. This is not inconsistent.

Thank you; for some reason I thought that it was inconsistent, that there was somehow an objective way to determine how to fit probabilities into your utility function. Your comment and others have indicated to me that this is probably not the case.

That's not quite right. A better way to put it is that probabilities are the only thing that there is an objective way to fit into a utility function. If X is worth 1 util, and Y is worth 3 utils, then a lottery that gives you X if a fair coin lands heads and Y if it lands tails is worth 2 utils.

But there is no objective way to fit time into a utility function. It is possible that a 30-year life is worth 200 utils, but a 60-year life is only worth 300 utils, instead of 400.

I'm not sure I get it. What I inferred from your first comment was that it is not irrational to be averse to risky ventures, even if the probabilities seem beneficial. Or to put it another way, the Endowment Effect is not irrational. I am starting to think the Endowment Effect might be responsible for a lot of the hesitancy to engage in lifespan gambles.

But there is no objective way to fit time into a utility function. It is possible that a 30-year life is worth 200 utils, but a 60-year life is only worth 300 utils, instead of 400.

I find this idea disturbing because it might imply that once someone reaches the age of 30 you should (if you can) kill them and replace them with a new person who has the same utility function about their lifespan.

What I inferred from your first comment was that it is not irrational to be averse to risky ventures, even if the probabilities seem beneficial.

That is correct. You are not obligated to value X the same amount as a 50% chance of getting 2X, whether X is a unit of money, lifespan, or whatever. But that's because your utility function does not have to be linear with respect to X. If you say that X is worth 1 util and 3X is worth 2 utils, that's just another way of saying that X is just as valuable as a 50% chance of getting 3X. A utility function is just a way of encoding both the order of your preferences and your response to risk.

Or to put it another way, the Endowment Effect is not irrational.

No, the Endowment Effect is status quo bias, which is different from risk aversion. It changes your relative preferences when your assessment of the status quo changes, potentially making people decline deals that, taken together, would leave them strictly better off, so it still is irrational. There are models of risk aversion which are completely time-symmetric (not dependent on the status quo), like exponential discounting.

Given where this conversation is going, I should clarify that the Endowment Effect does not strictly speaking violate the expected utility axioms. It's just that most people have a strong intuition that temporary changes in your ownership of resources that get reversed again before you would even get a chance to use the resources cannot possibly matter, and under that assumption, the Endowment Effect is irrational.

I am starting to think the Endowment Effect might be responsible for a lot of the hesitancy to engage in lifespan gambles.

Only partially. Our risk-aversion with respect to our future lifespan has very little to do with the Endowment Effect, and can be modeled by perfectly status quo-ignoring exponential discounting.
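For illustration, here is a small sketch of that kind of model: the exponentially discounted value of T additional life-years is concave in T, so it is risk averse about lifespan gambles regardless of the status quo. The discount rate is an arbitrary illustrative choice.

```python
import math

# Value of T additional life-years under exponential discounting at rate r:
# V(T) = integral of e^(-r*t) dt from 0 to T = (1 - e^(-r*T)) / r.
# This is concave in T, so it penalizes gambles over lifespan
# no matter how long the agent has already lived.
R = 0.02   # arbitrary illustrative discount rate

def discounted_value(years):
    return (1 - math.exp(-R * years)) / R

certain = discounted_value(49.99)
gamble = 0.5 * discounted_value(0) + 0.5 * discounted_value(99.99)

print(f"Certain 49.99 years:        {certain:.3f}")
print(f"50/50 gamble on 0 or 99.99: {gamble:.3f}")
```

With this value function the certain 49.99 years beats the gamble even though the gamble has the slightly higher expected number of years, and nothing in the calculation depends on how long the agent has already lived.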

However, we also have an intuition that once a person has been created, keeping them alive is more valuable than creating them in the first place. In a sense, this is the Endowment Effect, but unlike in the case of material resources, it does not seem obvious that someone continuing to live a certain amount of time should be just as valuable as someone starting to live the same amount of time. Hence, it is possible to value 60 years of future life less than twice as much as 30 years of future life for someone who already exists, but also value creating one person who will live 60 years more than creating two people who will live 30 years each.


If consequentialism were that straightforward, that repugnant conclusion might hold. Killing everyone who reaches age 30, though, would diminish the utility of everyone's lives by more than the remaining years they'd lose, for they'd also have the disutility of knowing their days were numbered (and someone would have the disutility of knowing they'd have to do the killing). Also, one's life has utility to others as well as to oneself. If everyone were euthanized at age 30, parenthood would have to begin at age 12 for children to be raised for a full 18 years.

Seems to me there are a few meaningful possibilities here.

The first is that diversity actually is something we value. If so, then trading away all of our diversity in exchange for more years of life (or of simulated life, or whatever) for one individual is a mistake, and our intuition that there's something wrong with doing that can be justified on utilitarian grounds, and there's no paradox.

The second is that diversity actually isn't something we value. If so, then trading away all of our diversity for more years of life (etc) is not a mistake, and our intuition that there's something wrong with doing that is simply false on utilitarian grounds (just like many of our other intuitions are false), and there's still no paradox.

It certainly seems to me that diversity is something I value, so I lean towards the former. But I can imagine being convinced that this is just a cached judgment and not actually something I really value on reflection.

Of course, if it turns out that there's some objective judge of value in the world, such that it might turn out that we don't value diversity but that it's still wrong to trade it all away, or that we do value diversity but it turns out the right thing to do is nevertheless to trade it all away, then there would be a problem. But I don't think that idea is worth taking seriously.

Another possibility is that some of us value it and some of us don't. In which case we ultimately will get some sort of negotiation between the groups over how the drug/emulator/whatever is used. But this will be a negotiation between opposed groups, not a moral argument. (Though of course uttering moral arguments can certainly be part of a negotiation.)

1) I don't find utility monsters hugely counter-intuitive for total util. If you replace the standard utility monster with something like "extremely efficient simulation of many consciousnesses", then converting all our inefficient fleshy beings into a much greater number of (let's say) similarly happy other beings seems a good deal to me. The bite of the utility monster case, to me, seems to arise mainly from the fact that we are choosing between one really happy being and lots of not-so-happy beings.

2) Diminishing marginal returns with lifespan don't strike me as absurd - indeed, they strike me as obvious. There seem to be good reasons to think that in the very long run added lifespan will add less value, as you will do the most important/fun things first, and so extra increments of life will add elements of less value. Perhaps more persuasively, it captures why we are (generally) risk averse with respect to lifetime gambles (certainty of 20 more years, or a half chance at 40?). If marginal returns from lifespan were increasing, that would imply we should be risk-seeking on these gambles, which seems wrong.

My first reply to you focused on point number 2. But in retrospect, I've realized that your point 1 has utterly horrifying implications.

I don't find utility monsters hugely counter-intuitive for total util. If you replace the standard utility monster with something like "extremely efficient simulation of many consciousnesses", then converting all our inefficient fleshy beings into a much greater number of (let's say) similarly happy other beings seems a good deal to me.

If you want a more grounded form of the utility monster thought experiment, there's no need to invoke futuristic technology. You need only consider the many disutility monsters who actually exist right now, in real life. The world is full of handicapped people whose disabilities make their lives harder than normal, but still worth living. If I am not mistaken, the ethical theory you are proposing would recommend killing all of them and replacing them by conceiving new, healthy people.

Any ethical theory that suggests we ought to kill handicapped people (who are leading worthwhile lives) and replace them with healthy people has totally failed at being an ethical theory [1].

When people have severe health problems or disabilities, we do not kill them and use the money we save to pay some young couple to conceive a replacement. We take care of them. True, we don't devote all the resources we possibly can to caring for them, but we still try harder than one would expect if the total view were correct.

Why do we do this? I suspect it is because people's ethics are more in line with prioritarianism than with utilitarianism. Helping the least well off is good even if they are bad at converting resources into utility. Helping the least well off might not have infinite value; there might be some sufficiently huge amount of regular utility that could override it. But it is really, really important.

And no, "not existing" does not count as a form of being "least well off." Human beings who exist have desires and feelings. They have future-directed preferences. If they die they cannot be replaced. Nonexistant people do not have these properties. If they ever do come into existence they will, so it makes sense to make sure the future will be a good place for future people. But since nonexistant people are replaceable it makes sense to sometimes not create them for the sake of those who already do exist.

Now, you might rightly point out that while people seem to favor "prior existence" or "person-affecting" viewpoints like this, there is a certain point after which they stop. For instance, most people would find it bad if the human race went extinct in the future, even if its extinction benefited existing people. What that suggests to me is that having a decent number of worthwhile lives in existence is an important value for people, but one that has diminishing returns relative to other values. Preventing the human race from going extinct is a good thing, but once you've assured that a decent number of people will exist, other things become important.

I think that this establishes at least three different values in play, which have diminishing returns relative to each other:

  1. Amount of utility (I am unsure whether that is total or average. It may be some combo of both).

  2. Prioritarianism (I am unsure whether that is traditional prioritarianism, or whether something like Amartya Sen's "capabilities approach" is better).

  3. Number of worthwhile lives (I'm not sure whether having a large number of worthwhile lives is literally what is valuable, or if what is actually valuable is things like love, friendship, diversity, etc. that can only be achieved by having large numbers of worthwhile lives. I suspect the latter).

This ethical system can explain why people are willing to not create new people in order to benefit existing people in situations where the population is large, but would not do so if it would cause the human race to go extinct. It can explain why people feel we should still share resources even in "utility monster" scenarios. And it explains why it is bad to kill handicapped people.

[1]. I know Peter Singer has gained some notoriety for suggesting it might be acceptable to kill handicapped infants. However, his argument was based on the idea that the infants had not matured enough to attain personhood yet, not on the idea that it's okay to kill a fully developed person and replace them with someone who might enjoy their life slightly more. Your ethical theory would endorse killing fully grown adult people with normal cognitive skills merely because they possess some physical disability or health problem.

There seem to be good reasons to think that in the very long run added lifespan will add less value, as you will do the most important/fun things first, and so extra increments of life will add elements of less value

I think a more obvious reason is that your risk of developing a health problem that kills you or destroys your quality of life increases as you age. I think an implicit part of the dilemma is that these risks are largely eliminated. I also think that society is currently pretty good at generating new fun and important things to do. Maybe in the very long run it'll run out of steam, but I think added decades or centuries of healthy years would be about as fun as the years you have already lived.

Plus suggesting that lifespan has diminishing marginal returns has the rather unpleasant implication that at some point you'd get more utility by killing someone and replacing them with someone new (who will live the same amount of time as the dead person would have lived), which seems very wrong.

Perhaps more persuasively, it captures why we are (generally) risk averse with respect to lifetime gambles (certainty of 20 more years, or a half chance at 40?)

It seems more likely to me that this is due to the Endowment Effect. I don't know if this is a bias or if it just means that risk aversion with your life is a terminal value.

I think you'll agree that even if lifespan does have diminishing returns, it probably has smaller diminishing returns (past a certain point) than money. Yet people take insanely stupid risks to get huge amounts of money. Maybe that means that risk aversion isn't 100% correlated with diminishing marginal utility.

If marginal returns from lifespan were increasing, that would imply we should be risk-seeking on these gambles, which seems wrong.

I don't think they're increasing; I think they're fairly constant. I think that if you add another 20 years to someone's life, then, provided they are healthy years and other factors like income and social connectedness stay constant, those years will probably produce about the same amount of utility as the previous 20 years would have. In spite of this, I agree with you that it seems wrong to be risk-seeking, but I don't know why. Maybe the Endowment Effect again.

The initial scenario seems contrived. Your calculation essentially just expresses the mathematical fact that there is a small difference between the numbers 49.99 and 50, which becomes larger when multiplied by 7 billion minus one. What motivates this scenario? How is it realistic or worth considering?

Um. Give everyone the 49.99 extra years. Use that time to make a second dose for everyone.

I'm seriously confused how this could possibly be considered a hard question...

  2. One could use the classic utilitarian argument in favor of equality: diminishing marginal utility. However, I don't think this works. Humans don't seem to experience diminishing returns from lifespan in the same way they do from wealth. It's absurd to argue that a person who lives to the ripe old age of 60 generates less utility than two people who die at age 30 (all other things being equal).
I'd say that giving all the doses to one person results in less marginal utility per year than dividing the doses among the population, because when one person receives all the doses, they take a hit to the average utility of each of their years by outliving everybody they've ever known and loved.

I would certainly rather live 999 years along with the people I care about than 1000 years, during which I have no peers or loved ones who share comparable lifespans.