My linked blog post explores a variety of arguments and counterarguments about death, but is not aimed at LW readers. However, there's one particular argument which I wanted to highlight and discuss here:

"For simplicity, let's consider the case where every couple has exactly 2 children no matter how long they live. Then we face the following conundrum: is it better to have x people living for y years each, or to increase longevity and have x/n people living for y*n years each? The population at each time is the same either way, it's merely the distribution of those lives which changes.

In this scenario, for death to be good on a population level, it's not necessary that our lives end up with negative value as our lifespans increase. Instead, it simply needs to be the case that our longer lives end up worse, on average, than those of the children who would have replaced us. Is that likely? Anders Sandberg argues that with longer lives we would "waste" less time, proportionately, in early childhood and in ill health during old age. However, less of our lives would be spent on new and exciting experiences.

The importance of the latter effect depends on whether what we value is more like life satisfaction or more like hedonic experiences. If the former, then repeating experiences probably won't add much satisfaction overall - winning Olympic gold twice is probably not twice as satisfying as winning once. Perhaps we could even get a rough idea of the drop-off in marginal value by asking people questions like whether they prefer another 25 years of healthy life for certain over a 50% chance of another 60 years of healthy life and a 50% chance of imminent death. (However, it's unclear how much of a bias towards the former would be introduced by risk aversion.) On the other hand, if hedonic experiences are what we care about, then it's worth noting that people actually report being happiest around their 70s, so it seems plausible that we could be equivalently happy for much longer than the normal lifespan.

It's also plausible that many people would end up significantly happier than those reports suggest, because a great deal of sorrow often accompanies both the deaths of loved ones and the contemplation of one's own upcoming death. However, this grief usually seems to be manageable when such deaths come at a "natural age"; also, it seems that most people don't actually worry very much about their mortality. Perhaps that is due to an irrational acceptance of death; but it feels circular to argue that death is bad partly because, if people were rational about how bad death is, they would be very sad.

I find myself stuck. I cannot displace the intuition that death is the worst thing that will ever happen to me - on an individual level, a moral atrocity - and that I should delay it as long as possible. However, from the standpoint of population ethics it's quite plausibly better that more people live shorter lives than that fewer people live longer ones. This gives us the strange result that we could prefer a world in which many moral atrocities occur to a world in which very few do, even if the total number of years lived is the same and the average welfare of those years is very similar. More technically, this is a conflict between a population ethics intuition which is person-affecting (roughly, the view that acts should only be evaluated by their effects on people who already exist) and one which is not. I think there are very good reasons not to subscribe to person-affecting moral theories - since, for example, they can fairly easily end up endorsing human extinction. The problem is that our everyday lives rest so heavily on person-affecting beliefs that consistency is very difficult - for example, it feels like the moral value of having a child is almost negligible, even though it's roughly consequentially equivalent to saving a life."
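To make the bookkeeping behind the quoted conundrum explicit, here is the arithmetic in the passage's own variables: total life-years are identical under both options, so any difference in value must come from how those years are packaged into individual lives, not from their quantity.

$$x \cdot y \;=\; \frac{x}{n} \cdot (y \cdot n) \;=\; xy \ \text{total life-years}$$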
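Similarly, spelling out the lottery question quoted above: the gamble offers more life-years in expectation, so a preference for the certain option would suggest diminishing marginal value of additional years (or, as noted, mere risk aversion).

$$\underbrace{25}_{\text{certain}} \;<\; 0.5 \times 60 + 0.5 \times 0 \;=\; 30 \ \text{expected years}$$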

I appreciate that the hypothetical of a population which stays the same size no matter how fast or slowly people die is a fairly implausible one, but I think the underlying idea is important: resource constraints will always exist. For example, what moral system would future generations require in order to believe that reviving someone who's been cryonically frozen is preferable to simply creating a new life using those same resources?

10 comments:

I think the person-affecting view shouldn't be dismissed so quickly. For example, when we talk about poverty-alleviation or health interventions in EA, we talk about how that's good because it makes actual people better off. Similarly, when something is bad, we point to people for whom it's bad, e.g. those who suffer as a consequence of an action. Saving a life isn't consequentially equivalent to creating one, because the counterfactuals are different: in the former, a life would've been nonconsensually terminated, which is bad for that person, but in the latter, there's no one for whom it would be bad. Nor does the person-affecting view endorse human extinction, though it evaluates it less negatively than total utilitarianism does.

So even if, from a total or average utilitarian view, it would be better if you were eventually replaced by new lives, they wouldn't miss out on anything if they weren't created, so they shouldn't count for anything when deciding not to create them - but those who already exist would count either way.

Interesting points. I agree that the arguments against non-person-affecting views are rather compelling, but I still find the arguments against person-affecting views even more persuasive. Person-affecting views can easily endorse extinction if it's going to occur after almost everyone currently alive has died anyway - for example, if there's a meteorite 150 years away from destroying the earth, and we could easily avert it but would need to raise taxes by 1% to do so, I think most person-affecting views would say to let it hit (assuming it's a secret meteorite, etc.).

There's also a second way in which they endorse extinction. Almost nobody can stomach the claim that it's morally neutral to create people who you know will be tortured for their whole lives; therefore, person-affecting views often end up endorsing an asymmetry where it's bad to create people with net-negative lives but neutral to create people with net-positive lives. But unless you predict an incredibly utopian future, that's an argument for human extinction right now - since otherwise there will be enough net-negative people in the future to outweigh the interests of everyone currently alive.

I agree that it's weird to think of saving a life as equivalent to creating one, but can we actually defend saving a life as being more important in general? The most basic case: either you can save a 20-year-old who will live another 60 years, or else have a child who will live 60 years total. You say that the former is better because it avoids nonconsensual termination. But it doesn't! The 20-year-old still dies eventually... Of course, in the latter case you have two nonconsensual deaths rather than one, but there's an easy fix for that: just raise the child so it won't be scared of death! I know that sounds stupid, but it's sort of what I was getting at when I claimed that some arguments about death are circular: they only apply to people who already think that death is bad. In fact, it seems like most people are pretty comfortable with the thought of dying, so raising the child that way wouldn't even be unusual. On that view, the only reason death is morally bad is that we don't consent to it, and so convincing people not to fear death would be just as good for the world as actually making them immortal.

There is no such thing as a population ethics that is independent of individual preferences about populations. If you would rather live longer, but also feel like you would prefer the world in general have some population turnover, this conflict is entirely within your own ethical system.

Humans are not utility maximizers, and you're totally allowed to have difficult and weird problems making ethical choices. But in exchange for this trouble, and this responsibility to live up to your own standards, there's at least the consolation that if you do your best, that's sufficient for you. The universe doesn't judge you, you judge you.

I agree that this is a slightly weird conflict within my own ethical system. The reason that I brought it up here is that this particular conflict follows from a few seemingly plausible claims about marginal value of lives, plus the standard LessWrong beliefs that death (for an individual) and extinction (for our species) are both very bad things. I'm curious whether this is an implicit tension in many people's views, or whether somebody has found an adequate solution which I'm not yet aware of.

I can see the appeal of population turnover, but don't personally find it a persuasive reason to kill people.

A significant confounder here is that ageing currently impairs cognitive function, in a way that affects these calculations. The solution would largely depend on the specific way in which the ageing process is prevented.

Agreed that the specifics matter, but since it's futile to make detailed predictions about them, I'm assuming the simplest case, in which the ageing process is slowed overall, including in brain cells. (A longevity treatment which left your body healthy but your mind gone wouldn't really deserve the name.) I'm wondering what you specifically have in mind, though. If, for example, cognitive decline followed the same progression it currently does, but a constant factor more slowly, how would that affect the calculations?

Well, if ageing were slowed proportionally, and the world were roughly unchanged from its present condition, I'd expect large utility gains (in total subjective quality of life) from prioritizing longer lives, with diminishing returns setting in only at ages in the late 100s or possibly the 1000s. But I think both assumptions are extremely unlikely.