Thrasymachus comments on Open Thread, February 15-29, 2012 - Less Wrong
Hello there, I'm the guy who wrote the stuff you linked to.
I think it might be worth noting the Rawlsian issue too. If we pretend life is in finite supply with efficient distribution between persons, then something like "if I extend my life to 10n years, then 9 other people who would have lived n years like me will not" will be true. The problem is this violates norms about what a just outcome is. If I put you and nine others behind a veil of ignorance and offered you 'everyone gets 80 years' versus 'one of you gets 800, whilst the rest of you get nothing', I think basically everyone would go for everyone getting 80. One consequence of that would seem to be expecting whoever 'comes first' in the existence lottery to refrain from life extension to allow subsequent persons to 'have their go'.
If you don't buy that future persons are objects of moral concern, then the foregoing won't apply. But I think there are good reasons to treat them as objects of full moral concern (including a 'right'/'interest' in being alive in the first place). It seems weird (given the B-theory of time) that temporally remote people count for less, even though we don't think spatial distance is morally salient. Further, we generally intuit that building a delayed doomsday machine that euthanizes all intelligent life painlessly in a few hundred years would be a very bad thing to do.
If you dislike justice (or future persons), there's a plausible aggregate-only argument (which bears a resemblance to Singer's work). Most things show diminishing marginal returns, and plausibly lifespan will too, at least after the investment period: 20-40 is worth more than 40-60, etc. If that's true, and lifespan is in finite supply, then we might get more utility by having many smaller lives rather than fewer longer ones suffering diminishing returns. The optimum becomes a tradeoff between minimizing the 'decay' of diminishing returns and the cost sunk into developing a human being through childhood and adolescence. The optimal lifespan might be longer or shorter than three score and ten, but is unlikely to be really big.
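The tradeoff above can be made concrete with a toy model. This is a minimal sketch with invented numbers (the budget, decay rate, and development cost are all assumptions, not claims from the argument): a fixed total supply of life-years, per-year utility that decays with age, and a childhood "investment" that yields no utility on its own.

```python
# Toy model of the aggregate-only argument: all figures illustrative.
BUDGET = 8000          # total life-years to allocate (hypothetical)
DEV_YEARS = 20         # childhood/adolescence "investment" years (assumed)
DECAY = 0.99           # per-year diminishing-returns factor (assumed)

def life_utility(span):
    """Utility of one life: post-development years, each worth slightly less."""
    return sum(DECAY ** t for t in range(DEV_YEARS, span))

def total_utility(span):
    """Spread the fixed budget across as many lives of this length as fit."""
    return (BUDGET / span) * life_utility(span)

# Search for the lifespan that maximizes aggregate utility.
best = max(range(DEV_YEARS + 1, 1000), key=total_utility)
print(best)  # the optimum sits well above DEV_YEARS but far below 1000
```

Under these assumed parameters the optimum lands in the rough vicinity of a current human lifespan, which illustrates the point that the optimal length is a genuine tradeoff rather than "as long as possible".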
Obviously, there are huge issues over population ethics and the status of future persons, as well as finer grained stuff re. justice across hypothetical individuals. Sadly, I don't have time to elaborate on this stuff before summertime. Happily, I am working on this sort of stuff for an elective in Oxford, so hopefully I'll have something better developed by then!
You lose me the moment you introduce the moral premise. Why is it better for two people to each live a million years than one to live two million? This looks superficially the same sort of question as "Why is it better for two people to each have a million dollars than for one to have two million?", but in the latter scenario, one person has two million while the other has nothing. In the lifetimes case, there is no other person. The moral premise presupposes that nonexistent people deserve some of other people's existence in the same way that existing paupers deserve some of other people's wealth.
You may have an argument to that effect, but I didn't see it in my speed-run through your slides (nice graphic style, BTW, how do you do that?) or in your comment above. Your argument that we place value on future people only considers our desire to avoid calamities falling upon existent future people.
Diminishing returns for longer lifespans is only a problem to be tackled if it happens. The only diminishing returns I see around me for the lifespans we have result from decline in health, not excess of experience.
The nifty program is Prezi.
I didn't particularly fill in the valuing-future-persons argument - in my defence, it is a fairly common view in the literature not to discount future persons, so I just assumed it. If I wanted to provide reasons, I'd point to future calamities (which only seem plausibly really bad if future people have interests or value - although that needn't be on a par with ours), reciprocity across time (in the same way we would want people in the past to weigh our interests equal to theirs when applicable, the same applies to us and our successors), and a similar sort of Rawlsian argument: if we didn't know whether we would live now or in the future, the sort of deal we would strike would be for those currently living (whoever they are) to weigh future interests equal to their own. Elaboration pending one day, I hope!
I find this argument incoherent, as I reject the idea of a person at the age of 1 being the same person as they are at the age of 800 - or for that matter, the idea of a person at the age of 400 being the same person as they are at the age of 401. In fact, I reject the idea of personal continuity in the first place, at least when looking at "fairness" at such an abstract level. I am not the same person as I was a minute ago, and indeed there are no persons at all, only experience-moments. Therefore there's no inherent difference in whether someone lives 800 years or ten people live 80 years. Both have 800 years' worth of experience-moments.
I do recognize that "fairness" is still a useful abstraction on a societal level, as humans will experience feelings of resentment towards conditions which they perceive as unfair, as unequal outcomes are often associated with lower overall utility, and so forth. But even then, "fairness" is still just a theoretical fiction that's useful for maximizing utility, not something that would have actual moral relevance by itself.
As for the diminishing marginal returns argument, it seems inapplicable. If we're talking about the utility of a life (or a life-year), then the relevant variable would probably be something like happiness, but research on the topic has found age to be unrelated to happiness (see e.g. here), so each year seems to produce roughly the same amount of utility. Thus the marginal returns do not diminish.
Actually, that's only true if we ignore the resources needed to support a person. Childhood and old age are the two periods where people don't manage on their own, and need to be cared for by others. Thus, on a (utility)/(resources invested) basis, childhood and old age produce lower returns. Now life extension would eliminate age-related decline in health, so old people would cease to require more resources. And if people had fewer children, we'd need to invest fewer resources on them as well. So with life extension the marginal returns would be higher than with no life extension. Not only would the average life-year be as good as in the case with no life extension, we could support a larger population, so there would be many more life-years.
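The utility-per-resource argument above can be sketched numerically. All the figures below are invented for illustration (phase lengths, per-year utilities, and care costs are assumptions, not data): childhood and age-related decline both require extra care, and life extension removes the decline phase while spreading the childhood investment over a much longer life.

```python
# Toy comparison of average utility per unit of resources,
# with and without life extension. All numbers are illustrative.

def avg_utility_per_resource(childhood, healthy, decline):
    """Each phase is a (years, utility_per_year, resources_per_year) tuple."""
    phases = [childhood, healthy, decline]
    utility = sum(years * u for years, u, _ in phases)
    resources = sum(years * r for years, _, r in phases)
    return utility / resources

# No life extension: 20y childhood, 45y healthy adulthood, 15y decline.
baseline = avg_utility_per_resource((20, 0.5, 2.0), (45, 1.0, 1.0), (15, 0.5, 2.0))

# Life extension: same childhood, 780y healthy adulthood, no decline phase.
extended = avg_utility_per_resource((20, 0.5, 2.0), (780, 1.0, 1.0), (0, 0.5, 2.0))

# The average life-year yields more utility per resource with extension.
assert extended > baseline
```

Whatever the real numbers, the structure of the comparison is the same: removing the low-return decline phase and amortizing the childhood investment over more healthy years can only raise the average.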
One could also make the argument that even if life extension wouldn't reduce the average amount of resources we'd need to support a person, it would still lead to increased population growth. Population growth is currently declining all over the world. Developed countries will be the first ones to have their populations drastically reduced (Japan's population began to decrease in 2005), but current projections seem to estimate that the developing world will follow eventually. Sans life extension, the future could easily be one of small populations and small families. With life extension, the future could still be one of small families, but it could be one of much larger populations, as population growth would continue regardless. Instead of a planetary population of one billion people living to 80 each, we might have a planetary population of one hundred billion people living to 800 each. That would be no worse than no life extension on the fairness criterion, and much better on the experience-moments criterion.
Hello Kaj,
If you reject both continuity of identity and prioritarianism, then there isn't much left for an argument to appeal to besides aggregate concerns, which lead to a host of empirical questions you outline.
However, if you think you should maximize expected value under normative uncertainty (and you aren't absolutely certain aggregate util or consequentialism is the only thing that matters), then there might be motive to revise your beliefs. If the aggregate concerns 'either way' turn out to be a wash between an immortal society and a 'healthy aging but die' society, then the justice/prioritarian concerns I point to might 'tip the balance' in favour of the latter, even if you aren't convinced it is the right theory. What I'd hope to show is that something like prioritarianism at the margin or aggregate indifference (i.e. prefer 10 utils each for 10 people instead of 100 for 1 and 0 for the other 9) is all that is needed to buy the argument.
True, and I probably worded my opening paragraph in an unnecessarily aggressive way, given that premises such as accepting or rejecting continuity aren't really right or wrong as such. My apologies for that.
If there did exist a choice between two scenarios where the only difference related to your concerns, then I do find it conceivable - though maybe unlikely - that those concerns would tip the balance. But I wouldn't expect such a tight balance to manifest itself in any real-world scenarios. (Of course, one could argue that theoretical ethics shouldn't concern itself too much with worrying about its real-world relevance in the first place. :)
I'd still be curious to hear your opinion about the empirical points I mentioned, though.
I'm not sure what to think about the empirical points.
If there is continuity of personal identity, then we can say that people 'accrue' life, and so there are plausibly diminishing returns. If we dismiss that and talk of experience-moments, then a diminishing-returns argument would have to say something like "experience-moments in 'older' lives are not as good as those in younger ones". Like you, I can't see any particularly good support for this (although I wouldn't be hugely surprised if it were so). However, we can again play the normative uncertainty card and say our expected degree of diminishing returns is attenuated by a factor of P(continuity of identity).
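The normative-uncertainty move can be sketched as follows. This is a minimal illustration with assumed numbers (the decay rate and credence are hypothetical): weight the diminishing-returns discount by one's credence that personal identity actually persists, and leave later years undiscounted otherwise.

```python
# Expected value of a life-year under uncertainty about personal identity.
# The decay rate and credences are illustrative assumptions.

def expected_year_value(year, p_continuity, decay=0.99):
    """Mix the two views by credence in continuity of identity."""
    with_identity = decay ** year      # diminishing returns if identity holds
    without_identity = 1.0             # experience-moments view: no discounting
    return p_continuity * with_identity + (1 - p_continuity) * without_identity

# At 50% credence in continuity, year 100 is still worth well over half a
# "fresh" year, rather than the roughly 0.37 that full diminishing returns
# (decay ** 100) would give.
print(expected_year_value(100, 0.5))
```

So the lower one's credence in continuity of identity, the weaker the expected diminishing-returns effect, exactly as the comment suggests.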
I agree there are 'investment costs' in childhood, and if there are only costs in play, then our aggregate maximizer will want to limit them, and extending lifetime is best. I don't think this cost is that massive, though, whether it's paid once per 80 years or once per 800. And if diminishing returns apply to age (see above), then it becomes a tradeoff.
Regardless, there are empirical situations where life extension is strictly win-win: for example, if we don't have loads of children, we never approach carrying capacity. I suspect this issue will be at most a near-term thing: our posthuman selves will presumably tile the universe optimally. There are a host of countervailing (and counter-countervailing) concerns in the nearer term. I'm not sure how to unpick them.
I'm not sure how this follows, even presuming continuity of personal identity.
If you were running a company, you might get diminishing returns in the number of workers if the extra workers would start to get in each other's way, or the amount of resources needed for administration increased at a faster-than-linear speed. Or if you were planting crops, you might get diminishing returns in the amount of fertilizer you used, since the plants simply could not use more than a certain amount of fertilizer effectively, and might even suffer from there being too much. But while there are various reasons for why you might get diminishing returns in different fields, I can't think of plausible reasons for why any such reason would apply to years of life. Extra years of life do not get in each other's way, and I'm not going to enjoy my 26th year of life less than my 20th simply because I've lived for a longer time.
I was thinking something along the lines that people will generally pick the very best things, ground projects, or whatever to do first, and so as they satisfy those they have to move on to not-quite-so-awesome things, and so on. So although years per se don't 'get in each other's way', how you spend them will.
Obviously lots of countervailing concerns too (maybe you get wiser as you age, so you can pick even more enjoyable things, etc.)
That sounds more like diminishing marginal utility than diminishing returns. (E.g. money has diminishing marginal utility because we tend to spend money first on the things that are the most important for us.)
Your hypothesis seems to be implying that humans engage in activities that are essentially "used up" afterwards - once a person has had an awesome time writing a book, they need to move on to something else the next year. This does not seem right: rather, they're more likely to keep writing books. It's true that it will eventually get harder and harder to find even more enjoyable activities, simply because there's an upper limit to how enjoyable an activity can be. But this doesn't lead to diminishing marginal utility: it only means that the marginal utility of life-years stops increasing.
For example, suppose that somebody's 20. At this age they might not know themselves very well, doing some random things that only give them 10 hedons' worth of pleasure a year. At age 30, they've figured out that they actually dislike programming but love gardening. They spend all of their available time gardening, so they get 20 hedons' worth of pleasure a year. At age 40 they've also figured out that it's fun to ride hot air balloons and watch their gardens from the sky, and the combination of these two activities lets them enjoy 30 hedons' worth of pleasure a year. After that, things basically can't get any better, so they'll keep generating 30 hedons a year for the rest of their lives. There's no point at which simply becoming older will deprive them of the enjoyable things that they do, unless of course there is no life extension available, in which case they will eventually lose their ability to do the things that they love. But other than that, there will never be diminishing marginal utility.
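The hedon example above can be written out directly (the figures are taken straight from the comment; the cutoff ages are the ones given there): per-year utility rises as the person learns what they enjoy, then plateaus, so marginal utility stops increasing but never diminishes.

```python
# The comment's worked example: hedons per year as a function of age.

def hedons_per_year(age):
    if age < 30:
        return 10   # random activities, not knowing oneself well
    elif age < 40:
        return 20   # discovered gardening
    else:
        return 30   # gardening plus hot air balloons, indefinitely

marginal = [hedons_per_year(a) for a in range(20, 120)]
# Each year is at least as good as the one before: no diminishing returns,
# only a plateau once the best combination of activities is found.
assert all(later >= earlier for earlier, later in zip(marginal, marginal[1:]))
```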
Of course, the above example is a gross oversimplification, since often our ability to do enjoyable things is affected by circumstances beyond our control, and it is likely to go up and down over time. But these effects are effectively random and thus uncorrelated with age, so I'm ignoring them. In any case, for there to be diminishing marginal utility for years of life, people would have to lose the ability to do the things that they enjoy. Currently they only lose it due to age-related decline.
I would also note that your argument for why people would have diminishing marginal utility in years of life doesn't actually seem to depend on whether or not we presume continuity of personal identity. Nor does my response depend on it. (The person at age 30 may be a different person than the one at age 20, but she has still learned from the experiences of her "predecessors".)
If you are arguing that we should let people die and then replace them with new people due to the (strictly hypothetical) diminishing utility they get from longer lives, you should note that this argument could also be used to justify killing and replacing handicapped people. I doubt you intended it that way, but that's how it works out.
To make it more explicit, in a utilitarian calculation there is no important difference between a person whose utility is 5 because they only experienced 5 utility worth of good things, and someone whose utility is 5 because they experienced 10 utility of good things and -5 utility worth of bad things. So a person with a handicap that makes their life difficult would likely rank about the same as a person who is a little bored because they've done the best things already.
You could try to elevate the handicapped person's utility to normal levels instead of killing them. But that would use a lot of resources. The most cost-effective way to generate utility would be to kill them and conceive a new able person to replace them.
And to make things clear, I'm not talking about aborting a fetus that might turn out handicapped, or using gene therapy to avoid having handicapped children. I'm talking about killing a handicapped person who is mentally developed enough to have desires, feelings, and future-directed preferences, and then using the resources that would have gone to support them to conceive a new, more able replacement.
This is obviously the wrong thing to do. Contemplating this has made me realize that "maximize total utility" is a limited rule that only works in "special cases" where the population is unchanging and entities do not differ vastly in their ability to convert resources into utility. Accurate population ethics likely requires some far more complex rules.
Morality should mean caring about people. If your ethics has you constantly hoping you can find a way to kill existing people and replace them with happier ones you've gone wrong somewhere. And yes, depriving someone of life-extension counts as killing them.
Why should morality mean caring about the people who exist now, rather than caring about the people who will exist in a year?
Obviously it's morally good to care about people who will exist in a year. The "replacements" that I am discussing are not people who will exist. They are people who will exist if and only if someone else is killed and they are created to replace them.
Now, I think a typical counterargument to the point I just made is to argue that, due to the butterfly effect, any policy made to benefit future people will result in different sperm hitting different ova, so the people who benefit from these policies will be different from the people who would have suffered from the lack of them. From this the counterarguer claims that it is acceptable to replace people with other people who will lead better lives.
I don't think this argument holds up. Future people do not yet have any preferences, since they don't exist yet. So it makes sense to, when considering how to best benefit future people, take actions that benefit future people the most, regardless of who those people end up being. Currently existing people, by contrast, already have preferences. They already want to live. You do them a great harm by killing and replacing them. Since a future person does not have preferences yet, you are not harming them if you make a choice that will result in a different future person who has a better life being born instead.
I appreciated the level of thought you put into the argument, even though it does not actually convince me to oppose life extension. Thank you for writing (and prezi-ing) it, I look forward to more.
Basically, the hidden difference is this: if you put me and 9 others behind a veil of ignorance and ask us to decide whether we each get 80 years or one of us gets 800, you have 10 people present, competing and trying to avoid being "killed"; whereas the choice between creating one 800-year-old versus ten 80-year-olds is conducted without an actual threat being posed to anyone.
While you can establish that the 10 people would anticipate with fear (and hence generate disutility) the prospect of being destroyed or prevented from living, that's not the same as establishing that 9 completely nonexistent people would generate the same disutility even if they never started to exist.
I don't think the thought experiment hinges on any of this. Suppose you were on your own and Omega offered you certainty of 80 years versus 1/10 of 800 and 9/10 of nothing. I'm pretty sure most folks would play safe.
The addition of people makes it clearer whether (granting the rest) a society of future people would want to agree that those who 'live first' should refrain from life extension and let the others 'have their go'.
Loss aversion is another thing altogether: if most people choose 80 sure years instead of a 1/10 chance of 800 years, it doesn't necessarily prove that the latter is actually less valuable.
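The point can be made precise: both options have the same expected number of years, so only a risk-averse (concave) utility function favours the certainty. A minimal sketch, where the square-root utility is an illustrative assumption rather than anyone's actual preference:

```python
# Expected utility of the Omega gamble under two utility functions.
# The concave (square-root) utility is an assumed example of risk aversion.

def expected_utility(outcomes, utility):
    """outcomes: list of (probability, years) pairs."""
    return sum(p * utility(years) for p, years in outcomes)

sure_80 = [(1.0, 80)]
gamble_800 = [(0.1, 800), (0.9, 0)]

linear = lambda years: years
concave = lambda years: years ** 0.5

# Same expected number of years either way...
ev_sure = expected_utility(sure_80, linear)
ev_gamble = expected_utility(gamble_800, linear)
assert abs(ev_sure - ev_gamble) < 1e-9

# ...but a risk-averse agent strictly prefers the certain 80 years.
assert expected_utility(sure_80, concave) > expected_utility(gamble_800, concave)
```

So the popular choice of the sure 80 years is consistent with the gamble being equally valuable in aggregate terms; it tells us about risk attitudes, not about the value of long lives.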
Suppose Omega offers to copy you and let you live out 10 lives simultaneously (or one after another, restoring from the same checkpoint each time) on the condition that each instance dies and is irrecoverably deleted after 80 years. Is that worth more than spending 800 years alive all in one go?
Plausibly, depending on your view of personal identity, yes.
I won't be identical to my copies, and so I think I'd deploy the same sorts of arguments I have so far - copies are potential people, and behind a veil of ignorance over whether I'd be a copy or the genuine article, the collection of people would want to mutually agree that the genuine article picks the former option in Omega's gamble.
(Aside: loss/risk aversion is generally not taken to be altogether different from justice. I mean, the veil-of-ignorance heuristic specifies a risk-averse agent, and the difference principle seems to be loss averse.)