Near Concerns About Life Extension

-9 Bart119 08 June 2012 07:12PM

There's near thinking and far thinking. While LWers debate far questions, near questions remain. To take a few examples, we in the US still spend large sums on special interests through subsidies and tax breaks, and jockeying for partisan advantage makes pursuit of sound policies very difficult.

Cryonic revival, FAIs, UFAIs colonizing other star systems -- they all seem pretty far out in the future to me and a lot of other people. So a tiny band of LWers and like-minded people work on what they have a passion for, which is as it should be.

But there is one area where progress really seems feasible within a few years: radical life extension. The right combination of drugs affecting gene expression just might do it.

Eliezer, in The Moral Void and elsewhere, makes clear a passionate, if not fanatical, commitment to longer life, and this seems to be the common LW view. Yet surely economics must come into play. If we can give a 60-year-old another day of high-quality life for $10, no one would question that we should. But suppose the cost of each extra day doubles. At around 20 days the cost is on the order of $10,000,000 a day. Although we might try to couch it in kinder terms, at some point we end up saying to this person, "Sorry, you die today because we can't afford what it costs to keep you alive until tomorrow." Harsh, but inevitable.
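To make the doubling arithmetic concrete, here is a minimal sketch (the $10 base and the daily doubling are the hypothetical numbers from the paragraph above, not real figures):

```python
# Hypothetical doubling-cost example: the first extra day of life
# costs $10, and each further day costs twice the one before it.
def daily_cost(day, base=10.0):
    """Dollar cost of the day-th extra day of life."""
    return base * 2 ** (day - 1)

for day in (1, 10, 20, 21):
    print(f"day {day:2d}: ${daily_cost(day):,.0f}")
# Day 20 already costs about $5.2 million; day 21, about $10.5 million.
```

The exact crossover point doesn't matter; the point is that under any exponential cost curve, "one more day" becomes unaffordable within a few weeks.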

I assume exponential costs get through to thinkers even in far mode. I see economics downplayed throughout LW thinking, on the assumption that radical improvements are possible. They might happen in the longer term, but not within the course of a few years. In the next few years, mere linear increases in costs are very relevant.

The LW thinking on life extension assumes that (1) it is vigorous, healthy life we will extend, not decrepit, depressed old age, and (2) it will be affordable to all. When thinking to a far future, it's easy to assume such conditions will be met. But when a technology is right in front of us, timing issues can be extremely important.

It is a good guess that if life-extension technology comes to the market, the demand will be intense and immediate. It's a good guess that it will be expensive (the cheaper it is, the less drug companies will be motivated to develop it). And it's also a good guess that the first people to sign up in the face of risks and side effects will be the old, who have little to lose.

We face the prospect of sucking up larger and larger portions of our economy extending the lives of decrepit, demoralized old people. (We already face this trend, but the life-extension technologies that are in the offing would make it much worse).

From a distance, thinkers will see an unsustainable pattern. Yet faced with the prospect of imminent death, these old people and their loved ones will demand the therapies. They would be upset to think that the drugs are in the closet right down the hall but they aren't eligible to get them. They'd be less upset to know that the drugs are not approved yet, and still less upset to know that there is no clear evidence that they work in humans, and so on.

My plea is to keep life-extension therapies far from the market until all the conditions are in place: the problem of cost must be solved, and the path must be clear to extending healthy life, not decrepitude. This should include an enthusiastic, positive attitude towards life instead of weariness and depression.

There are other reasons why we should be wary of such new technologies, but these short-term, practical ones seem like a clear case that is largely independent of one's utility function.

Comment author: Bart119 08 June 2012 04:21:14PM 1 point [-]

I'm trying to find out how short-term or long-term your thinking is. Moving to Mars seems very fragile, depending on constant input from planet earth. The challenges of moving to another star system where you could have a self-sustaining life are immense. Neither option is available in your lifetime. I think quantitative estimates of the survival of sapience on earth are pretty much useless -- the uncertainties of individual estimates are way too high. As a young man in 1981 I debated moving from the US to Australia as a hedge against nuclear war, a much more modest proposition. I decided not to, partly because I could be a more effective activist against nuclear war in a superpower that was my native culture. So if you go down this path, you could think of your utility in terms of preventing the destruction of sapience on earth.

Comment author: buybuydandavis 03 June 2012 12:31:28AM 0 points [-]

Or maybe the frozen people won't have let their opinions slip as the winds shifted. They'll see the theocratic takeover as a fait accompli, won't be on the record as opposing it, and so will be able to declare their allegiance to the Great Ju Ju and avoid torture altogether.

Or maybe, being from the past will confer a special honor and status with the Great Ju Ju, so that it will be extra wonderful to be a thawed human popsicle.

We can play outlandish maybes till the cows come home. Averaging over my probabilities for all hypothetical futures, I'd rather be alive than not 500 years from now.

Too many arguments in the world take the form "but what if Horrible Scenario occurs?" If Horrible Scenario occurs, I'll be fucked. That's the answer. Can't deny it. But unless you have information to share that significantly increases my probabilities for Horrible Scenarios, merely identifying bad things that could happen is neither a productive nor a fun game.

Comment author: Bart119 03 June 2012 08:28:19PM 0 points [-]

The initial question was just meant to open the issue of future negatives, and having gotten some feedback on how the issue had been discussed before, I gave the bulk of my thoughts in a reply to my initial post.

What I consider much more realistic possibilities (more realistic than benign, enlightened resurrection) are being revived with little regard to brain damage and to serve the needs of an elite. I laid it out in my other response in this thread (I don't know how to link to a particular comment in a thread, but search for 'When I started this thread'.)

Comment author: othercriteria 02 June 2012 03:26:56PM 0 points [-]

Immortality is still possible. [...] You can still have hope, but it doesn't rest on spending large sums on freezing your brain.

How does this follow? Even your most powerful argument/worst-case scenario has immortality as its outcome, just not completely on your own terms. To what extent are we not "[serving] the ends of the elite" and "prevented from taking [our] own life if [we] found it miserable" even now?

Comment author: Bart119 02 June 2012 05:58:12PM 0 points [-]

Even your most powerful argument/worst-case scenario has immortality as its outcome

By "possible", I meant that we can imagine scenarios (however unlikely) where we will be immortal. Cryonics also relies on scenarios (admittedly not quite as unlikely) where we would at least have much longer lives, though not truly immortal. If being alive for a thousand years with serious brain damage still strikes you as much preferable to death, then I agree that my argument does not apply to you.

To what extent are we not "[serving] the ends of the elite" and "prevented from taking [our] own life if [we] found it miserable" even now?

In the US today, as a person of no particular import to the government, I feel I have considerable freedom to live as I want, and no one is going to stop me from killing myself if I choose. If on some construal I inevitably serve the elite today, I at least have a lot of freedom in how I do that. Revived people in a future world might be of enough interest that they could be supervised so carefully that personal choice would be severely limited and suicide would be impossible.

Comment author: Bart119 02 June 2012 03:01:59PM -2 points [-]

When I started this thread, I wasn't quite sure where it was going to end up. But here's what I see as the most powerful argument:

An enlightened, benign future society might revive you to let you live life to your full potential, for your sake -- when it is convenient for them. But a future society that has morality in line with some pretty good present ones (not the very best) might see you as a precious commodity to revive for the ends of the elite. An enlightened society would not revive you if you were going to be miserable with serious brain damage, but a less enlightened society would have few qualms about that. Even if revived intact, you would still serve the ends of the elite and might well be prevented from taking your own life if you found it miserable.

I judge the latter scenario much more likely than the former. If so, cryonic preservation's appeal would be much less -- it might even be something you would pay to get out of!

You who are cryonics enthusiasts who are also committed to the LW method should think about this. Maybe you will judge the probabilities of the future scenarios differently, but there are strong cognitive biases at work here against an accurate analysis.

Immortality is still possible. We might be subjects in an experiment, and when we croak our brains might be uploaded by the compassionate experimenters. Maybe the theists are right (there sure are a lot of them), and maybe those who preach universal salvation are right. You can still have hope, but it doesn't rest on spending large sums on freezing your brain.

Comment author: Synaptic 02 June 2012 02:44:48AM 0 points [-]

But are they less probable than the positive high-payoff scenarios (in just, happy societies that value freedom, comfort, and the pursuit of knowledge)? Evidence? Are you keeping in mind optimism bias?

Adele_L in a comment in this thread:

based on the general trend that societies with higher levels of technology tend to be better ones to live in

Comment author: Bart119 02 June 2012 03:54:33AM -1 points [-]

While I admit that a theocratic torturing society seems less likely to develop the technology to revive people, I'm not at all sure that an enlightened one is more likely to do so than the one I assumed as the basis of my other examples. A society could be enlightened in various ways and still not think it a priority to revive frozen people for their own sake. But a society could be much more strongly motivated if it was reviving a precious commodity for the selfish ends of an elite. This might also imply that they would be less concerned about the risk of things like brain damage that would interfere with the revivee's happiness but still allow them to be useful for the reviver's purposes.

Comment author: buybuydandavis 02 June 2012 12:21:38AM 0 points [-]

If there is a big enough change in the species, I can see you being revived for experimental purposes.

If a theocratic state wanted to torture the infidels, I don't see how being a human popsicle is going to make the situation any worse.

But that the dread of something after death,
The undiscover'd country, from whose bourn
No traveller returns, puzzles the will,
And makes us rather bear those ills we have
Than fly to others that we know not of?

Comment author: Bart119 02 June 2012 01:43:40AM 0 points [-]

The idea is that everyone who wasn't frozen got a chance to see it coming and convert, maybe two or three times as winds shifted?

Comment author: Bart119 01 June 2012 11:04:40PM 0 points [-]

Interesting. The referenced discussions often assume the post-singularity AI (which for the record I think very unlikely). The development of that technology is likely to be, if not exactly independent, only loosely correlated with the technology for cryonic revival, isn't it?

Certainly you have to allow for the possibility of cryonic revival without the post-singularity AI, and I think we can make better guesses about the possible configurations of those worlds than post-AI worlds.

I see the basic pro-cryonics argument as having the form of Pascal's wager. Although the probability of success might be on the low side (for the record, I think it is very low), the potential benefits are so great that it is worth it. The cost is paid in mere money. But is it?

In my main post I used the "torture by theocracy" example as an extreme, but I think there are many other cases to worry about.

Suppose that among a population of billions, there are a few hundred people who can be revived. The sort of society we all hope for might just revive them so they can go on to achieve their inherent potential as they see fit. But in societies that are just a bit more broken than our own, those with the power to cause revival may have self-interest very much in mind. You can imagine that the autonomy of those who are revived would be seriously constrained, and this by itself could make a post-revival life far from what people hope. The suicide option might be closed off to them entirely; if they came to regret their continued existence they might well be unable to end it.

Perhaps the resurrected will have to deal with the strange and upsetting limitations that today's brain damage patients face. Perhaps future society will be unable to find a way for revived people to overcome such problems, and yet keep them alive for hundreds of years -- they are just too valuable as experimental subjects.

Brain damage aside, what value will they have in a future society? They will have unique and direct knowledge of life in a bygone century, including its speech patterns and thought patterns. I think modern historians would be ecstatic at the prospect of being able to observe or interview pockets of people from various epochs in history, including ancient ones (ethical considerations aside).

Perhaps they will be valued as scientific subjects and carefully insulated from any contaminating knowledge of the future world as it has developed. That might be profoundly boring and frustrating.

Perhaps the revived will be confined in "living museums" where they face a thousand years re-enacting what life was like in 21st century America -- perhaps subject to coercion to do it in a way that pleases the masters.

If the revived people are set free, what then? Older people in every age typically shake their heads in dismay at changes in the world; this effect magnified manyfold might be profoundly unsettling -- downright depressing, in fact.

One can reasonably object that all of these are low-probability. But are they less probable than the positive high-payoff scenarios (in just, happy societies that value freedom, comfort, and the pursuit of knowledge)? Evidence? Are you keeping in mind optimism bias?

In deciding in favor of cryonic preservation, I don't think the decision can be near costs traded off against scenarios of far happiness. There's far misery to consider as well.
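The point can be put as a toy expected-value calculation. All the numbers below are hypothetical, chosen only to illustrate the shape of the argument: once negative far outcomes get weight comparable to the positive ones, the sign of the wager can flip.

```python
# Toy expected-value sketch of a Pascal's-wager-style case for
# cryonics. Hypothetical numbers throughout; the point is only
# that negative revival scenarios belong in the sum.
def expected_value(p_good, u_good, p_bad, u_bad, cost):
    """Expected utility of signing up, net of the near-term cost."""
    return p_good * u_good + p_bad * u_bad - cost

# Benefits-only framing: a small chance of a huge payoff dominates.
ev_optimistic = expected_value(p_good=0.01, u_good=1_000_000,
                               p_bad=0.0, u_bad=0, cost=500)

# Adding a comparable chance of a strongly negative revival
# flips the sign of the calculation.
ev_with_misery = expected_value(p_good=0.01, u_good=1_000_000,
                                p_bad=0.02, u_bad=-1_000_000, cost=500)
print(ev_optimistic > 0, ev_with_misery < 0)
```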

Far negatives of cryonics?

3 Bart119 01 June 2012 06:43PM

In considering the pros and cons of cryonics, has anyone addressed the possibility of being revived in an unpleasant future, for instance as a "torture the infidel" exhibit in a theme park of a theocratic state? I had some thoughts on the issue but figured I would see what else has been written previously.

 

In response to The Moral Void
Comment author: Bart119 31 May 2012 08:24:12PM 0 points [-]

Leon Kass (of the President's Council on Bioethics) is glad to murder people so long as it's "natural", for example. He wouldn't pull out a gun and shoot you, but he wants you to die of old age and he'd be happy to pass legislation to ensure it.

Does anyone have sources to support this conclusion about Kass's views? I tracked down a transcript of an interview he gave that was cited on a longevity website, but it doesn't support that characterization at all. He does express concerns about greatly increased lifespans, but makes clear that he sees both sides. He opposed regulation of aging research:

http://www.sagecrossroads.net/files/transcript13.pdf
