Do you think immortality is technically possible for human beings?
I don't think immortality is technically possible -- evolution has installed many, many mechanisms to ensure that organisms die and make room for the next generation. I bet it is going to be very hard to completely overcome all these mechanisms.
This seems to me, at first blush, to exhibit the Evolution of Species Fairy fallacy. Evolution doesn't work to benefit species, populations, or the "next generation". If a mutation arises that increases longevity, and has no other downsides, then animals with that mutation should become more common in the gene pool, because they die less often. I remember reading that the effect would not be very strong, because most animals don't die of old age. But why would there be the opposite effect?
I am loath to attribute a very basic error to a distinguished professor of biology. Is there another explanation? Is the claim that evolution selects for mortality true?
Note: Eric went on to add:
I'm also not convinced immortality is such a good idea. A lot of human progress depends on having a new generation with new ideas. Immortality may equal stagnation.
This seems to be blatant rationalization of a preconceived idea that death is good. (I doubt he truly believes that extra progress is worth everybody dying.) So perhaps his first statement is also a form of rationalization. But it seems improbable to me that he would make such a statement about biology if he didn't think it well-founded. More likely there's something I'm misunderstanding.
ETA: one of the first Google results is this page at nature.com, The Evolution of Aging by Daniel Fabian, which goes into some depth on the subject. The bottom line is that it agrees with my expectation that evolution does not select for mortality. Choice quotes:
The Roman poet and philosopher Lucretius, for example, argued in his De Rerum Natura (On the Nature of Things) that aging and death are beneficial because they make room for the next generation (Bailey 1947), a view that persisted among biologists well into the 20th century. [...]
A more parsimonious evolutionary explanation for the existence of aging therefore requires an explanation that is based on individual fitness and selection, not on group selection. This was understood in the 1940's and 1950's by three evolutionary biologists, J.B.S. Haldane, Peter B. Medawar and George C. Williams, who realized that aging does not evolve for the "good of the species". Instead, they argued, aging evolves because natural selection becomes inefficient at maintaining function (and fitness) at old age. Their ideas were later mathematically formalized by William D. Hamilton and Brian Charlesworth in the 1960's and 1970's, and today they are empirically well supported. Below we review these major evolutionary insights and the empirical evidence for why we grow old and die.
How could a distinguished professor of biology, a leader of the HGP and advisor to the US President, get something so elementary wrong, when even a biology undergrad dropout like myself notices this seems wrong?
ETA #2: Gwern points to the Wikipedia article on Evolution of Ageing, which lists several competing theories of the evolution of aging (and therefore mortality). This shows the subject is more complex than I had thought and there may be good reason to believe mortality is selected for by evolution (or at least is reliably linked to something else that is selected).
I should be glad that I didn't discover an obvious error being committed by a distinguished professional, even if he may be ultimately wrong!
If the many worlds of the Many Worlds Interpretation of quantum mechanics are real, there's at least a good chance that Quantum Immortality is real as well: All conscious beings should expect to experience the next moment in at least one Everett branch even if they stop existing in all other branches, and the moment after that in at least one other branch, and so on forever.
However, the transition from life to death isn't usually a binary change. For most people it happens slowly, as the brain and the rest of the body deteriorate, often painfully.
Doesn't it follow that each of us should expect to keep living in this state of constant degradation and suffering for a very, very long time, perhaps forever?
I don't know much about quantum mechanics, so I don't have anything to contribute to this discussion. I'm just terrified, and I'd like, not to be reassured by well-meaning lies, but to know the truth. How likely is it that Quantum Torment is real?
What looks, at the moment, like the most feasible technology for granting us immortality (e.g., mind uploading, cryonics)?
I posed this question to a fellow transhumanist and he argued that cryonics is the answer, but I failed to grasp his explanation. Besides, I am still struggling to learn the basics of science and transhumanism, so it would be great if you could shed some light on my question.
Personal Note: I would like to thank Normal Anomaly for beta-ing this for me and providing counter-arguments. I am asking him/her to comment below, so that everyone can give him/her karma for volunteering and helping me out. Even if you dislike the article, I think it's awesome that they were willing to take time out of their day to help someone they've never met.
Imagine that you live in a world where everyone says "AI is a good idea. We need to pursue it."
But what if no one really thought that there was any reason to make sure the AI was friendly? That would be bad, right? You would probably think: "Hey, AI is a great goal and all, but before we start pursuing it and actually developing the technology, we need to make sure that it's not going to blow up in our faces!"
That seems to me to be a rational response.
Yet it seems like most people are not applying the same thought processes to life-extending technology. This website in particular has a habit of using some variant of this argument: "Death is bad. Not dying is good. Therefore life-extending technologies are also good." However, this is missing the same level of contemplation that has been given to AI. Like AI, there are considerations that must be made to ensure this technology is "friendly".
Most transhumanists have heard many of these issues before, normally sandwiched inside of a "Death is Bad" conversation. However these important considerations are often hand-waved away, as the conversation tends to stick to the low-hanging fruit. Here, I present them all in one place, so we can tackle them together, and perhaps come up with some solutions:
- Over-population: For example, doubling the life-span of humans would at the very least double the number of people on this planet. If we could double life-spans today, we would go from 7 billion to 14 billion people on Earth in 80 years, not counting regular population growth.
Although birthrates are currently falling, all the birthrate data we have comes from populations in which women are fertile for approximately 25 years. This has not changed much throughout history, so we cannot necessarily extrapolate the current birthrate to what it would be if women were fertile for 50 years instead.
In other words, not only would there be a population explosion due to people living longer, but I'd be willing to bet that if life-extension were available today, birth rates would also go up. Right now, people who like to have kids only have enough money and fertile years to raise on average 2-3 kids. If you doubled the time they had to reproduce, you would likely double the number of children that child-rearing families have.
For example, in modern society, by the time a woman's children are out of the house and done with college, the woman is no longer young and/or fertile. Say you had a child when you were 25. By the time your children were 20 you would be 45, and therefore not at a comfortable age to have children. However, if 45 became a young, fertile age for women, families might well decide to have another round of children.
It's one thing to say: "Well, we will develop technology to increase food yields and decrease fossil fuel consumption", but are you positive we will have those technologies ready to go in time to save us?
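The doubling claim above can be made concrete with a toy stationary-population model. The specific numbers below (births per year, lifespans) are hypothetical round figures chosen only for illustration, not real demographic data: if a constant number of people B are born each year and everyone lives exactly L years, the steady-state population is simply B × L, so doubling lifespan doubles population even with no change in birth rate.

```python
# Toy stationary-population model (illustrative sketch only; the birth
# count and lifespans are assumed round numbers, not real demographics).
# With constant annual births B and a fixed lifespan L, everyone alive
# at any moment was born in the last L years, so population = B * L.

def steady_state_population(births_per_year: int, lifespan_years: int) -> int:
    """Population once the system reaches steady state."""
    return births_per_year * lifespan_years

B = 100_000_000  # hypothetical global births per year

print(steady_state_population(B, 80))   # lifespan 80  -> 8,000,000,000
print(steady_state_population(B, 160))  # lifespan 160 -> 16,000,000,000
```

This is, of course, a lower bound on the post's scenario: it assumes birth rates stay flat, whereas the argument above suggests a longer fertile window would raise them further.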
- Social Stagnation: Have you ever tried having a long conversation with an elderly person, only to realize that they are bigots/homophobes/racists, etc.? We all love Grandpa John and Grammy Sue, but they have to die for society to move forward. If there were 180-year-olds alive today, chances are pretty strong that a good number of them would think that merely being anti-slavery is pretty progressive. They would have been about 90 years old when women got the right to vote.
We don't so much change our minds as grow new people while the old ones die.
- Life sucks, but at least you die: The world is populated with people suffering from mental disorders like depression, social issues like unemployment, and physical deprivations like poverty and hunger.
It doesn't make sense to extend life until we have made our lives worth extending.
- Unknown Implications: How will this change the way society works? How will it change how people live their lives? We can make some educated guesses, but we won't know for sure what far-reaching effects this would have.
I have a friend who is a professional magician and "psychic", and about a month ago I convinced him to read HPMoR. After cursing me for ruining his sleep schedule for two days, we ended up having a discussion about some of the philosophies in there that we agreed and disagreed with. I was brand-new to LW. He had no prior knowledge of "rationality", but like most of his profession was very analytically minded. I would like to share something he wrote:
We have a lot of ancient wisdom telling us that wishes are bad because we aren't wise, and you're saying... that if we could make ourselves wise, then we can have wishes and not have it blow up in our faces.
See the shortest version of Aladdin's Tale:
Wish One: "I wish to be wise."
Since... I am NOT mature, fully rational, and wise,
I really think I shouldn't have wishes,
Of which, immortality is an obvious specific example.
Because I'm just not convinced
That I can predict the fallout.
I call this "The CEV of Immortality", although at the time, neither of us had heard of the concept of CEV in the first place. The basic idea being that we are not currently prepared enough to even be experimenting with life-extending technologies. We don't know where it will lead and how we will cope.
However scientists are working on these technologies right now, discovering genes that cause proteins that can be blocked to greatly increase life-spans of worms, mice and flies. Should a breakthrough discovery be made, who knows what will happen? Once it's developed there's no going back. If the technology exists, people will stop at nothing to use it. You won't be able to control it.
Just like AI, life-extending technologies are not inherently "bad". But supporting the development of life-extending technologies without already having answered the above questions is like supporting the development of AI without knowing how to make it friendly. Once it's out of the box, it's too late.
(Provided by Normal Anomaly)
Overpopulation Counter-argument: Birth rates are currently going down, and have fallen below replacement in much of the developed world (including the US). According to an article in The Economist last year, population will peak at about 10-11 billion around 2050. This UN infographic appears to predict that fewer people will be born in 2020-2050 than were born in 1980-2010. I am skeptical that birth rates will increase with life extension. Space colonization is another way of coping with more people (again on a longer timescale than 40 years). Finally, life extension will probably become available slowly, at first only a few extra years and only for the wealthy. This last also applies to "unknown implications."
Social Stagnation Counter-argument: This leads to a slippery slope argument for killing elderly people; it's very unlikely that our current lifespans sit at exactly the right tradeoff between social progress and life. Banning elderly people from voting or holding office would be more humane for the same results.

"Life sucks" Counter-argument: This is only an argument for working on making life worth extending, or possibly an argument for life extension not having the best marginal return in world-improvement. Also, nobody who doesn't want to live longer would have to, so life extension technology wouldn't result in immortal depressed people.
These counter-arguments make very good points, but I do not think they are enough to guarantee a 100% "Friendly" transhumanism. I would love to see some discussion of them.
Like last time I posted, I am making some "root" comments. They are: General comments, Over-population, Social stagnation, Life sucks, Unknown consequences. Please put your comment under the root it belongs to, in order to help keep the threads organized. Thank you!
Let it be noted, as an aside, that this is my first post on Less Wrong and my first attempt at original, non-mandatory writing for over a year.
I've been reading through the original sequences over the last few months as part of an attempt to get my mind into working order. (Other parts of this attempt include participating in Intro to AI and keeping a notebook.) The realization that spurred me to attempt this: I don't feel that living is good. The distinction which seemed terribly important to me at the time was that I didn't feel that death was bad, which is clearly not sensible. I don't have the resources to feel the pain of one death 155,000 times every day, which is why Torture v. Dust Specks is a nonsensical question to me and why I don't have a cached response for how to act on the knowledge of all those deaths.
The first time I read Torture v. Dust Specks, I started really thinking about why I bother trying to be rational. What's the point, if I still have to make nonsensical, kitschy statements like "Well, my brain thinks X but my heart feels Y," if I would not reflexively flip the switch and may even choose not to, and if I sometimes feel that a viable solution to overpopulation is more deaths?
I solved the lattermost with extraterrestrial settlement, but it's still, well, sketchy. My mind is clearly full of some pretty creepy thoughts, and rationality doesn't seem to be helping. I think about having that feeling and go eeugh, but the feelings are still there. So I pose the question: what does a person do to click that death is really, really bad?
The primary arguments I've heard for death are:
- "I look forward to the experience of shutting down and fading away," a belief I hope could easily be dispelled by learning how truly undignified dying actually is, bloody romanticists.
- "There is something better after life and I'm excited for it," which, well... let me rephrase: please do not turn this into a discussion on ways to disillusion theists because it's really been talked about before.
- "It is Against Nature/God's Will/The Force to live forever. Nature/God/the Force is going to get humankind if we try for immortality. I like my liver!" This argument is so closely related to the previous and the next one that I don't know quite how to respond to it, other than that I've seen it crop up in historical accounts of any big change. Human beings tend to be really frightened of change, especially change which isn't believed to be supernatural in origin.
- "I've read science fiction stories about being immortal, and in those stories immortality gets really boring, really fast. I'm not interested enough in reality to be in it forever." I can't see where this perspective could come from other than mind-numbing ignorance/the unimaginable nature of really big things (like the number of languages on Earth, the amount of things we still don't know about physics or the fact that every person who is or ever will be is a new, interesting being to interact with.)
- "I can't imagine being immortal. My idea about how my life will go is that I will watch my children grow old, but I will die before they do. My mind/human minds aren't meant to exist for longer than one generation." This fails to account for human minds being very, very flexible. The human mind as we know it now does eventually get tired of life (or at least tired of pain,) but this is not a testament to how minds are, any more than humans becoming distressed when they don't eat is a testament to it being natural to starve, become despondent and die.
- "The world is overpopulated and if nobody dies, we will overrun and ultimately ruin the planet." First of all: I, like Dr. Ian Malcolm, think that it is incredibly vain to believe that man can destroy the Earth. Second of all: in the future we may have anything from extraterrestrial habitation to substrates which take up space and consume material in totally different ways. But! Clearly, I am not feeling these arguments, because this argument makes sense to me. Problematic!
I think that overall, the fear most people have about signing up for cryonics/AI/living forever is that they do not understand it. This is probably true for me; it's probably why I don't grok that life is good, always. Moreover, it is probable that the depictions of death as not-always-bad with which I sympathize (e.g. "Lord, what can the harvest hope for, if not for the care of the Reaper Man?") stem from the previously absolute nature of death. That is, up until the last ~30 years, people have not been having cogent, non-hypothetical thoughts about how it might be possible to not die, or what that might be like. Dying has always been a Big Bad, but an inescapable one, and the human race has a bad case of Stockholm Syndrome.
So: now that I know what I have and what I want, how do I use the former to get the latter?
(Apologies to RSS users: apparently there's no draft button, but only "publish" and "publish-and-go-back-to-the-edit-screen", misleadingly labeled.)
You have a button. If you press it, a happy, fulfilled person will be created in a sealed box, and then be painlessly garbage-collected fifteen minutes later. If asked, they would say that they're glad to have existed in spite of their mortality. Because they're sealed in a box, they will leave behind no bereaved friends or family. In short, this takes place in Magic Thought Experiment Land where externalities don't exist. Your choice is between creating a fifteen-minute-long happy life or not.
Do you push the button?
I suspect Eliezer would not, because it would increase the death-count of the universe by one. I would, because it would increase the life-count of the universe by fifteen minutes.
Actually, that's an oversimplification of my position. I actually believe that the important part of any algorithm is its output, additional copies matter not at all, the net utility of the existence of a group of entities-whose-existence-constitutes-utility is equal to the maximum of the individual utilities, and the (terminal) utility of the existence of a particular computation is bounded below at zero. I would submit a large number of copies of myself to slavery and/or torture to gain moderate benefits to my primary copy.
(What happens to the last copy of me, of course, does affect the question of "what computation occurs or not". I would subject N out of N+1 copies of myself to torture, but not N out of N. Also, I would hesitate to torture copies of other people, on the grounds that there's a conflict of interest and I can't trust myself to reason honestly. I might feel differently after I'd been using my own fork-slaves for a while.)
So the real value of pushing the button would be my warm fuzzies, which breaks the no-externalities assumption, so I'm indifferent.
But nevertheless, even knowing about the heat death of the universe, knowing that anyone born must inevitably die, I do not consider it immoral to create a person, even if we assume all else equal.
Let's locally define “VI” as “whatever you want to preserve by the means of personal immortality” (“means” such as anti-aging, cryonics, mind uploading, etc.)
Question is: how do you define your VI physically, in a way that makes physical sense?
* Note: Please avoid using the bare term “identity” unless you can define it non-vaguely (and even then it's better to apply some different identifier.)
* Edit: If you cannot (quite expectedly) give a precise answer, please at least point to the direction where, you think, it might be (i.e. way of finding and verifying that answer).
...according to this front-page Reddit headline I just saw, which links to this Guardian article. I wonder if he's heard of KrioRus, whether he's signed up (Wikipedia says they offer services "to clients from Russia, CIS and EU"), and what his odds would be if he were (would it be possible to emigrate to Russia to be closer to the facility, and if not, what would be the best possible option?). Given his being a head of state, presumably it'd be pretty tough for an advocate to even get close enough to try to make the case.
Searching the Reddit comment thread for "cryo" turned up nothing.
I had an incredibly frustrating conversation this morning trying to explain the idea of quantum immortality to someone whose understanding of MWI begins and ends at pop sci fi movies. I think I've identified the main issue that I wasn't covering in enough depth (continuity of identity between near-identical realities) but I was wondering whether anyone has ever faced this problem before, and whether anyone has (or knows where to find) a canned 5 minute explanation of it.