Social Stagnation Counter-argument: This leads to a slippery slope argument for killing elderly people
If we avoid life-extension, we're already killing elderly people. It's not a slippery slope. It's the same thing.
A minor point: I dislike the practice of setting up "root" comments in the way you have. It makes sorting comments by karma score all but useless.
Indirect implications (other than the actual life-saving) of biological life extension aren't going to be that significant in the long term, because we don't have that much time left in business-as-usual mode. Even rather conservatively, ignoring the whole intelligence-explosion thing, whole brain emulations (WBEs) are very likely to be implemented within 150 years at the latest, which is a change with much greater practical impact.
Right now, people who like to have kids only have enough money and fertile years to raise on average 2-3 kids.
Not true at all. People who really like children have many more children than that. The Amish are a clear example, with their average of 5 or so children.
Like last time I posted, I am making some "root" comments. They are: General comments, Over-population, Social stagnation, Life sucks, Unknown consequences. Please put your comment under the root it belongs to, in order to help keep the threads organized. Thank you!
Directory trees are so last century.
Contrary to lavalamp, in a society without aging I expect more murder, not less.
Inequality and ambition will become more and more socially problematic as lifespans extend. Imagine not just competing against Rockefeller's descendants, but against a 172-year-old John D. Rockefeller (still sharp as a tack), patriarch of a clan of 150. And 150 assumes that they just extend lifespans, not reproductive windows! Compound interest will become a serious issue, in wealth, intelligence, and time.
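To put rough numbers on the compounding worry, here is a minimal Python sketch. The 5% real return, the starting stake, and the two lifespans are arbitrary assumptions, chosen only to show how the gap scales with the horizon:

```python
# A 5% real return compounded over an 80-year life vs. a 172-year one.
# Rate, horizon, and starting stake are arbitrary illustrative choices.

principal = 1_000_000
rate = 0.05  # assumed annual real return

for years in (80, 172):
    final = principal * (1 + rate) ** years
    print(f"{years} years at {rate:.0%}: {final:,.0f}")

# ~50x over 80 years vs. ~4,400x over 172: the head start of old money
# grows super-linearly with lifespan.
```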
Generational warfare is limited, now, because most people are patient enough to wait for their parents and bosses to die or retire. When that is no longer an option, things will get bloody, and maybe it will sometimes be a polite sort of "forced retirement," but I suspect that will be more difficult to pull off than murder. You don't have the frailty that comes with age to force an accident, and it will be difficult to accumulate enough allies to unseat them; they've been around longer. Would Nelson be content being forever just a grandson (especially if more second-generation Rockefellers were being created)? Would John be content giving up the empire he created to watch his children or grandchildren mismanage it?
This is Whig history. I am not confident that we are morally superior to prior generations, or that future generations will be morally superior to us. Making this argument would require a non-trivial amount of intellectual labor.
If there were 180-year-olds alive today, chances are pretty strong that a good number of them would think that being anti-slavery is pretty progressive.
Actually, there is data on this. Why not crunch the numbers? Those born in 1941 are 70 today, but were 20 in 1961. Controlling for how getting older tends to make people more conservative, how do their opinions differ? Do you think people who don't update on social norms, signalling, and values are likely to die significantly more than others? If not, I think you will find a surprising shift in individual opinions over the decades, going by what I recall from the last time I was researching something related.
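As a sketch of what crunching those numbers could look like, the Python below separates within-cohort opinion change from cohort replacement. The table is invented placeholder data; the column names, cohorts, and scores are assumptions, not any real survey:

```python
import pandas as pd

# Hypothetical long-format survey: one row per cohort per wave.
# Cohorts, years, and "tolerance_score" values are invented placeholders.
data = pd.DataFrame({
    "birth_year":      [1941, 1941, 1941, 1961, 1961, 1961],
    "survey_year":     [1961, 1986, 2011, 1981, 1996, 2011],
    "tolerance_score": [3.1,  4.0,  4.6,  4.2,  4.8,  5.3],
})
data["age"] = data["survey_year"] - data["birth_year"]

# Within-cohort change: if individuals never updated their views,
# each birth cohort's mean score would stay flat across survey waves.
cohort_trend = (
    data.groupby(["birth_year", "survey_year"])["tolerance_score"]
        .mean()
        .unstack("survey_year")
)
print(cohort_trend)

# Same-age comparison (both cohorts at age 20) separates cohort
# replacement from the aging effect mentioned above.
print(data[data["age"] == 20]
      .groupby("birth_year")["tolerance_score"].mean())
```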
If I accept that "social progress" is going to slow down, but not stop, by some factor, this does not seem a grave tragedy compared to the dis-utility of billions of deaths. Isn't the whole reason we care about the speed of "social progress" that we dislike the dis-utility incurred by those who wouldn't be incurring it in a different (better?) system?
A sufficiently long life in the improved system should outweigh any difference in the amount of time spent in suboptimal conditions. To pick one of your charged examples and put a face to it, wouldn't you say Alan Turing might have decided to stick around if he knew he would still be young and healthy in 2010 and society would be accepting of homosexuality?
Turn things around: in a world where people lived forever, we would not solve the problem of social stagnation by killing people.
EDIT: Hm, I guess this is extremely similar to the provided counter-argument.
Yet it seems like most people are not applying the same thought processes to life-extending technology.
Many people have applied the same thought process. The transhumanists have just finished already.
The main reason the decision is easier for life extension than for AI is that the goodness of AI depends on exactly what its goal system does and how people value what the AI does, while living longer is valuable simply because people value it. AI is also a fairly technical subject with lots of opportunities for mistakes like anthropomorphism, while living longer is just living longer.
Sure there are some extra considerations. But imagine going to a society where people lived about 800 years and saying "hey, I have a great idea, why don't we kill everyone off on their 100th birthday! Think of all these great effects it would have!" Those great effects are simply so much smaller than the value of life that the 800-year society might even lock you up someplace with soft walls.
Starting with a society where people already live to 800 years means starting with a society where those risks have already been mitigated.
Not necessarily at all. Imagine a society where the kinds of change that require people dying happen only 1/8 as fast as they did for us. Imagine they were facing a much worse risk of overpopulation, because women could choose to remain fertile for more of their lives. Imagine that some people who wanted to die didn't.
People would STILL refuse to start shooting centenarians. Adult education or drugs that enhance mental flexibility would be better than being shot. Vasectomies would be better than being shot. Allowing voluntary death is better than shooting everyone. Seriously, what kind of person would look at impending overpopulation and say, "Don't worry about contraceptives; let's just kill 7/8 of the humans on Earth"?
Heck, we may be facing impending overpopulation right now, depending on what happens with the environment! Should we kill everybody partway through their reproductive years, to avoid it? Of course not! This sort of failure of imagination is a pretty recognizable part of how humans defend privileged hypotheses.
However, scientists are working on these technologies right now, discovering genes whose protein products can be blocked to greatly extend the lifespans of worms, mice, and flies. Should a breakthrough discovery be made, who knows what will happen? Once it's developed there's no going back. If the technology exists, people will stop at nothing to use it. You won't be able to control it.
Just like AI, life-extending technologies are not inherently "bad". But supporting the development of life-extending technologies without already answering the above questions is like supporting the development of AI without knowing how to make it friendly. Once it's out of the box, it's too late.
I was thinking about your post, and these parts don't sound convincing enough to me. You could make a police-state society that stops every person, checks their birth date on a government-mandated ID, and just arrests/shoots anyone over 125 (or whatever the age is). Police states are not a GOOD thing by any means, and I am not recommending one. But the idea that "You won't be able to control it" just seems like a very odd thing to announce for any kind of biological life-extension technology.
As utterly basic as this response is, it must be made:
as much as life sucks now, I cannot expect it to suck for the entirety of the next thousand years. Therefore, I will attempt to live those thousand years. If life still sucks, then I cannot expect that it will definitely suck for the next ten thousand years. Therefore, I will attempt to live those ten thousand years.
In other words: I'd like to be around when life stops sucking, thank you very much.
Feeding tubes are life extension technology and we force those on people all the time. It ends up being really hard to enforce battery causes of action against forced medical care when you'd die without the intervention.
I'm going to bring up the classic Aubrey de Grey response, which actually works for all of these issues, but I think this one in particular. Yes, there will be problems if we live a much longer time. Yes, they will be very big problems. No, we don't even know what those problems will even be. But those problems will pale in comparison to the needless deaths of hundreds of thousands of people every day.
Yup, there's ginormous status quo bias going on here. If we lived in a world stretched to the limit for resources, where policy-making is impossible because everyone is a bigoted moral fossil and the few sane leaders left have no clue what they're doing because it's all so new, and you proposed "Hey, I know! Let's kill everyone over 80!"... everyone would just stare and ask "Have you been reading Pebble in the Sky again?"
Personal Note: I would like to thank Normal Anomaly for beta-ing this for me and providing counter-arguments. I am asking them to comment below, so that everyone can give them karma for volunteering and helping me out. Even if you dislike the article, I think it's awesome that they were willing to take time out of their day to help someone they've never met.
Imagine that you live in a world where everyone says "AI is a good idea. We need to pursue it."
Sounds great!
But what if no one really thought that there was any reason to make sure the AI was friendly? That would be bad, right? You would probably think: "Hey, AI is a great goal and all, but before we start pursuing it and actually developing the technology, we need to make sure that it's not going to blow up in our faces!"
That seems to me to be a rational response.
Yet it seems like most people are not applying the same thought processes to life-extending technology. This website in particular has a habit of using some variant of this argument: "Death is bad. Not dying is good. Therefore life-extending technologies are also good." However, this is missing the same level of contemplation that has been given to AI. Like AI, there are considerations that must be made to ensure this technology is "friendly".
Most transhumanists have heard many of these issues before, normally sandwiched inside of a "Death is Bad" conversation. However these important considerations are often hand-waved away, as the conversation tends to stick to the low-hanging fruit. Here, I present them all in one place, so we can tackle them together, and perhaps come up with some solutions:
Although currently birthrates are falling, all birthrate information we have is for women being fertile for approximately 25 years. This has not changed much throughout history, so we cannot necessarily extrapolate the current birthrate to what it would be if women were fertile for 50 years instead.
In other words, not only will there be a population explosion due to people living longer, but I'd be willing to bet that if life extension were available today, birth rates would also go up. Right now, people who like to have kids only have enough money and fertile years to raise on average 2-3 kids. If you doubled the time they had to reproduce, you would likely double the number of children that child-rearing families have.
For example, in modern society, by the time a woman's children are out of the house and done with college, the woman is no longer young and/or fertile. Say, for example, you had a child when you were 25. By the time your children were 20 you would be 45, and therefore not at a comfortable age to have children. However, if 45 becomes a young, fertile age for women, families might well decide to have another round of children.
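To see how much rides on that guess, here is a deliberately crude toy model in Python. All of its numbers (cohort sizes, fertility, generations alive at once) are illustrative assumptions, not demographic estimates; it only shows how a doubled fertile window compounds across generations:

```python
# Toy generational model: population under status-quo fertility vs. a
# doubled fertile window. Every number is an illustrative assumption.

def project(start_cohort, tfr, steps, generations_alive):
    """Track cohort sizes, newest first; each step the newest cohort
    has tfr/2 children per person, and the oldest cohorts die off."""
    cohorts = [start_cohort]
    totals = []
    for _ in range(steps):
        births = cohorts[0] * tfr / 2  # two parents per child
        cohorts.insert(0, births)
        cohorts = cohorts[:generations_alive]
        totals.append(round(sum(cohorts)))
    return totals

# Status quo: ~2.5 children per family, ~3 generations alive at once.
print(project(1000, tfr=2.5, steps=8, generations_alive=3))
# Life extension: doubled window -> ~5 children, ~6 generations alive.
print(project(1000, tfr=5.0, steps=8, generations_alive=6))
```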
It's one thing to say, "Well, we will develop technology to increase food yields and decrease fossil fuel consumption," but are you positive we will have those technologies ready to go in time to save us?
We don't so much change our minds as grow new people while the old ones die.
It doesn't make sense to extend life until we have made our lives worth extending.
I have a friend who is a professional magician and "psychic", and about a month ago I convinced him to read HPMoR. After he cursed me for ruining his sleep schedule for two days, we ended up having a discussion about some of the philosophies in there that we agreed and disagreed with. I was brand-new to LW. He had no prior knowledge of "rationality", but, like most of his profession, was very analytically minded. I would like to share something he wrote:
I call this "The CEV of Immortality", although at the time, neither of us had heard of the concept of CEV in the first place. The basic idea being that we are not currently prepared enough to even be experimenting with life-extending technologies. We don't know where it will lead and how we will cope.
However, scientists are working on these technologies right now, discovering genes whose protein products can be blocked to greatly extend the lifespans of worms, mice, and flies. Should a breakthrough discovery be made, who knows what will happen? Once it's developed there's no going back. If the technology exists, people will stop at nothing to use it. You won't be able to control it.
Just like AI, life-extending technologies are not inherently "bad". But supporting the development of life-extending technologies without already answering the above questions is like supporting the development of AI without knowing how to make it friendly. Once it's out of the box, it's too late.
Counter-arguments
(Provided by Normal Anomaly)
Overpopulation Counter-argument: Birth rates are currently going down, and have fallen below replacement in much of the developed world (including the US). According to an article in The Economist last year, population will peak at about 10-11 billion around 2050. This UN infographic appears to predict that fewer people will be born in 2020-2050 than were born in 1980-2010. I am skeptical that the birth rate will increase with life extension. Space colonization is another way of coping with more people (again on a longer timescale than 40 years). Finally, life extension will probably become available slowly, at first only a few extra years and only for the wealthy. This last also applies to "unknown implications."
Social Stagnation Counter-argument: This leads to a slippery slope argument for killing elderly people; it's very unlikely that our current lifespans are at exactly the right tradeoff between social progress and life. Banning elderly people from voting or holding office would be more humane for the same results.

"Life sucks" Counter-argument: This is only an argument for working on making life worth extending, or possibly an argument for life extension not having the best marginal return in world-improvement. Also, nobody who doesn't want to live longer would have to, so life extension technology wouldn't result in immortal depressed people.
These counter-arguments are very good points, but I do not think they are enough to guarantee a 100% "Friendly" transhumanism. I would love to see some discussion of them.
Like last time I posted, I am making some "root" comments. They are: General comments, Over-population, Social stagnation, Life sucks, Unknown consequences. Please put your comment under the root it belongs to, in order to help keep the threads organized. Thank you!