In a new post, Nostalgebraist argues that "AI doomerism has its roots in anti-deathist transhumanism", representing a break from the normal human expectation of mortality and generational change. 

They argue that traditionally, each generation has accepted that they will die but that the human race as a whole will continue evolving in ways they cannot fully imagine or control. 

Nostalgebraist argues that the "anti-deathist" view, however, anticipates a future where "we are all gonna die" is no longer true -- a future where the current generation doesn't have to die or cede control of the future to their descendants. 

Nostalgebraist sees this desire to "strangle posterity" and "freeze time in place" by making one's own generation immortal as contrary to human values, which have always involved an ongoing process of change and progress from generation to generation.

This argument reminds me of Elon Musk's common refrain on the topic: "The problem is when people get old, they don't change their minds, they just die. So, if you want to have progress in society, you got to make sure that, you know, people need to die, because they get old, they don't change their mind." Musk's argument is certainly different, and I don't want to equate the two. I bring it up only because this is a common type of argument; if it weren't, I wouldn't bother responding to Nostalgebraist at all.

In this post, I'm going to dig into Nostalgebraist's anti-anti-deathism argument a little bit more. I believe it is simply empirically mistaken. Key inaccuracies include: 

1: The idea that people in past "generations" universally expected to die is wrong. Belief in life after death, or even in physical immortality, has been common across many cultures and time periods.

Quantitatively, large percentages of the world's population today believe in life after death.

In many regions, this belief was also much more common in the past, when religiosity was higher. Ancient Egypt, historical Christendom, etc. 

2: The notion that future humans would be so radically different from us that replacing humans with any form of AIs would be equivalent to ordinary generational change is ridiculous.

This is just not close to my experience when I read historical texts. Many authors seem to have extremely relatable views and perspectives. 

To take the topical example of anti-deathism among secular authors, read, for example, Francis Bacon, Benjamin Franklin, or John Hunter.

I am very skeptical that everyone from the past would feel so inalienably out of place in our society today, once they had time (and they would have plenty of time) to get acquainted with new norms and technologies. We still have basically the same DNA, gametes, and in utero environments. 

3: It is not the case that death is required for cultural evolution. People change their minds all the time. Cultural evolution happens all the time within people's lifespans. Cf: views on gay marriage, the civil rights movement, environmentalism, climate change mitigation, etc. 

This is especially the case because in the future we will likely develop treatments for the decline in neuroplasticity that can (but does not always) occur in a subset of older people.

Adjusting for (a) the statistical decline of neuroplasticity in aging and (b) contingent aspects of the structure of our societies (which are very much up for change, e.g. the traditional education/career timeline), one might even call death and cultural evolution "orthogonal". 

4: No, our children are not AIs. Our children are human beings. 

> Every generation dies, and bequeaths the world to posterity. To its children, biological or otherwise. To its students, its protégés. ...

> In which one will never have to make peace with the thought that the future belongs to one's children, and their children, and so on. That at some point, one will have to give up all control over the future of "the process."

This is not something that "every generation" has had to deal with. Equating our descendants with AIs fails to recognize the fundamental difference between continuing the human lineage and replacing humans altogether. Previous "generations" would almost certainly reject and fight against the idea of allowing all humans -- including all their children -- to die and be replaced by AIs. 

Summary: The anti-involuntary death position is not somehow inherently at odds with human values or allowing for cultural evolution. Being against involuntary death and being open to change, even transformative change beyond our control, seem to be quite compatible positions. I am begging people to please stop making this argument without providing empirical evidence. 

Tenoke:

>"The problem is when people get old, they don't change their minds, they just die. So, if you want to have progress in society, you got to make sure that, you know, people need to die, because they get old, they don't change their mind." 

That's valid today, but I am willing to bet a big reason why old people change their minds less is biological: less neuroplasticity, accumulated damage, mental fatigue, etc. If we are fixing aging and we fix those as well, it should be less of an issue.

Additionally, if we are in some post-death utopia, I have to assume we have useful, benevolent AI solving our problems, and that ideally it doesn't matter all that much who held a lot of wealth or power before. 

Computational argument, inspired by Algorithms to Live By: The more time you have, the more you should lean towards exploration in the explore-exploit tradeoff. As your remaining lifespan decreases, you should conversely lean towards the exploit side. This includes consuming less new information and changing your mind less often, since there's less value in doing so when you have less time to act on the new information.

Conversely, if we could magically extend the healthy lifespans of people, by this same argument that should result in more exploration, and in people being more willing to change their mind.
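The finite-horizon point above can be illustrated with a toy model (my own sketch, not taken from the comment or from Algorithms to Live By): suppose a known option pays 0.5 per step, and an untried option's payoff is uniform on [0, 1]. A single exploratory pull reveals the unknown option's value, after which you play whichever option is better. The longer the remaining horizon, the more that one pull is worth, because any discovery can be exploited for longer.

```python
import random

def value_of_exploring(horizon, known=0.5, trials=100_000):
    """Monte Carlo estimate: spend one step trying the unknown option
    (payoff ~ Uniform[0, 1]), then play the better option for the
    remaining horizon - 1 steps."""
    total = 0.0
    for _ in range(trials):
        x = random.random()  # unknown option's true value, revealed by one pull
        total += x + (horizon - 1) * max(x, known)
    return total / trials

def value_of_exploiting(horizon, known=0.5):
    """Never explore: play the known option for the whole horizon."""
    return horizon * known

for T in (2, 10, 100):
    gain = value_of_exploring(T) - value_of_exploiting(T)
    print(f"horizon {T:>3}: exploration gain \u2248 {gain:.2f}")
```

Analytically, the gain from exploring here is (T − 1) × 0.125, so it grows linearly with the remaining horizon T: with almost no time left, exploration barely pays, and with a long horizon it dominates.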

It is not only a biological argument. Well, maybe for humans it is in part, but you can also compare what aging does to corporations. Companies also age, even though their substrate doesn't. They get slower and more conservative. They have optimized harder in the past and have more to lose from change. I bet these effects also happen to people, and not for biological reasons (though biology may make them worse).

In case of institutions, there's a bias towards conservatism because any institution that's too willing to change is one that might well cease to exist for any number of reasons. So if you encounter a long-lived institution, it's probably one that has numerous policies in place to perpetuate itself.

This doesn't really seem analogous to how human aging affects willingness and ability to change.

It's a potentially useful data point but probably only slightly comparable. Big, older, well-established companies face stronger and different pressures than small ones and do have more to lose. For humans that's much less the case after a point.