As of 2022, humans have a life expectancy of ~80 years and a hard limit of ~120. Most rationalists I know agree that dying is a bad thing, and that at minimum we should have the option to live considerably longer and free of the diseases of old age, if not indefinitely. It seems to me that this is exactly the kind of problem where rationality skills like "taking things seriously", "seeing with fresh eyes", and awareness of time discounting and status quo bias should help one notice that something is very, very wrong and take action. Yet - with the exception of cryonics[1] and a few occasional posts on LW - this topic is largely ignored in the rationality community, with relatively few people doing the available interventions on the personal level, and almost nobody actively working on solving the problem for everyone.
I am genuinely confused: why is this happening? How is it possible that so many people who are equipped with the epistemological tools to understand that they and everyone they love are going to die, who understand it's totally horrible, and who understand this problem is solvable in principle, keep doing nothing about it?
There are a number of potential answers to this question I can think of, but none of them is satisfying, and I'm not posting them here to avoid priming.
[ETA: to be clear, I have spent a reasonable amount of time and effort checking whether the premise of the question actually holds - that rationalists are insufficiently concerned about mortality - and my answer is an unequivocal "yes". If you have evidence to the contrary, please feel free to post it as an answer.]
1. ^ It's an interesting question exactly how likely cryonics is to work, and I'm planning to publish my analysis of this at some point. But unless you assign a ridiculously optimistic probability to it working, the problem largely remains: even an 80% probability of success would leave a 20% chance of dying, which is worse odds than Russian roulette (roughly 1 in 6). Besides, my impression is that only a minority of rationalists are signed up anyway.
> I'm pretty concerned, I'm trying to prevent the AI catastrophe that will likely kill me.
That was one of my top guesses, and I'm definitely not implying that longevity is a higher or equal priority to AI alignment - it's not. I'm just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn't come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say, above 10%), but since AGI is the greatest risk, all resources should be focused on it? If the latter, do you believe it should be the strategy of the community as a whole, or just of those working on AGI alignment directly?
[Exercising 30 minutes a few times a week is great, and I'm glad your housemate pushes you to do it! But, well, it's like not going to big concerts in Feb 2020 - it's basic sanity most regular people would also know to follow. Hell, it's literally the FDA advice and has been for decades.]