AGI is likely closer than any anti-aging intervention that adds decades and is discovered without AGI. I used to believe that AGI results either in death or in an approximately immediate, perfect cure for aging and other forms of mortality (depending on how AI alignment and judgement of morality work out), and that this is a reason to mostly ignore anti-aging. Recently I began to see deliberately less powerful/general AGI as a plausible way of controlling AI risk, one that isn't easy to safely make more generally useful. If that works out, an immediate cure for aging doesn't follow, even after AI risk is no longer imminent. This makes current anti-aging research less pointless. (In one partial failure mode, with an anti-goodharting non-corrigible AI, straightforward AI development might even become permanently impossible, thwarted by the AGI that controls AI risk but can't be disabled. In that case any anti-aging must be developed "manually".)
I can only speak for my personal experience, but I think there's a significant minority of rationalists who care about preventing their own personal deaths a lot. I know because I've met them during my own process of figuring out what to do about death.
Personally, I video record most of my life, plan to get cryopreserved (preferably via the best methods available), am pursuing evidence-based strategies to slow aging, and try to avoid excess exposure to risk of injury. There's not a lot more I can personally do to stop my own death besides these things, so oftentimes I just stop talking about it.
My impression is that it's more than most people do! [Although, full disclosure: I myself am signed up with CI and following what I believe is the right pattern of diet and exercise. I'll probably start some of the highest benefit/risk-ratio compounds (read: rapamycin and/or NAD+ stuff) in a year or two, when I'm past 30.]
But also, how do you feel about donating to the relevant orgs (e.g. SENS), working in a related or adjacent area, and advocating for this cause?
I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.
Also, my rationalist housemate Daniel Filan often reminds me of his basic belief that doing 30 mins of exercise a few times a week has an expected return of something like 10 hours of life or whatever. (I forget the details.) This reminder definitely happens a bunch.
Also, right now I'm pretty excited about figuring out the micromorts I spend on different things, and getting used to calculating with them (including diet and exercise, as well as things in the reference class of walking through shady places at night or driving without a seatbelt). Now that I've gotten lots of practice with microcovid estimates, I can do this sort of thing much more easily.
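For a sense of what the arithmetic looks like, here's a minimal sketch in Python; the remaining-lifespan figure and the per-activity micromort values are made-up placeholders, not sourced estimates:

```python
# Convert micromorts (one-in-a-million chances of death) into expected
# minutes of life lost. Every constant here is an illustrative placeholder.
MINUTES_PER_YEAR = 365.25 * 24 * 60
REMAINING_LIFE_YEARS = 50  # assumed remaining life expectancy

def expected_minutes_lost(micromorts: float) -> float:
    """Death probability (in millionths) times remaining lifespan, in minutes."""
    return micromorts * 1e-6 * REMAINING_LIFE_YEARS * MINUTES_PER_YEAR

# Hypothetical per-activity micromort costs (not sourced estimates).
activities = {
    "one skydiving jump": 8.0,
    "driving 1,000 km": 3.0,
    "walking through a shady area at night": 0.5,
}

for name, mm in activities.items():
    print(f"{name}: ~{expected_minutes_lost(mm):.0f} min of expected life lost")
```

The same frame covers the exercise point above: a habit that costs 30 minutes but buys hours of expected life is, by this accounting, massively net-positive.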
>I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.
That was one of my top guesses, and I'm definitely not implying that longevity is a higher priority than, or equal priority to, AI alignment - it's not. I'm just saying that after AI alignment and maybe rationality itself, not dying [even if AGI doesn't come] seems like a pretty darn big deal to me. Is your position that AGI in our lifetime is so inevitable that other possibilities are irrelevant? Or that other possibilities are non-trivial (say, above 10%) but since AGI i...
I'm pretty concerned; I'm trying to prevent the AI catastrophe that will likely kill me.
On a personal level, it seems quite unlikely that any individual can meaningfully alter the risk of an existential catastrophe enough for their own efforts to be justified selfishly. Put another way, I think it makes sense to focus on preventing existential risks, but not as a means of preventing one's own death.
One optimistic explanation is that rationalists care more about AI risk because it's an altruistic pursuit. That's one possible way of answering OP's question.
I care about longevity; I donate to longevity research institutions. I also try to live healthily.
That said, I'm also in my early 30s. I just took an actuarial table and my rough probability distribution of when I expect transformative AI to be possible and calculated my probability of dying vs. my probability of seeing transformative AI, and ended up with 23% and 77%. So, like, even if I'm totally selfish, on my beliefs it seems three times more important to do something about the Singularity than all-cause mortality.
This is less true the older someone is, of course.
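For concreteness, here is a minimal sketch of how such a "race" between death and transformative AI can be computed; the toy mortality curve and the flat 3%/year AI-arrival rate below are stand-ins I made up, not the actual actuarial table or timeline distribution referred to above:

```python
# Year-by-year race between dying and seeing transformative AI (TAI).
# Both hazard functions are crude stand-ins, not real data.
AGE_NOW = 32

def p_die_in_year(age: int) -> float:
    # Gompertz-style toy mortality curve, not a real actuarial table.
    return min(1.0, 0.0005 * 1.09 ** (age - 30))

def p_tai_in_year(year_offset: int) -> float:
    # Placeholder: a flat 3% chance per year that TAI arrives.
    return 0.03

p_neither_yet = 1.0  # probability that neither event has happened yet
p_die_first, p_tai_first = 0.0, 0.0
for t in range(100):
    p_ai = p_tai_in_year(t)
    p_die = p_die_in_year(AGE_NOW + t)
    p_tai_first += p_neither_yet * p_ai
    p_die_first += p_neither_yet * (1 - p_ai) * p_die  # TAI "resolves" first
    p_neither_yet *= (1 - p_ai) * (1 - p_die)

print(f"P(see TAI first) ~ {p_tai_first:.0%}")
print(f"P(die first)     ~ {p_die_first:.0%}")
```

Plugging in your own actuarial table and timeline distribution in place of the two toy functions reproduces the kind of 23%/77% split described above.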
Maybe I am misreading this, but when they say "using the mortality rates for 2019", I think they are assuming that there won't be increases in life expectancy. That is, we're currently observing people born in the 1930s living ~80 years, and so we assume that people born in, e.g., the 1980s will also live ~80 years - in effect treating period life expectancy (current mortality rates frozen in time) as if it were cohort life expectancy. That seems like a very bad assumption to me.
Speculation here, but if we grant your premise, then the answer to your question might be something like:
Rationalists largely come from engineering backgrounds. Rightly or wrongly, AI is mostly framed in an engineering context and mortality is mostly framed in the context of biologists and medical doctors.
That being said, I think it's really important to suss out whether the premise of your question is correct. If it is, and if the signals we're getting about AI risk organizations having almost too much cash are accurate, we should be directing some portion of our funding to organizations like SENS instead of AI risk.
There are plenty of people whose AGI timelines suggest that either AGI will kill them before they would die naturally, or AGI will be powerful enough by that point to prevent their natural death.
Even without direct access to AGI, new machine learning advances in protein folding and protein design might be more central to longevity than the research that's billed as longevity research.
That said, I do agree that anti-aging is an important topic. One problem is that the people who set out to fight it often seem to be searching for the key under the streetlight.
The SENS paradigm seems insular to me. I don't have a charitable explanation of why fascia getting tenser as people age isn't on their list of aging damage.
There are plenty of people whose AGI timelines suggest that either AGI will kill them before they would die naturally, or AGI will be powerful enough by that point to prevent their natural death.
True, but there are also plenty of people who think otherwise - other comments here being an example.
I'm not a biologist, but I'm reasonably sure that fascia getting tenser would be downstream of the hallmarks of aging, if that's what you're talking about. It's kinda like asking why "going to a boardgame party in San Francisco" isn't on th...
Attributing magical capabilities to AGI seems to be a common cognitive failure mode :( is there not some way we can encourage people to be more grounded in their expectations?
I think many MANY smart people realize something is very wrong. There's been a LOT written about it, including much of the early LessWrong content.
The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild. There is a crisis coming in my own death, but I don't see much to do about it.
>to notice something is very very wrong and take action.
>...
>understand this problem is solvable in principle
I do NOT think that the "and take action" part is trivial, nor that the problem is solvable in principle, certainly not with much likelihood of impacting current rationalists' lives.
In terms of "what can I do to increase the area under the curve of probability-weighted happiness and longevity", working on nearer-term issues has much higher expected value, IMO.
The way I see it, when we're talking about non-me humans, the vast majority of them will be replaced with people I probably like roughly the same amount, so my preference for longevity in general is mild.
Am I reading this incorrectly or are you saying that you don't care about your friends and loved ones dying?
There are at least two currently ongoing clinical trials with an explicit goal of slowing aging in humans (TAME and PEARL); that's just the most salient example. At some point I'll definitely make a post with a detailed answer to the question of ...
The anti-aging field is going great as far as I can see: billion-dollar investments are happening regularly, clinical trials are ongoing, and the field as a whole has started to attract the attention it deserves. I think rationalists are not especially worried because they (or rather, I) believe that the problem is already well on its way to being solved. If we don't all die from misaligned AI / nuclear war / a biological weapon in the next 20 years, I don't think we'll have to worry about aging too much.
I wish this were the case. However, those large-scale investments you speak of are mostly being put into things that address the symptoms of growing old, not the underlying causes. There are very, very few researchers working on permanently ending aging, or at least on full rejuvenation, and they are chronically underfunded.
Thanks for the answer, that wasn't one of my top guesses! Based on your experience, do you think it's widely held in the community?
And I totally see how it kinda makes sense from a distance, because it's what the most vocal figures of the anti-aging community often claim. The problem is that the same was true 20 years ago - see the Methuselah Foundation's "make 90 the new 50 by 2030" - and probably 20 years before that. And, to the best of my understanding, while substantial progress has been made, there haven't been any revolutions comparable with...
Mortality is a very old problem, and lots of smart people have spent lots of time thinking about it. Perhaps the best intervention anyone has come up with is harm reduction via acceptance. That's the approach I'm taking personally. Denial is popular, but isn't very rationalist and seems to lead to more overall suffering.
I'm not working on promoting this approach, because it's literally thousands of years old and because it's not a good personal fit. But I support and respect people who do.
Smallpox was also a very old problem, and lots of smart people had spent lots of time thinking about it - until they figured out a way to fix it. In theory, you could argue that no viable approaches exist today or in the foreseeable future, and so harm reduction is the best strategy (from a purely selfish standpoint; working on the problem would still help people in the future in this scenario). But I don't think that would be a very strong argument in practice, and in any case you are not making it.
If you're, say, 60+, then yes, anti-agin...
I'm not sure everyone thinks death is bad. I mean, it's been a "feature" of being human since before there were humans and it has worked quite well so far to have a process of death. Messing with a working system is always a dangerous proposition, so I, personally, wonder if it is wise to remove that feature. Therefore, I do nothing about it (maybe I should be more active in opposition? I don't know).
Dangerous proposition in what sense? Someone may die? Everyone may die? I have, um, not very good news for you...
I can think of 100 billion reasons death is bad. I struggle to come up with a single reason why it is good that my grandma was forced to die. Are you sure you are not subject to motivated reasoning here?
I'm skeptical of the premise of the question.
I do not think your stated basis for believing that rationalists are not concerned with mortality is sufficient to establish that it's true.
I'd be happy to be proven wrong, and existence is generally much easier to prove than non-existence. Can you point to any notable rationality-adjacent organizations focused on longevity research? Bloggers or curated sequences? When was the last rationalist event with a focus on life extension (not counting cryonics; that was last Sunday)? Any major figures in the community focused on this area?
To be clear, I don't mean "concerned about a war in Ukraine" level, I mean "concerned about AI alignment" level. Since these are the two most likely ways for present-day community members to die, with the exact proportion between them depending on one's age and AI timeline estimates, I would expect a roughly comparable level of attention, and that is very much not what I observe. Am I looking in the wrong places?
The now-defunct Longevity Research Institute and Daphnia Labs were founded and run by Sarah Constantin. Geroscience magazine was run by someone at a rationalist house. SENS is adjacent. At least one ACX grant went to support a longevity researcher. I also know of private projects that have never been announced publicly.
It is not AI-level attention, but it is much more than is given to Ukraine.
I agree, Ukraine was an exaggeration. I had checked the tags and grants before asking the question, and am well aware of SENS, but this is the first I've thought or heard of it being adjacent. Is it? I also didn't know of the defunct institutions, so I should raise my estimate somewhat.
I'm not arguing that you're wrong; I'm just saying that you seem to have assumed it was true without really setting out to prove it or lining up convincing evidence. It just struck me that you seemed to be asking "why" before answering "if".
I'm also not sure that the answers to your questions in this comment are as revealing as they might seem at first glance. For example, more of the low hanging fruit might be picked WRT mortality... not as much left to be revealed. Maybe mortality is mostly about making ourselves do the right thing and akrasia type stuff, which gets discussed plenty.
It might be that you're right, but if I were you I'd want to determine that first.
I have indeed spent a certain amount of time figuring out whether it's the case, and the answer I came to was "yep, definitely". I've edited the question to make it clearer. I didn't lay out the reasoning behind it, because I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples (as Elizabeth and, in a certain stretched sense, Ben Pace did).
>low hanging fruit might be picked WRT mortality
I'm doubtful, but I can certainly see a strong argument for this! However, my point is that, as with existential risks, it is a serious enough problem that it's worth focusing on even after the low hanging fruit has been picked.
>Maybe mortality is mostly about making ourselves do the right thing and akrasia type stuff
Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you're lucky.
I assumed anyone arguing in good faith would either accept the premise based on their own experience, or just point to the counterexamples
Well, I'm not arguing in bad faith. In fact, I'm almost not arguing at all! If your premise is correct, I think it's a very good question to ask!
To the extent I am arguing, it's with the assumption behind the premise. To me, it does not seem readily apparent that rationalists are less concerned with mortality than they are with AI risk - at least not so readily apparent that it can just be glossed over.
I'm doubtful, but I can certainly see a strong argument for this!
To be clear, here I'm not actually making the low-hanging fruit argument. I'm just pointing out one of the things that came to mind that make your premise not so readily apparent to me. Another thing I thought about is that hardly anyone outside of the rationalist community thinks, or has ever thought, about AI risk. Most people probably don't even acknowledge that AI risk is a thing. Mortality is thought about by everyone, forever. It's almost as if mortality risk concern is in a different reference class than AI risk concern.
I think if you were to summarize my objection to just glossing over the premise of your question, it's that the relative amounts of rationalist activity surrounding mortality and AI risk are, to me, not sufficiently indicative of relative concern to justify glossing over the basis for your question. If you are correct, I think this is very important - which is exactly why the argument needs to actually be made rather than glossed over.
I spend maybe 2 minutes per day ensuring my doors are locked and maybe an hour per day picking out clothes, getting dressed, washing my face, doing my hair, etc. I don't think that means I'm less concerned about the physical security of my home relative to my physical appearance!
Hmm, can you elaborate on what you mean here? Are you talking about applying [non-drug] interventions? But the best interventions known today will give you 1-2 decades if you're lucky.
Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.
Anyway, I also think it's likely that the questions I'd want answered are so adjacent to the question you want answered that a good answer to any of them will largely answer all of them.
Mortality is thought about by everyone, forever.
Technically probably yes, but the specific position of "it is something we can and should do something about right now" is unfortunately nearly as fringe as AI risk: a bunch of vocal advocates with a small following pushing for it, plus some experts in the broader field and some public figures maybe kinda tentatively flirting with it. So, to me these are two really very comparable positions: very unconventional, but also very obvious if you reason from first principles and some basic background knowledge. Maybe that's why I sound a bit frustrated or negative - it feels like the people who clearly should be able to reach this conclusion, for some reason, don't. And that's why I'm basically asking this question: to understand why they don't, or what I'm missing, or whatever is going on.
By the way, can you clarify what your take is on the premise of the question? I'm still not sure whether you think:
Yeah, I'm talking about exercise and "eating healthy" and all the stuff that everyone knows you should do but many don't because it's unpleasant and hard.
Ok, in that case the akrasia etc. debates are very relevant. But even so, not everybody knows. Maybe the facts themselves - that you should exercise and watch what you eat - are relatively uncontroversial (although I still remember the dark days when EY himself was advocating on Facebook that "calories in / calories out" is bullshit). But exactly what kinds of diet and exercise are optimal for longevity is a hugely controversial topic, and mainly not for the lack of data but for the lack of interpretation - i.e. something that we could well try to do on LessWrong. So it'd be cool to see more posts like this.
By the way, can you clarify what your take is on the premise of the question?
I lean towards "little attention, and it is not justified", but I'm really just feeling around in the dark here... and thus my bit of frustration at jumping right past the step of determining whether this is actually the case.
I can imagine plausible arguments for each of the options you give (and more) and I'm not entirely convinced by any of them.
Are you aware of SENS? There is massive overlap between them and the rationality community here in the Bay Area. They are, however, surprisingly underfunded and receive relatively little attention on sites like this compared with, say, AI alignment. So I see your point.
I'm well aware, but this comment section is the first time I've heard there's a non-trivial overlap! Are you saying many active rationalists are SENS supporters?
Eternal youth is a tempting goal, and I hate, hate, hate getting old and eventually dying, probably more than anything, but... there is almost nothing I can do about it personally, and in my estimation the chance of any meaningful progress in the next couple of decades (i.e. reaching anything close to escape velocity) is negligible. Cryonics is a hail-Mary option, and I am not sure it's worth spending a sizable chunk of my savings (or income) on. The evaluation of the situation might be similar for others. So what may look like "not being concerned" is in reality giving up on a hopeless, if tempting, cause.
I find this viewpoint at odds with the evidence. People who are really attacking this issue, like the SENS research foundation, seem to think that longevity escape velocity is achievable within our lifetimes.
Robert Freitas, who knows more than anyone else alive about the medical applications of nanotechnology, believes that our limitations are due to tooling, and that if we had atomically precise manufacturing then all diseases of the body (including aging) would be trivial to solve. He and his partner Ralph Merkle believe that APM could be achieved in 10 years' time with proper funding.
Ray Kurzweil, for all his faults, plots some pretty accurate graphs. Those graphs show us achieving the necessary process technology to manipulate matter at the sub-nanometer scale within 20 years, max.
Are you pushing 80 years old? That's the only reason I can imagine you'd think this beyond your lifetime. Both the SENS and nanotech approaches are constrained by lack of resources, including people working on the problem. This is an area where you could make a difference, if you put in a lot of effort.
I've briefly looked into SENS and it comes across as cultish and not very credible. Nanotech would be neat, but getting it working and usable as nanobots swarming the human body without extreme adverse effects seems achievable, yet on a timeline of half a century or so. Kurzweil has not had a great track record in forecasting. I think the best chance of extending the lifespan of someone alive today until the aging kinks are worked out is figuring out hibernation: slowing down metabolism 10-20 times and keeping the body in the fridge. But I don't see anyone working on that, though there is some discussion of it in the context of months-long interplanetary travel.
Kurzweil is completely inept at making predictions from his graphs. He is usually quite wrong, in a very naive way. For example, one of his core predictions of when we will achieve human-level AI was based on (IIRC) nothing more than when a computer with a number of transistors equal to the number of neurons in the human brain could be bought off the shelf for $1000. As if that line in the sand had anything at all to do with making AGI.
But his exponential chart about transistors/$ is simply raw data, and the extrapolation is a straightforward prediction that has held true. He has another chart on the topic of manipulatable feature sizes using various approaches, and that also shows convergence on nanometer-resolution in the 2035-2045 timeframe. I trust this in the same way that I trust his charts about Moore's law: it's not a law of nature, but I wouldn't bet against it either.
Cryonics is around 20 bucks a month if you get it through insurance, plus 120 to sign up.
With that out of the way, I think there is a substantial difference between "no LEV in 20 years" and "nothing can be done". For one thing, known interventions - diet, exercise, very likely some chemicals - can most likely increase your life expectancy by 10-30 years, depending on how right you get it, your age, health, and other factors. For another, even if working on the cause, donating to it, or advocating for it won't help you personally, it can still help many people you know and love, not to mention everyone else. Finally, the whole point of epistemic rationality (arguably) is to work correctly with probabilities. How certain are you that there will be no LEV in 20 years? If there's a 10% chance, isn't it worth giving it a try and increasing it a bit? If you're ~100% certain, where do you get this information?
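To make the probability point concrete, here's a toy expected-value calculation; every number in it is invented for illustration (your own credences and values go here):

```python
# Toy expected-value check on "is it worth trying?" - all numbers invented.
p_increase_from_effort = 0.001  # hypothetical bump to LEV odds from one person
years_gained_if_lev = 1000      # stand-in for "a very long life"; pick your own
lifestyle_years = 15            # midpoint of the 10-30 year estimate above

ev_effort = p_increase_from_effort * years_gained_if_lev
print(f"Expected life-years from a 0.1% nudge to LEV odds: {ev_effort:.1f}")
print(f"Expected life-years from known interventions:      {lifestyle_years}")
```

Even under pessimistic credences, the expected value of a small nudge is not obviously zero - which is the whole point of working with probabilities rather than certainties.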
This seems like a good time to shamelessly plug a post I wrote: How much should we value life?. I'd love to hear anything that people think or have to say about it.
As of 2022, humans have a life expectancy of ~80 years and a hard limit of ~120. Most rationalists I know agree that dying is a bad thing and that, at minimum, we should have the option to live considerably longer and free of the "diseases of old age", if not indefinitely. It seems to me that this is exactly the kind of problem where rationality skills like "taking things seriously" and "seeing with fresh eyes", and awareness of time discounting and status quo bias, should help one to notice something is very very wrong and take action. Yet - with the exception of cryonics[1] and a few occasional posts on LW - this topic is largely ignored in the rationality community, with relatively few people doing the available interventions on the personal level, and almost nobody actively working on solving the problem for everyone.
I am genuinely confused: why is this happening? How is it possible that so many people who are equipped with the epistemological tools to understand that they and everyone they love are going to die, who understand it's totally horrible, and who understand this problem is solvable in principle, can keep on doing nothing about it?
There are a number of potential answers to this question that I can think of, but none of them is satisfying, and I'm not posting them, to avoid priming.
[ETA: to be clear, I have spent a reasonable amount of time and effort making sure that the premise of the question - that rationalists are insufficiently concerned about mortality - is indeed the case, and my answer is an unequivocal "yes". In case you have evidence to the contrary, please feel free to post it as an answer.]
It's an interesting question exactly how likely cryonics is to work, and I'm planning to publish my analysis of this at some point. But unless you assign a ridiculously optimistic probability to it working, the problem largely remains. Even an 80% probability of success would mean a 20% chance of dying anyway - worse odds than a single round of Russian roulette (~17%)! Besides, my impression is that only a minority of rationalists are signed up anyway.