There is no evidence that anti-aging is psychologically what's driving the AI race, and humanity is not showing any inclination to prioritize anti-aging anyway.
If you want a reason to think that AI could end up human-aligned anyway, without a ban or a pause or even a consensus that caution is appropriate, I suggest the perspective of getting early AI to help us "do our alignment homework".
If solving alignment requires several genius-level insights, then somewhere on the path between no AI and superintelligent AI, there is a moment when computers can perform genius-level cognition at AI speeds. That moment would represent a chance of solving alignment with the assistance of early AI.
>There is no evidence that anti-aging is psychologically what's driving the AI race
Sure. As I've said, I'm just speculating. I think it's extremely hard to get evidence for this, since people don't talk openly about it. Those of us who admit publicly that we want to live forever (or indefinitely long) are exceptional cases. Even most people working in longevity will tell you that they don't care about increasing our lifespan, that they just want us to be healthier. Sam Altman will tell the media that he has no interest in living forever, that he just wants to "add 10 years of healthspan" (because that's the moderate thing to say)... and then sign up with Nectome for mind-uploading. I think actions speak louder than words.
The immortality/radical life extension/living forever/not dying topic is extremely taboo, and most people will keep dancing around it. Heck, a lot of them will tell you that they would never want to live forever while simultaneously believing in religions that promise them eternal life. There are no limits to the cognitive dissonance that people are willing to embrace regarding this topic.
So no, I don't have evidence to back up my claim.
I don't want to die.
You don't want to die.
The people who are potentially going to get us killed by pushing AI capability research don't want to die.
Our most basic goals have been aligned all this time, so it's really tragic that we are in the situation we're in right now. How did this happen?
First of all, it is a fact that people have differing opinions on how likely it is that AI will kill us all. Some think the probability is 99%+, while others put it at something like 10%. But even the most optimistic capability researchers acknowledge that it's not 0.
If it's not 0, why are they taking the risk? Sure, there are huge economic incentives, but I think there's a deeper root cause.
Let me quote Tim Urban from Wait but Why in The AI Revolution: Our Immortality or Extinction:
>If ASI really does happen this century, and if the outcome of that is really as extreme—and permanent—as most experts think it will be, we have an enormous responsibility on our shoulders. The next million+ years of human lives are all quietly looking at us, hoping as hard as they can hope that we don’t mess this up. We have a chance to be the humans that gave all future humans the gift of life, and maybe even the gift of painless, everlasting life. Or we’ll be the people responsible for blowing it—for letting this incredibly special species, with its music and its art, its curiosity and its laughter, its endless discoveries and inventions, come to a sad and unceremonious end.
>
>When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI. Nothing in existence is as important as getting this right—no matter how long we need to spend in order to do so.
>
>But thennnnnn
>
>I think about not dying.
>
>Not. Dying.
>
>And then I might consider that humanity’s music and art is good, but it’s not that good, and a lot of it is actually just bad. And a lot of people’s laughter is annoying, and those millions of future people aren’t actually hoping for anything because they don’t exist. And maybe we don’t need to be over-the-top cautious, since who really wants to do that?
>
>Cause what a massive bummer if humans figure out how to cure death right after I die.
I think this is it. People are terrified of death, and they want to get AGI and ASI as soon as possible because they have a "nothing to lose" mindset. Aging is going to kill us all, so we better burn the ships and try to build a god.
Most of the important people in AI are aware of longevity research. A handful of them have invested in it. But there's not much hype about it these days. All eyes are on AI development. AGI is seen as the ultimate radical life extension provider.
The idea that crosses my mind at this very moment is the following: what if we gave them what they want? What if the world saw full age reversal in humans before the advent of AGI?
This is mere speculation on my part, but maybe this would induce in a lot of people the emotional shift that we need them to have, because it would change the stakes. The "nothing to lose" mindset would be gone. If they fuck up, they will be sacrificing an indefinitely long lifespan. So they would have all the incentives in the world to be careful.
How hard is it to solve the alignment problem on the first try? How hard is it to cure aging? I don't have an exact answer to either question, but I'm assuming that curing aging is easier. So focusing on longevity research could also be a good way of dying with dignity.