This chain of logic rests on the assumption that these technologies are possible, which I find highly dubious. If an (aligned) superintelligence is built and we ask it for life extension, the most probable answer would be that biological immortality (and anything else requiring nanorobots) is just plain impossible, and that brain uploading wouldn't help because your copy is not you.
Who said biological immortality (do you mean a complete cure for ageing?) requires nanobots?
We know individual cell lines can go on indefinitely; the challenge is to have an intelligent multicellular organism that can too.
I don't think the assumption is highly dubious. You don't need to believe in the possibility of mind uploading or biological immortality to expect radically transformative changes in the human condition due to advanced AI. The "Neuroscience and Mind" section of Dario Amodei's essay (he has a formal background in biophysics) speculates concretely about what could happen in these areas with the help of advanced AI (even setting aside that mind uploading is probably "possible in principle").
Even if some goals are unattainable, AGI could still (as Dario speculates) drive radical advances in areas like health, longevity, and cognitive enhancement. The point isn't to guarantee specific outcomes, but to recognise that AGI will likely push the boundaries of what we currently believe is possible and transform the world unrecognisably. And we should be reflecting on and mentally preparing for that.
Amodei’s general argument is this:
"my basic prediction is that AI-enabled biology and medicine will allow us to compress the progress that human biologists would have achieved over the next 50-100 years into 5-10 years."
This may be correct, but his estimate of what would be achieved in 100 years without AI is likely wildly overoptimistic. In particular, his argument for a doubling of lifespan is just an extrapolation from past increases in life expectancy, which is ridiculous because progress in extending maximum human lifespan has so far been exactly zero.
I agree that there are significant uncertainties about the specific consequences of AI accelerating bio/medicine R&D, but even without buying into Amodei's specific speculations on life extension, you would still get wildly transformative breakthroughs and unforeseen consequences. That said, it does seem sensible to be wary of simply extrapolating past increases in life expectancy.
Time will tell!
Superintelligence Is On The Horizon
It’s widely accepted that powerful general AI, and soon after, superintelligence, may eventually be created.[1] There’s no fundamental law keeping humanity at the top of the intelligence hierarchy. While there are physical limits to intelligence, we can only speculate about where they lie. It’s reasonable to assume that even if we hit an S-curve in progress, that plateau will be far beyond anything even 15 John von Neumann clones could imagine.
Gwern was one of the first to recognise the "scaling hypothesis"; others followed later. While debate continues over whether scaling alone will lead to AI systems capable of self-improvement, it seems likely that scaling, combined with algorithmic progress and hardware advancements, will continue to drive progress for the foreseeable future. Dwarkesh Patel estimates a "70% chance scaling + algorithmic progress + hardware advances will get us to AGI by 2040". These odds are too high to ignore. Even if there are delays, superintelligence is still coming.
Some argue it's likely to be built by the end of this decade; others think it might take longer. But almost no one doubts that AGI will emerge this century, barring a global catastrophe. Even skeptics like Yann LeCun predict AGI could be reached in “years, if not a decade.” As Stuart Russell noted, estimates have shifted from “30-50 years” to “3-5 years.”
Leopold Aschenbrenner calls this shift "AGI realism." In this post, we focus on one key implication of this view—leaving aside geopolitical concerns:
Of course, this could be wrong. AGI might not arrive until later this century, though this seems increasingly unlikely. Nevertheless, it’s a future we must still consider.
Even in a scenario where AGI arrives late in the century, many of us alive today will witness it. I was born in the early 2000s, and it’s more probable than not that AGI will be developed within my lifetime. While much attention is paid to the technical, geopolitical, and regulatory consequences of short timelines, the personal implications are less often discussed.
All Possible Views About Our Lifetimes Are Wild
This title riffs on Holden Karnofsky's post "All Possible Views About Humanity's Future Are Wild." In essence, either we build superintelligence—ushering in a transformative era—or we don't. We may see utopia, catastrophe, or something in between. Perhaps geopolitical conflicts, like a war over Taiwan, will disrupt chip manufacturing, or an unforeseen limitation could prevent us from creating superhuman intelligence. Whatever the case, each scenario is extraordinary. Arguably, no view of our future is "tame." There is no non-wild view.
Personally, I want to be there to witness whatever happens, even if it’s the cause of my demise. It seems only natural to want to see the most pivotal transition since the emergence of intelligent life on Earth. Will we succumb to Moloch? Or will we get our act together? Are we heading toward utopia, catastrophe, or something in between?
The changes described in Dario Amodei's "Machines of Loving Grace" paint a picture of what a predominantly positive future with highly powerful AI systems could look like. As he says in the footnotes, his view may even be perceived as "pretty tame":
To be clear, what Dario describes as being perceived as "tame" already includes:
AI researcher Marius Hobbhahn speculates that the leap from 2020 to 2050 could be as jarring as transporting someone from the Middle Ages to modern-day Times Square, exposing them to smartphones, the internet, and modern medicine.
Or, as Leopold Aschenbrenner points out, we might see massive geopolitical turbulence.
Or, in Eliezer Yudkowsky’s view, we face near-certain doom.
Regardless of which scenario you find most plausible, one thing is abundantly clear: all possible views about our lifetimes are wild.
What Does This Mean On A Personal Level?
It’s dizzying to think that you might be alive when the 24th century comes crashing down on the 21st. If your probability of doom is high, you might be tempted to maximise risk—if you enjoy taking risks—since there would seem to be little to lose. However, I would argue that if there’s even a small chance that doom isn’t inevitable, the focus should be on self-preservation. Imagine getting hit by a truck just years or decades before the birth of superintelligence.
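To make that reasoning concrete, here is a minimal expected-value sketch in Python. Every number in it is made up purely for illustration; they are assumptions, not claims about the actual odds:

```python
# Illustrative only: all probabilities and payoffs below are arbitrary assumptions,
# chosen to show the shape of the argument, not to estimate real-world odds.

p_doom = 0.8                 # assumed probability the transition goes badly for everyone
p_survive_careful = 0.98     # assumed chance you personally reach the transition if you avoid needless risks
p_survive_reckless = 0.90    # assumed chance you reach it if you don't

value_if_flourishing = 1000  # value of living through a good post-AGI future (arbitrary units)
value_of_risky_fun = 5       # extra enjoyment reckless living buys you now (arbitrary units)
# If doom happens, careful and reckless people share the same outcome, so it adds nothing to either side.

ev_careful = p_survive_careful * (1 - p_doom) * value_if_flourishing
ev_reckless = value_of_risky_fun + p_survive_reckless * (1 - p_doom) * value_if_flourishing

print(f"careful:  {ev_careful:.1f}")   # 196.0
print(f"reckless: {ev_reckless:.1f}")  # 185.0
```

The exact numbers don't matter. The point is that unless your probability of doom is essentially 1, the chance of actually being there for the transition dominates whatever small payoff reckless living buys you now.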
It makes sense to fully embrace your current human experience. Savor love, emotions—positive and negative—and other unique aspects of human existence. Be grateful. Nurture your relationships. Pursue things you intrinsically value. While future advanced AI systems might also have subjective experiences, for now, feeling is something distinctly human.
For better or for worse, no part of the human condition will remain the same after superintelligence. Biological evolution is slow, but technological progress has been exponential. The modern world itself emerged in the blink of an eye. If we survive this transition, superintelligence might bridge the gap between our biological limitations and technological capabilities.
The best approach, in my view, is to fully experience what it means to be human while minimising your risks. Avoid unnecessary dangers—reckless driving, water hazards, falls, excessive sun exposure, and mental health neglect. Look both ways when crossing the street. Focus on becoming as healthy as possible.[2]
This video provides a good summary of how to effectively reduce your risk of death.
Maybe reading science fiction – series like The Culture by Iain Banks – is a good way to prepare for what’s coming.[3] Alternatively, some may prefer to stay grounded in present reality, knowing that the second half of this century might outpace even the wildest sci-fi. In ways we can’t fully predict, the future could be stranger than anything we imagine.
As AI researcher Katja Grace writes:
Holden Karnofsky has described a “call to vigilance” when thinking about the most important century. Similarly, I believe we should all adopt this mindset when considering the personal implications of AGI. The right reaction isn’t to dismiss this as hype or far-off sci-fi. Instead, it’s the realisation: “…oh… wow… I don’t know what to say, and I think I might vomit… I need to sit down and process this.”
To conclude:
Utopia is uncertain, doom is uncertain, but radical, unimaginable change is not.
We stand at the threshold of possibly the most significant transition in the history of intelligence on Earth—and maybe our corner of the universe.
Each of us must find our own way to live meaningfully in the face of such uncertainty, possibility, and responsibility.
We should all live more intentionally and understand the gravity of the situation we're in.
It’s worth taking the time to seriously and viscerally consider how to live in the years or decades leading up to the dawn of superintelligence.
For the purpose of this post, we’ll abide by the definition in DeepMind’s paper “Levels of AGI for Operationalizing Progress on the Path to AGI”.
Maybe you could argue getting maximally healthy isn’t *that* important, since in a best-case scenario for superintelligence, ~all diseases would be solved. But it still probably makes sense to hedge against longer timelines and stay as healthy as possible.
Dario Amodei, Demis Hassabis, and Elon Musk are all fans.