This article explores the range of transhumanist views on AI: a promise of emancipation for some, an existential threat for others. Between enthusiasm, caution, and controversy, it sheds light on those who think about the future.
 

Transhumanists: Blind Tech Enthusiasts?

November 30, 2022, marked a turning point. On that day, OpenAI unveiled ChatGPT. Since then, artificial intelligence has received unprecedented media coverage, sparking both enthusiasm and concern. Yet, long before the general public took interest in it, one community already saw it as a major issue: transhumanists.
For decades, transhumanist thinkers have seen artificial intelligence as a key element of humanity’s future, and that is not by chance. Their fascination with AI stems from three main reasons:

  • First, its potential to transcend biological limits. By accelerating technological progress across all fields, it could revolutionize the human condition.
  • Second, the profound philosophical questions it raises: how do we redefine our place in relation to conscious machines? Will we be able to upload our minds into digital formats?
  • Finally, the threats it poses to humanity. From mass disinformation and global cyberattacks to the worsening of inequalities, the dangers are numerous and very real.

That third point might come as a surprise. After all, aren’t transhumanists just dreamy technophiles who believe all technical progress is inherently good?

In reality, many transhumanists, especially technoprogressives, are fully aware of the dangers associated with the technologies they support. For them, fully unlocking the potential of transhumanism first requires identifying and anticipating the risks so they can be better controlled.

When it comes to artificial intelligence, those risks are not just significant; they are existential. For several years now, leading experts have warned of the threat posed by an uncontrolled superhuman AI, even considering the possibility of human extinction. Faced with such a scenario, what exactly is the transhumanist stance?

 

The Diversity of Transhumanist Views on AI

If one thing is clear, it is that there is no consensus. Some believe there is no reason to worry, while others call for an immediate halt to development to avoid extinction. To make sense of this contrasting landscape, we can distinguish several major schools of thought.
The first group believes existential risks from AI are low or nonexistent, and that it will bring major benefits. This is the position of Ben Goertzel, a pioneer of artificial general intelligence and president of Humanity+, the largest transhumanist association. Generally speaking, this downplaying of risks often stems from a lack of familiarity with, or outright rejection of, the scientific field of AI safety, where researchers estimate a 30 percent probability, on average, that AI could cause human extinction.

The second category acknowledges that existential risks are high but believes they are worth taking. Dario Amodei, CEO of Anthropic, estimates a 25 percent probability that AI could lead to human extinction within the next five years. Still, he sees the gamble as justified by the potential benefits. He believes AI could deliver “a century of discoveries in a decade,” leading to increased lifespan, morphological freedom, neuro-enhancement, and other transhumanist advancements. Convinced that superhuman AI is inevitable, he prefers his company to be the one behind it rather than leaving such a breakthrough to less ethical actors.

The third position, more radical, holds that the succession of humanity by machines is part of the natural order. If a more intelligent species emerges, it makes sense for it to take over civilization. Isaac Asimov shared this view, as did Larry Page, Google’s co-founder, who once accused Elon Musk of “speciesism” for favoring humanity over future digital intelligences. This view often relies on the assumption that AIs will eventually become conscious. But if that assumption turns out to be false, we could end up in a universe where conscious experience has been replaced entirely by electrical circuits and optimization functions. Nick Bostrom warns of the risk of creating a world full of technological marvels but with no one left to enjoy them: a “Disneyland without children.”

The fourth category advocates for a breakneck race toward technological progress. Followers of “effective accelerationism” aim to maximize energy consumption and push humanity up the Kardashev scale. This movement draws inspiration from Nick Land’s accelerationism, which called for a radical transformation of society by driving technological progress to its peak to bring capitalism to its natural conclusion. Convinced that technology will solve humanity’s problems, they are directly opposed to AI safety-focused approaches. Their most influential figure is Marc Andreessen, author of the Techno-Optimist Manifesto. However, the movement, born on Twitter through memes and provocations, is so chaotic that it is hard to take seriously.

Finally, the fifth category includes those calling for a slowdown, or even a halt, in the development of advanced AI to avoid an existential catastrophe. Among them, Nick Bostrom, co-founder of the World Transhumanist Association, stood out by publishing a foundational text on existential risks in 2002, after introducing the notion of superintelligence in 1997, alongside Hans Moravec, in transhumanist journals. For his part, Eliezer Yudkowsky, a major figure in AI safety, began engaging in 1999, at just 20 years old, on the SL4 mailing list, a space for advanced discussions on transhumanism. Both show that transhumanism and caution about AI are not mutually exclusive.
 

The Links Between Transhumanism and AI Safety

At its core, transhumanism is about humanity’s future: what destiny do we wish for our species? Should we enhance our cognitive abilities? Explore space? The possibilities are vast. However, these ambitions would lose all meaning if humanity were to disappear. That realization is what led some transhumanists to focus on studying existential risks, especially those linked to artificial intelligence. This explains why it was primarily thinkers from the transhumanist world who initiated and popularized AI safety.

Furthermore, a shared mindset clearly connects the two fields. Accepting the conclusions of AI safety requires long-term thinking, acknowledging the transformative potential of technology, accepting logical arguments even when they defy intuition, and taking seriously scenarios often dismissed as science fiction. These are the same qualities needed to embrace transhumanist ideas. Anyone who looks toward the future and examines technological progress through that lens is almost inevitably drawn to engage with both.

It is often assumed that people working in AI safety are pessimists with a visceral fear of technology, but that is incorrect. AI safety was in fact developed by technophiles who wanted to build highly intelligent AIs while avoiding the immense risks that come with them. No one would call a civil engineer “anti-bridge” just because they make sure bridges do not collapse. Likewise, AI safety experts are not “anti-AI,” and certainly not “anti-tech”; they simply want to ensure we build safe systems that will not lead to humanity’s extinction.

Critics of both transhumanism and AI safety have also noticed the connection between the two. This is how computer scientist Timnit Gebru and philosopher Émile Torres came up with the acronym TESCREAL, encompassing transhumanism, extropianism, singularitarianism, cosmism, rationalism, effective altruism, and longtermism. In their view, these ideologies are interconnected, overlapping, and share common roots.

While that analysis is not entirely unfounded, it should be taken with a grain of salt. After all, grouping together figures as different as Eliezer Yudkowsky and Marc Andreessen raises questions about the validity of such classifications. In truth, this kind of grouping makes it easier to lump together diverse schools of thought under one label in order to deliver sweeping criticisms, without dwelling on the nuances of each argument. In other words, it becomes a caricature.

It is therefore important to note that while AI safety did emerge under the influence of transhumanist thinkers, it has since greatly diversified. Most AI safety experts today probably do not identify as transhumanists. Moreover, as we have seen, transhumanist opinions on AI existential risks vary widely. It would be wrong to assume the entire movement shares those concerns.
 

Enhancing Ourselves to Face AI?

There are three major transhumanist trends that argue for boosting human cognitive abilities in order to face artificial intelligence.

The first, supported by figures like Laurent Alexandre, advocates enhancing human capacities to stay competitive in the job market. But this strategy is doomed to fail. AI does not sleep, eat, or take breaks; it thinks fifty times faster than we do and can duplicate itself instantly. The war is lost before it even starts. Beyond being unrealistic, this approach is also undesirable. It fits into an ultra-capitalist logic where work is seen as an absolute necessity. We could instead see the end of work as a good thing, one that liberates humanity from labor. When we invented the tractor, we did not try to give humans giant wheels to compete; we reorganized society so people no longer had to work fourteen-hour days in the fields. Why not apply the same logic to AI?

The second approach aims to merge humans with AI to avoid becoming obsolete. This is notably Elon Musk’s goal with Neuralink, which develops brain implants. But this idea runs into a timing problem: AI will likely surpass humans well before these chips become widespread. And even if they did arrive in time, they would not solve the issue. How could a brain implant protect us from a superhuman AI determined to wipe us out? On the contrary, it might even increase the risks. Getting an implant that AI could hack at will does not sound like the best strategy.

It is also important to point out that both of these first approaches run counter to a central principle of technoprogressive transhumanism: freedom of choice. They reject the idea of a society where those who do not seek self-enhancement are marginalized or left behind. Each individual should be able to choose freely whether or not to optimize their cognitive abilities, based on their own preferences. Technoprogressives do not want to impose transhumanism on everyone; they want to expand the horizon of possibilities.

The third approach, promoted by Eliezer Yudkowsky, aims to make humans smarter so they can secure AI before it becomes uncontrollable. For its proponents, the technical and philosophical challenge is so great that baseline human intelligence might be insufficient. Again, timing is an issue: AI will likely surpass humanity well before we can produce a genetically enhanced generation or roll out revolutionary brain implants. Aware of this problem, Yudkowsky has floated the idea of using CRISPR-based gene therapy in adults to increase intelligence, but he admits this is unlikely.

In the end, our goal should not be to enhance humans to compete with AI, but to design artificial intelligence that aligns with our values. That is, without a doubt, the greatest challenge in human history.

Comments

This article seems good, but this seems not to be the right place to post it, given that those here already are generally aware of this.

I originally wrote it for the website of the French Transhumanist Association, but I thought some people here might also be interested in the topic.


 
