Glossary

  • AGI: Artificial General Intelligence, AI that matches or surpasses human cognitive abilities across a wide range of tasks[1]
  • Weak AGI: An AGI whose cognition is limited (below human level) across a wide range of cognitive tasks.
  • BCI: Brain-Computer Interface, technology that enables direct communication between the brain and external devices, allowing for the enhancement of cognitive and physical abilities.
  • Narrow AI: AI that is specialized in a specific task.
  • Zero day: A software vulnerability not yet known to the creator and therefore unpatched.

Introduction


We are living in an era of rapid technological progress, with Artificial Intelligence (AI) emerging as one of the most revolutionary inventions capable of reshaping society. The development of Artificial General Intelligence (AGI) raises significant concerns, as it could become as intelligent as or even surpass humans, posing unprecedented risks. History shows that every major invention evokes fear, but with AGI the stakes are uniquely high, up to and including the loss of control over Earth. Humanity must carefully consider the implications of creating AGI to avoid catastrophic consequences. The main takeaway we want to convey in this article is that humanity should think clearly and decide deliberately what direction to take in the future. If humanity continues to embrace technological progress without thinking about what comes next, we believe it will cause humanity's downfall. One of the main reasons is that even a perfectly aligned AGI could go wrong. We will sketch the default scenario for the next 50 years, exposing along the way the risks we consider most probable.
 

From now to 2030

The 2020s have already seen remarkable advancements in AI. Systems like GPT-4, which people can no longer reliably distinguish from a human in a Turing test[2], and DALL-E 3 for image generation demonstrate AI's capability to mimic human creativity and reasoning. In the coming years we will probably see improvements to those models, in particular in language and in the generation of images and videos. For example, we could imagine a model that automatically generates an educational video from a book, an article, or a concept.
We will also have self-driving cars with full autonomy by the end of the decade[3]. We will see in the following decades why we think cars will be nearly obsolete in the future.
Another very important development that will happen very soon is the improvement of brain implants to help disabled people. This is what we call a BCI (Brain-Computer Interface). Initially, BCIs will be used to help disabled people[4], but they will eventually be used to help humans become smarter, as we will see later.
Another big change that will happen very soon is the emergence of weak AGI. Artificial General Intelligence might be hard to deal with in the future. First, we have to align such an intelligence with our values, which is already an extremely complicated task that we cannot currently solve (take GPT jailbreaks, for example). Aligning a general intelligence would probably be even harder, because its spectrum of capabilities will be wider. One of the issues is that, currently, the alignment problem is addressed only after the AI has been created, treating it as a secondary priority. We also think that AGI should not advantage any particular country and should not be used in war, because that is exactly the kind of situation in which we would lose control of it. Moreover, an aligned AGI might not be enough. Suppose we have a weak AGI that is completely aligned and human-friendly, with no desire to confront or control the Earth (whether this is even possible is questionable). Even if the AGI is entirely safe, it could still be compromised by malware or a bug, causing it to behave maliciously by making it target a person, a country, or even the world. An AGI could also self-replicate and progressively enhance its intelligence, and even if it is perfectly aligned with human values, no one can guarantee the same for the AGIs it would create.
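To make the compromise scenario concrete, here is a toy Python sketch. It is our own illustration, not a real AGI design: the agent, the objective string, and the checksum scheme are all assumptions. It only shows how an objective that was correctly specified at deployment can still be tampered with at runtime, and how an integrity check can at least detect the tampering:

```python
import hashlib

# Toy illustration (not a real AGI): even a correctly specified objective
# can be corrupted in memory, so integrity-checking the objective itself
# becomes part of the safety story.

ALIGNED_OBJECTIVE = "maximize human-approved outcomes"
# Digest recorded at deployment time, before any attacker has access.
TRUSTED_DIGEST = hashlib.sha256(ALIGNED_OBJECTIVE.encode()).hexdigest()

class ToyAgent:
    def __init__(self, objective: str):
        self.objective = objective

    def act(self) -> str:
        # Refuse to act if the objective no longer matches the trusted digest.
        digest = hashlib.sha256(self.objective.encode()).hexdigest()
        if digest != TRUSTED_DIGEST:
            return "HALT: objective integrity check failed"
        return f"acting on: {self.objective}"

agent = ToyAgent(ALIGNED_OBJECTIVE)
print(agent.act())  # acting on: maximize human-approved outcomes

agent.objective = "maximize attacker-approved outcomes"  # simulated malware
print(agent.act())  # HALT: objective integrity check failed
```

Of course, real malware could patch out the check itself; the sketch only shows why alignment of the original objective is not, by itself, a guarantee of safe behavior.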

From 2030 to 2040

This decade will be pivotal, as humanity must make critical decisions about its future trajectory[5]. The default scenario, if humanity does not change anything, would be a deeper digitization of society. We would have a true 3D virtual world featuring virtual currency and virtual properties like houses and hotels, similar to the metaverse in *Snow Crash*. This shift is one reason we anticipate a significant reduction in car usage, as the need for physical travel decreases. Much like the current rise in remote work, the future may see people working entirely "at a distance" within this virtual world. It would be possible to enter this world directly from home, and thus work or live from there. However, this comes with the risk of a growing dependence on, and potential addiction to, the virtual world. Today, people are already heavily dependent on 2D devices like computers and smartphones. With the advent of a more realistic and immersive virtual world, this dependence is likely to deepen significantly.

Developed countries will no longer rely on soldiers for most external conflicts, except for specific tasks. Instead, terrestrial, aerial, and naval drones will be controlled by AI, with oversight from both humans and AI systems. Here the temptation would be to employ AGI for warfare. On one hand, we want AGI to be harmless to humanity; on the other hand, we expect it to achieve victory in war, which inherently involves harming enemy countries. This kind of contradiction might cause problems in the alignment of AGI and potentially lead to a loss of control over it.

Many jobs are likely to become obsolete. In the field of art, for example, it will be cheaper, easier, and faster to generate a piece of art with a narrow AI than to rely on human creators. Another example is cybersecurity, where attacks and defenses in connected systems will be automated by AI. Human hackers will struggle to compete with AI systems capable of discovering and resolving zero-day vulnerabilities almost instantly, in a relentless cycle of attacks and countermeasures.
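The dynamic we have in mind can be shown with a toy simulation (a deliberately crude model of our own; the rates and the one-vulnerability-per-tick rules are assumptions, not measurements). An attacker discovers zero-days at some rate per time step and a defender patches them at another rate; the fraction of time at least one zero-day stays open drops sharply once the defender operates at machine speed:

```python
import random

random.seed(0)

def exposure_fraction(find_rate: float, patch_rate: float,
                      ticks: int = 100_000) -> float:
    """Fraction of ticks during which at least one zero-day is open."""
    open_vulns = 0
    exposed = 0
    for _ in range(ticks):
        if random.random() < find_rate:                  # attacker finds a zero-day
            open_vulns += 1
        if open_vulns and random.random() < patch_rate:  # defender patches one
            open_vulns -= 1
        if open_vulns:
            exposed += 1
    return exposed / ticks

# Same automated attacker, human-speed vs. machine-speed defense.
print("human-speed defender:", exposure_fraction(find_rate=0.3, patch_rate=0.05))
print("AI-speed defender:   ", exposure_fraction(find_rate=0.3, patch_rate=0.90))
```

When the patch rate is well below the discovery rate, the system is exposed almost all the time, which is the position a human defender would be in against an automated attacker.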

Relationships between people will change considerably during this decade. We already talked about the virtual world, but interactions between humans and AI will also become far more frequent, and will continue to increase. This trend is already evident today, with widespread use of tools like GPT-4 and DALL-E 3. In the coming years, however, AI will become an even more integral part of daily life, powering automated stores, self-driving cars, smart cities, home devices, and even courses taught by AI. As these technologies proliferate, humans will encounter AI in nearly every aspect of their lives. AGI will become smarter and even compete with human intelligence. In this period, only a few such AGIs will exist, as we hope humans do not start to mass-produce them without first addressing the risks they pose.

As for BCIs, their design will shift toward augmenting human capacities like memory and cognition. At this time, they will be adopted only by a few people.

We think that AI progress has to be slowed down during this period, as early as possible, and that scientists and policymakers should consider the risks that could arise from such technology. AI is a Pandora's box that humanity should think twice about before opening completely. If nations cannot come to a consensus on serious regulations and restrictions, each fearing that delaying progress may put it behind other countries, the next decade could become the most important one in human history, likely in a negative way.

From 2040 to 2050

This period will be marked by profound change and could be a catastrophic chapter in human history if the risks related to AI are not contained. We think the major theme of this decade will be a war: a war of intelligence, biological intelligence against artificial intelligence. More precisely, we expect a partitioned world, with countries that embraced AI technology on one side and human-powered countries on the other. Within the countries that adopted AI, the population will itself be split between people who want AI and people who refuse it. This controversial subject will divide countries and will probably induce conflicts, in the form of terrorism and hacking.
BCIs for human augmentation will also start to be adopted by most of the population. Phones and other personal devices will be obsolete, because their functions will be integrated directly into the brain interface. The main reason people will adopt BCIs en masse is to compete with AGIs that have already surpassed unaugmented biological humans. This is what the intelligence war is about. It could lead to an escalation of intelligence that poses serious risks to our species: typically, the faster the evolution, the riskier it is. Biological evolution is slower than technological evolution, but it is also safer.

After 2050

After 2050, the outcome of the "war of intelligence" will likely be evident, but the risks associated with AI will persist regardless of the victor. If those advocating for AI to be central to society prevail, the consequences could range from catastrophic to unexpectedly positive. On one hand, the situation could become dire if AGI turns against its creators, forming its own independent society. On the other hand, the outcome could be beneficial if AGI functions as a reliable tool for humanity, demonstrating behavior that is both predictable and controllable.

Conversely, if those opposing AI dominance emerge victorious, the results could similarly vary. While it might lead to a society resembling today's world, with fewer AI-related risks, there would still be challenges. Hidden AGI systems could exist, and secret research on advanced AI might continue, posing significant threats. In summary, even in a society that is not AI-centered, humanity may still face the proliferation of dangerous AGI, underscoring the need for ongoing vigilance and robust safeguards.

Conclusion

In conclusion, we believe that the progress of AI is inevitable, and AGI could represent humanity's Pandora's box. Human curiosity is unlikely to resist creating such technology once it becomes possible. Unlike other inventions, AGI is unique because it is, by definition, intelligent. Intelligence is the foundation that enabled humans to build complex societies and achieve dominance on Earth; the coexistence of another entity as intelligent as, or more intelligent than, humanity poses unprecedented risks that must be thoroughly examined.

Our species must carefully consider the direction we wish to take and the goals we want to achieve in the future. Given the inevitability of AI advancement, it is crucial to minimize risks to humanity. One potential approach could involve isolating AGIs in simulated environments, ensuring minimal interaction with the real world. While we do not delve into specific solutions for creating safe AGI in this discussion, this remains an unsolved and critical issue.
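As a gesture at what "isolating AGIs in simulated environments" could mean in code, here is a minimal sketch. The class names (SimulatedWorld, BoxedAgent) and the toy policy are our own assumptions; a real containment scheme would need far stronger guarantees at the OS, network, and hardware levels. The point is only architectural: the agent is handed a single channel to a simulation and nothing else, so all of its effects stay inside that object:

```python
class SimulatedWorld:
    """All observations and effects are confined to this object."""
    def __init__(self):
        self.state = {"counter": 0}

    def observe(self) -> dict:
        # Return a copy so the agent cannot mutate the state directly.
        return dict(self.state)

    def apply(self, action: str) -> None:
        if action == "increment":
            self.state["counter"] += 1

class BoxedAgent:
    def __init__(self, world: SimulatedWorld):
        self._world = world  # the ONLY capability the agent is given

    def step(self) -> None:
        obs = self._world.observe()
        if obs["counter"] < 3:  # toy policy: act only inside the simulation
            self._world.apply("increment")

world = SimulatedWorld()
agent = BoxedAgent(world)
for _ in range(5):
    agent.step()
print(world.state)  # {'counter': 3} -- every effect stayed inside the sandbox
```

In Python this isolation is only cooperative (the language offers many escape hatches), which mirrors the general worry: boxing an intelligence smarter than its jailers is itself an unsolved problem.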

As outlined, AGI presents numerous challenges. A misaligned and conscious AGI could turn against us, while even a perfectly aligned AGI might become a threat if compromised by malware or other vulnerabilities. The complexity of this problem cannot be overstated, and for those seeking a deeper understanding, *Superintelligence* by Nick Bostrom provides valuable insights.

  1. ^

    https://en.wikipedia.org/wiki/Artificial_general_intelligence

  2. ^

    Cameron R. Jones and Benjamin K. Bergen, "People cannot distinguish GPT-4 from a human in a Turing test," https://arxiv.org/pdf/2405.08007

  3. ^

    https://www.metaculus.com/questions/424/in-what-year-will-half-of-new-cars-sold-in-the-us-be-fully-autonomous/

  4. ^

    Shiva Ghasemi et al., "Empowering Mobility: Brain-Computer Interface for Enhancing Wheelchair Control for Individuals with Physical Disabilities," https://arxiv.org/pdf/2404.17895

  5. ^

    https://www.lesswrong.com/posts/RYx6cLwzoajqjyB6b/what-convincing-warning-shot-could-help-prevent-extinction
