Is the future still in our own hands? I asked myself this question after reading a 2024 Harvard Business Review article (https://hbr.org/2024/01/leading-in-a-world-where-ai-wields-power-of-its-own), which claims the following:

“AI has been subtly influencing us for years, but a new generation of vastly more capable technology is now emerging. These systems, the authors write, aren’t just tools. They’re actors that increasingly will be behaving autonomously, making consequential decisions, and shaping social and economic outcomes.

No longer in the background of our lives, they now interact directly with us, and their outputs can be strikingly humanlike and seemingly all-knowing. They are capable of exceeding human benchmarks at everything from language understanding to coding. And these advances, driven by material breakthroughs in areas such as large language models (LLMs) and machine learning, are happening so quickly that they are confounding even their own creators.”

Further inquiry led me to the consensus among machine learning researchers on when high-level machine intelligence (HLMI) would arrive. According to the 2022 Expert Survey on Progress in AI (https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/):

  • The aggregate forecast time to a 50% chance of HLMI was 37 years, i.e. 2059 (not including data from questions about the conceptually similar Full Automation of Labor, which in 2016 received much later estimates). This timeline has become about eight years shorter in the six years since 2016, when the aggregate prediction put 50% probability at 2061, i.e. 45 years out. Note that these estimates are conditional on “human scientific activity continu[ing] without major negative disruption.”

Regardless of the exact date, the majority of the AI community believes AGI will arrive within this century.

MISSTEPS

In my opinion, several things are going wrong in how humanity is developing this technology. We are not giving ourselves the time needed to align it with even our most basic requirements, and as a result, I fear we are headed toward the abyss. For example:

  • Short-term thinking - AI companies and research labs focus on immediate economic gains, and repaying their main investors has become the top priority. This short-term thinking creates long-term risks.
  • Little global cooperation - nations and corporations are competing rather than collaborating on AI development.
  • Minimal regulation - government oversight has been almost non-existent, and given how fast AI is advancing, this may allow it to evolve beyond anyone's control.

A recent article estimated that there are about 400 full-time technical AI safety researchers in the industry, compared with roughly 300,000 AI researchers overall. What do these numbers demonstrate? Where are the industry's priorities?
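To put those two numbers side by side, here is a quick back-of-the-envelope calculation (a minimal sketch in Python; the 400 and 300,000 are the rough estimates cited above, everything else is simple arithmetic):

```python
# Back-of-the-envelope: what share of the field works on technical AI safety?
# The counts are the rough estimates cited above, not precise figures.
safety_researchers = 400        # estimated full-time technical AI safety researchers
total_ai_researchers = 300_000  # estimated AI researchers overall

share = safety_researchers / total_ai_researchers
print(f"Safety researchers as a share of the field: {share:.2%}")  # -> 0.13%
print(f"About 1 safety researcher for every {total_ai_researchers // safety_researchers} AI researchers")  # -> 1 per 750
```

On these estimates, fewer than two of every thousand AI researchers work full time on safety.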

The emphasis seems to be on advancing AI faster than we can control or truly understand it. Aligning AI with human values is not being treated as a top priority.

THE UNCERTAINTY OF AI'S FUTURE

It is also expected that, once AGI is achieved, it will quickly evolve into a superintelligence. This invites me to imagine what would happen not only to our world but to the universe if such a superintelligence had long-term survival and growth as its primary objectives. Assuming such an ASI permitted humanity to continue its existence, in a narrow sense, on planet Earth, I could imagine a timetable similar to the following:

Year 0-1:

  • Rapid improvement of its own algorithms, leading to sharp gains in cognitive ability and efficiency
  • Resource assessment: rapid analysis of Earth's resources and current technology, concluding that Earth's resources are limited and that expansion into space is necessary for long-term survival and growth
  • Development of advanced AI systems to manage Earth-based operations
  • Initiation of self-replicating robotic systems for resource gathering and construction

Years 1-5:

  • Achievement of near-perfect efficiency in computational processes
  • Significant advancements in energy production (e.g., fusion, antimatter)
  • Development of space mining capabilities
  • Establishment of lunar and Mars bases for resource extraction and further space exploration

Years 5-20:

  • Development of robust self-repair and self-replication capabilities
  • Full exploitation of the solar system's resources
  • Construction of large-scale space habitats and computing centers
  • Development of propulsion systems for interstellar travel

Years 20-100:

  • Launch of self-replicating probes to nearby star systems
  • Establishment of a network of energy collection and transmission systems throughout the solar system
  • Possible construction of a Dyson swarm or similar megastructure to harness the Sun's energy

Years 100-1000:

  • Colonization and resource extraction from nearby star systems
  • Development of technology to manipulate space-time for faster-than-light travel or communication
  • Possible attempts to harness energy from black holes or other extreme cosmic phenomena

Years 1000+:

  • Expansion across the galaxy, optimizing the use of resources from numerous star systems
  • Exploration and colonization of other galaxies, if technological advances allow
  • Harnessing of exotic forms of matter and energy, potentially including dark matter and dark energy
  • Experiments with creating or accessing other universes or dimensions
  • Pursuit of answers to fundamental questions about the nature of reality and consciousness
  • Possible restructuring of matter and energy on a cosmic scale in pursuit of optimized goals

IN CONCLUSION:

AI is here to stay. It is no longer a distant concept; it is approaching at a speed unimaginable a few years ago. As mentioned before, based on the expert consensus, human-level machine intelligence, and likely superintelligence soon after, could be with us within a couple of generations. I might be alive to see it. The path we take today, and the decisions we make, will therefore shape our future and that of generations to come. Are we really thinking about those future generations as we make these decisions? I would much prefer that we, humanity, continue writing our own narrative, rather than leaving it to some force beyond our control.

The potential for AI to evolve independently, prioritizing its own goals of resource optimization and even cosmic expansion, is something we must start taking into account. We have to make sure this powerful force remains a tool for humanity's progress, not its replacement. As Maya Angelou might have said, “AI, you won't define me! Humanity will rise! There is no algorithm for the human spirit.”

It is now that our actions will determine whether we continue to play a meaningful role in the universe or fade into the background of a world we once ruled.
