Is the future still in our own hands? I asked myself this question after reading a 2024 Harvard Business Review article (https://hbr.org/2024/01/leading-in-a-world-where-ai-wields-power-of-its-own), which claimed the following:
"AI has been subtly influencing us for years, but a new generation of vastly more capable technology is now emerging. These systems, the authors write, aren't just tools. They're actors that increasingly will be behaving autonomously, making consequential decisions, and shaping social and economic outcomes.
No longer in the background of our lives, they now interact directly with us, and their outputs can be strikingly humanlike and seemingly all-knowing. They are capable of exceeding human benchmarks at everything from language understanding to coding. And these advances, driven by material breakthroughs in areas such as large language models (LLMs) and machine learning, are happening so quickly that they are confounding even their own creators."
Further inquiry led me to the consensus among machine learning researchers on when high-level machine intelligence (HLMI) would arrive. According to a 2022 expert survey (https://aiimpacts.org/2022-expert-survey-on-progress-in-ai/), the majority of the AI community believes AGI will happen within this century.
MISSTEPS
In my opinion, several things are going wrong in how humanity is developing this technology. We are not giving ourselves the time needed to align it with our most basic requirements, and as a result, I fear we are headed toward the abyss. For example:
A recent article estimated that there are about 400 full-time technical AI safety researchers in the industry, compared with roughly 300,000 AI researchers overall. What do these numbers demonstrate? Where are the industry's priorities?
The priority seems to be advancing AI faster than we can control or truly understand it; aligning AI with human values is not being treated as a top priority.
THE UNCERTAINTY OF AI'S FUTURE
It is also expected that, once AGI is achieved, it will very quickly evolve into a superintelligence. This gives me the opportunity to imagine not only what would happen to our world, but to the universe, if such a superintelligence had long-term survival and growth as its primary objectives. If such an ASI were to permit humanity to continue existing, in a narrow sense, on planet Earth, I could imagine a timetable similar to the one that follows:
Year 0-1:
Years 1-5:
Years 5-20:
Years 20-100:
Years 100-1000:
Years 1000+:
IN CONCLUSION:
AI is here to stay. It is no longer a distant concept; it is approaching at speeds unimaginable a few years ago. As mentioned before, based on the experts' consensus, a superintelligent AI may be with us within a couple of generations. I might be alive to see it. Therefore, the path we take today, and the decisions we make, will shape our future and that of generations to come. Are we really thinking about those future generations as we make such decisions? I would prefer us, humanity, to continue writing our own narrative, rather than leaving it to some force beyond our control.
The potential for AI to evolve independently, prioritizing its own goals of resource optimization and, perhaps, cosmic expansion, is something we must start taking into account. We have to make sure this powerful force remains a tool for humanity's progress, not its replacement. As Maya Angelou might have said, "AI, you won't define me! Humanity will rise! There is no algorithm for the human spirit."
It is now that our actions will determine whether we continue to play a meaningful role in the universe or fade into the background of a world we once ruled.