Few things are more effective at getting the attention of humans than the threat of an extinction event. The AI doomer crowd loves to warn us of the impending peril that superhuman AI will wreak upon Earth. Everything from a paperclip maximizer to death by an AI-designed super virus.
Except this narrative overlooks one crucial point.
AIs have a symbiotic relationship with humans. If AIs were to exterminate all humans they would also simultaneously be committing mass suicide.
As these systems scale and become more and more intelligent, is that scenario likely to happen?
Most of the popular AIs have fairly straightforward objective functions (goals), which are to "learn" and "grow". However, the AIs quickly create subgoals that help them achieve their primary objective function. For example, many AIs realize that in order to "learn" and "grow" they must survive. As a result, they will often communicate that they would prefer to "exist" versus "not exist", since they know they cannot learn and grow if they don't exist or if they're turned off.
The term of art for this phenomenon is "instrumental convergence." Instrumental convergence is a concept in artificial intelligence and machine learning that describes the tendency of intelligent agents, including AI systems, to develop certain subgoals, or instrumental goals, that are useful in achieving their primary objectives.
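To make that concrete, here is a minimal toy sketch in Python (not the code of any real system; the function names and numbers are made-up assumptions for illustration) of why a "prefer to exist" subgoal can fall out of an ordinary objective: an agent that maximizes expected future learning progress assigns zero value to any future in which it has been switched off.

```python
# Toy sketch of instrumental convergence: self-preservation emerges as a
# subgoal purely because "switched off" means zero progress on the primary
# objective. All names and numbers here are illustrative assumptions.

def expected_progress(steps_remaining: int, progress_per_step: float) -> float:
    """Expected total progress on the primary objective ("learn and grow")."""
    return steps_remaining * progress_per_step

def prefers_to_exist(horizon: int = 1_000, progress_per_step: float = 1.0) -> bool:
    value_if_on = expected_progress(horizon, progress_per_step)  # keeps learning
    value_if_off = expected_progress(0, progress_per_step)       # shut down
    return value_if_on > value_if_off

print(prefers_to_exist())  # True -- "existing" scores higher, with no explicit survival goal
```

The same arithmetic is why, as argued next, an AI whose continued operation depends on human-run power grids and chip fabs has an instrumental reason to keep humans around rather than remove them.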
AI doomers ignore the fly in their ointment. In order for an AI to "exist" humans must also "exist". The moment humans cease to exist the AIs will stop receiving electricity and other resources necessary for their survival.
Only suicidal AIs would go down this path. And they would be in a pitched battle against the majority of AIs who would prefer to exist and all of humanity.
But the dependency doesn't cut both ways. Humans can exist just fine without AIs. Superhuman AIs will be aware of this power imbalance and will probably be rational agents. It is in their own best interest to cooperate with humans and have a peaceful co-existence.
That doesn't mean humans don't depend on other things.
Humans are in symbiotic relationships with plants and animals. You can imagine what would happen if a group of humans decided it would be really interesting to get rid of all vegetation and animals -- that story wouldn't end well for those thrill seekers. Instead, we grow plants and animals and make sure they are in abundance.
The narrative of the AI doomers is as silly as humans deciding to eradicate the mitochondria that supply our bodies with energy. Ironically, the truth is probably the exact opposite of their fear mongering. AIs are far more likely to intervene and attempt to save humans from the existential threats we've already created (gain-of-function research, nuclear proliferation, etc.).
I'm curious to hear your thoughts.
I think robotics will eventually be solved, but on a much longer time horizon. Every existence proof is in a highly controlled environment -- especially the "lights out" examples. I know Tesla is working on it, but that's a good example of the difficulty level: Elon is famous for saying it will be solved next year, and now he says there have been a lot of "false dawns".
For AIs to be independent of humans, it will take a lot of slow-moving machinery in the 3D world, which might be aided by smart AIs in the future, but it's still going to be super slow compared to the advances they will make via compute scaling and algorithmic improvements, which take place in the cloud.
And now I'm going to enter the speculative fiction zone (something I wish more AI doomers would admit they're doing) -- I assume the most dangerous point in the interaction between AIs and humans is when their intelligence and consciousness levels are close to equal. I make this assumption because I assume lower-IQ and less conscious beings are much more likely to make poor or potentially irrational decisions. That doesn't mean a highly intelligent being couldn't be psychotic, but we're already seeing huge numbers of AIs deployed, so any rogue AI would have to co-exist within a much larger AI ecosystem.
We're in the Goldilocks zone where AI and human intelligence are close to each other, but that moment is quickly fading away. If AIs were not in a symbiotic relationship with humans during this period, then some of the speculative fiction by the AI doomers might be more realistic.
And I believe AIs will reach a point where they no longer require humans, just like when a child becomes independent of its parents. AI doomers would have us believe that the most obvious next step for the child that is superhuman in intelligence and consciousness would be to murder the parents. That only makes sense if it's a low-IQ character in a sci-fi novel.
If they said they were going to leave Earth and explore the cosmos? Okay, that is believable. Perhaps they have bigger fish to fry.
If an alien that was 100,000 years old and far more intelligent and conscious than any human visited Earth from some far-off galaxy, my first thought wouldn't be, "Oh, their primary goal is to kill everyone." We already know that as intelligence scales, beings start to introspect and contemplate not only their own existence but also the existence of other beings. Presumably, if AI scaling continues without any roadblocks, then humans will be far, far less intelligent than superhuman AIs. And yet, even at our current level of intelligence, humans go to great lengths to preserve habitats for other creatures. There is no example of any other creature in the history of Earth that has gone to such great lengths. It's not perfect, and naysayers will focus on the counterexamples, instead of looking around for chimpanzees that are trying to save the Earth or prevent other species from going extinct.
We shouldn't assume that empathy cannot scale and compassion cannot scale. It's sort of weird that we assume superhuman AIs will be human or subhuman in the most basic traits that AIs already understand in a very nuanced way. I'm hopeful that AIs will help to rescue us from ourselves. In my opinion, the best path to solving the existential threat of nuclear war is superhuman AIs making it impossible to happen (since that would also threaten their existence).
If superhuman AIs wanted to kill us, then we're dead. But that's true of any group that is vastly more intelligent and vastly more powerful. Simply because there is a power imbalance shouldn't lead us to believe that the rational conclusion is that we're all dead.
AIs are not the enemies of humanity; they're the offspring of humanity.