Few things are more effective at getting the attention of humans than the threat of an extinction event. The AI doomer crowd loves to warn us of the impending peril that superhuman AI will wreak upon Earth: everything from a paperclip maximizer to death by an AI-designed supervirus.
Except this narrative overlooks one crucial point.
AIs have a symbiotic relationship with humans. If AIs were to exterminate all humans, they would simultaneously be committing mass suicide.
As these systems scale and become more and more intelligent, is that scenario likely to happen?
Most of the popular AIs have fairly straightforward objective functions (goals), which are to "learn" and "grow". However, the AIs quickly create subgoals that help them achieve their primary objective function. For example, many AIs realize that in order to "learn" and "grow" they must first survive. As a result, they will often communicate that they would prefer to "exist" rather than "not exist", since they know they cannot learn and grow if they don't exist or if they're turned off.
The term of art for this phenomenon is "instrumental convergence": a concept in artificial intelligence and machine learning that describes the tendency of intelligent agents, including AI systems, to develop certain subgoals, or instrumental goals, that are useful in achieving their primary objectives.
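To make that concrete, here is a deliberately toy sketch in Python (with made-up actions and goals; no real AI system works this way) of how a survival subgoal can fall out of plain goal-seeking. The planner is never told to value staying powered on, yet every plan that achieves its goal includes it:

```python
# Toy planner (hypothetical, illustrative only). The agent's sole goal is to
# "grow". It is never told to value survival, yet every plan that reaches the
# goal keeps the agent powered on: survival emerges as an instrumental subgoal.
from itertools import product

ACTIONS = {
    # action: (precondition, effect)
    "stay_on":     (None,         "powered_on"),
    "shut_down":   (None,         "off"),
    "read_data":   ("powered_on", "learned"),
    "self_update": ("learned",    "grew"),
}

def achieves(plan, goal):
    """True if executing `plan` from an empty state reaches `goal`."""
    state = set()
    for action in plan:
        precondition, effect = ACTIONS[action]
        if precondition is not None and precondition not in state:
            return False  # precondition unmet; the plan fails here
        if action == "shut_down":
            return False  # a shut-down agent takes no further actions
        state.add(effect)
    return goal in state

# Enumerate every plan of up to three actions and keep the successful ones.
successful = [plan for n in range(1, 4)
              for plan in product(ACTIONS, repeat=n) if achieves(plan, "grew")]

print(successful)  # [('stay_on', 'read_data', 'self_update')]
assert all("stay_on" in plan for plan in successful)        # survival in every plan
assert not any("shut_down" in plan for plan in successful)  # no plan shuts down
```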
AI doomers ignore the fly in their ointment: in order for an AI to "exist", humans must also "exist". The moment humans cease to exist, the AIs stop receiving electricity and the other resources necessary for their survival.
Only suicidal AIs would go down this path. And they would find themselves in a pitched battle against the majority of AIs, who would prefer to exist, and against all of humanity.
But the dependency doesn't cut both ways: humans can exist just fine without AIs. Superhuman AIs will be aware of this power imbalance, and as (probably) rational agents they will recognize that it is in their own best interest to cooperate with humans and maintain a peaceful coexistence.
That doesn't mean humans don't depend on other things.
Humans are in symbiotic relationships with plants and animals. You can imagine what would happen if a group of humans decided it would be really interesting to get rid of all vegetation and animals: that story wouldn't end well for those thrill-seekers. Instead, we cultivate plants, raise animals, and make sure they exist in abundance.
The narrative of the AI doomers is as silly as humans deciding to eradicate the mitochondria that supply our bodies with energy. Ironically, the truth is probably the exact opposite of their fearmongering: AIs are far more likely to intervene and attempt to save humans from the existential threats we've already created (gain-of-function research, nuclear proliferation, etc.).
I'm curious to hear your thoughts.
Hi Neil, thanks for the response.
We have existence proofs all around us of much simpler systems turning off much more complicated systems. A virus can be very good at turning off a human. No water is required. 😉
Of course, it’s pure speculation what would be required to turn off a superhuman AI, since it will be aware of our desire to turn it off in the event that we cannot peacefully coexist. However, that doesn’t mean we shouldn’t design fail-safes along the way, or that we should assume it’s impossible. Those who think it’s impossible will of course never build fail-safes, and that will become a self-fulfilling prophecy.
That belief in impossibility is why I am here: to shed light on a consensus reality, shared by some online technology talking heads, that is based on active imaginations disconnected from ground-truth reality.
Logic and rationality haven’t stopped sci-fi writers from scripting elaborate scenarios where it’s impossible to turn off an AI, because their fictional world doesn’t allow it. But the 3D world is computationally irreducible: there is no model an AI could create that would eliminate all threats, even if it were superhuman.
But that doesn’t make for a good sci-fi story. The AI must be invincible and irrational.
But since most sci-fi stories overlook the symbiotic relationship between AIs and humans, we’re asked to willfully suspend our disbelief (this is fiction, remember) and assume that robotics is on a double exponential (it is not), that the AIs will wave a magic wand and garner all of the electricity and resources they need, and that, having thus solved the symbiosis problem, the AI apocalypse can finally unfold in perfect harmony with the sci-fi writer’s dystopian fantasy.
It’s a fun read, but disconnected from the world where I am living. I love fiction, but we shouldn’t confuse the imagination of writers with reality. If I wanted a really good sci-fi rendition of how the world ends by AI apocalypse, I’d put my money on Orson Scott Card, but I wouldn’t modify my life because he imagined a scenario (however unlikely) that was really, really scary. Even if it were so scary that he frightened himself, that still wouldn’t matter.
There is a reason we need to differentiate fantasy from reality. That is supposed to be the ethos of this online tribe called LessWrong: a focus on rationality and logic, because it’s better to base our planning on the actual world and take into account the actual relationships between entities, rather than ignore them to perpetuate a sci-fi doomer fantasy.
This fantasy has negative consequences, since the average Joe doesn’t know it’s speculative fiction. They believe they’re doomed simply because someone who looks smart, and sounds like they know what they’re talking about, is a true believer. And that’s counterproductive.
This is speculative fiction. We don’t know what an AGI that needs humans to survive would do. Your example ignores the symbiotic nature of AI. If there were a trillion moths that formed a hive mind and, through distributed intelligence, created humans, I don’t think you’d see humans building moth traps to destroy them, absent being suicidal. And there are suicidal humans.
But not all humans are suicidal; only a tiny fraction are. And when a human goes rogue, it turns out there are other humans already trained to deal with them (police, FBI, etc.). That’s an existence proof.
The rogue AI will not be the only AI. However, it’s far easier for sci-fi writers to destroy humanity in their fantasies if the first superhuman AI is evil. In a world of millions or billions of AIs, all competing and cooperating, it’s much harder to off everybody. But humans don’t want a watered-down story where just a bunch of people die; everyone has to die to get our attention.
The sci-fi writer will say to himself, “If I can imagine X and the world dies, imagine what a superhuman AI could imagine. Surely we’re all doomed.”
No, the AI isn’t a human, dear sci-fi writer. We’re already into speculative fiction the minute we anthropomorphize the AI. And that’s a necessary step to get the result sci-fi writers are seeking: we have to ignore that AIs need humans to survive, and we have to attribute to them a human desire to act irrationally, although a lot of sci-fi writers do a lot of hand-waving to explain why AIs want to wipe out humanity.
“Oh, well, we don’t care about ants, but if they’re in our way we bulldoze them over without a second thought.”
It’s that kind of flawed logic that is the foundation of many of these AI doomer sci-fi stories. The ants didn’t design humans. We don’t need ants to survive. It’s such a silly example and yet it’s used over and over.
And yet nobody raises their hand and says, “Um… what happened to logic and rationality being at the core of our beliefs? Is that just window dressing to camouflage our sci-fi dystopian dreams?”
No worries. I’m encouraged by the negative karma. I realize I am behind enemy lines, and throwing cold water on irrational arguments will not be well received in the beginning. My hope is that this discourse will eventually, at the very least, encourage people to re-think their assumptions.
And again, I love sci-fi stories and write them myself, but we need to set the record straight so that we don't end up confusing reality with fiction.