Most of the object-level stories about how misaligned AI goes wrong involve nanotechnology, bio-risk, or both. Certainly I can (and have, and will again) tell a story about AI x-risk that doesn't involve anything at the molecular level; enough (macroscale) robotics would suffice to end humanity. But the typical story we hear, particularly from EY, specifically involves nanotechnology. So let me ask a Robin Hanson-style question: Why not try to constrain wetlabs instead of AI? By "wetlabs" I mean any capability involving DNA, molecular biology, or nanotechnology.
Some arguments:
- Governments around the world are already in the business of regulating all kinds of chemistry, such as the production of legal and illegal drugs.
- Governments (at least in the West) are not yet in the business of regulating information technology, and basically nobody thinks they will do a good job of it.
- The pandemic has set the stage for new thinking around regulating wetlabs, especially now that the lab leak hypothesis is considered mainstream.
- The cat might already be out of the bag with regard to AI. I'm referring to the Alpaca and Llama models. Information is hard to constrain.
- "You can't just pay someone over the internet to print any DNA/chemical you want" seems like a reasonable law. In fact it's somewhat surprising that it's not already a law. By comparison, "You can't just run arbitrary software on your own computer without government permission" would be an extraordinary social change and is well outside the Overton window.
- Something about pivotal acts which... I probably shouldn't even go there.
In fairness, "biosecurity" is perhaps the #2 longtermist cause area in effective-altruist circles. I'm not sure how much of that emphasis is secretly motivated by concerns about AI unleashing super-smallpox (or nanobots), versus the relatively normal worry that some malevolent group of ordinary humans might do so. But regardless of motivation, I'd expect that almost all longtermist biosecurity work (which tends to focus on worst-case GCBRs) is helpful for both human- and AI-induced scenarios.
It would be interesting to consider other potential "Swiss cheese" approaches to patching humanity's most vulnerable attack surfaces.
I agree with @shminux that these hacky patches would be worth little in the face of a truly superintelligent AI. So, eventually, the more central problems of alignment and safe deployment will have to be solved. But along the way, some of these approaches might buy crucial time on our way to solving the core problems -- or at least help us die with a little more dignity.