Most of the object-level stories about how misaligned AI goes wrong involve either nanotechnology or bio-risk or both. Certainly I can (and have, and will again) tell a story about AI x-risk that doesn't involve anything at the molecular level. A sufficient amount of (macroscale) robotics would be enough to end humanity. But the typical story that we hear, particularly from EY, involves specifically nanotechnology. So let me ask a Robin Hanson-style question: Why not try to constrain wetlabs instead of AI? By "wetlabs" I mean any capability involving DNA, molecular biology or nanotechnology.
Some arguments:
1. Governments around the world are already in the business of regulating all kinds of chemistry, such as the production of legal and illegal drugs.
2. Governments (at least in the West) are not yet in the business of regulating information technology, and basically nobody thinks they will do a good job of it.
3. The pandemic has set the stage for new thinking around regulating wetlabs, especially now that the lab leak hypothesis is considered mainstream.
4. The cat might already be out of the bag with regard to AI. I'm referring to the Alpaca and Llama models. Information is hard to constrain.
5. "You can't just pay someone over the internet to print any DNA/chemical you want" seems like a reasonable law. In fact it's somewhat surprising that it's not already a law. By comparison, "You can't just run arbitrary software on your own computer without government permission" would be an extraordinary social change and is well outside the Overton window.
6. Something about pivotal acts which... I probably shouldn't even go there.
Governments are indeed already in the business of "regulating" illegal drugs, and have been enforcing that heavily worldwide for about 100 years, with plenty of large pockets of similar enforcement in various places before that. Yet the drugs are readily available pretty much everywhere, in pretty much any quantity you can pay for (admittedly it is a bit harder in some of the most extreme police states). And the prices aren't unreasonably high.
I'm not saying you can effectively stop people from building whatever AI they want, either, because I don't believe you can. Furthermore, I believe that nearly all approaches to trying are probably dangerous wastes of time. The ones I've actually heard have all been, anyway.
But you still definitely can't keep a "rogue superintelligence", with its witting or unwitting human pawns, from doing chemistry or biology. A credible chemistry or biology lab actually takes less infrastructure than training a large AI model does. It's less conspicuous, too. If some truly dangerous AI is actively planning to Destroy All Hyoomons, I think we can assume it's not going to follow the law just because you ask it to. You have to be able to enforce it. And I don't see how you could even begin to approximate enforcement good enough to slow things down.
I don't think I buy any of the assertions in your point 5, by the way. And I just generally don't think that you'd get wide agreement on any set of rules about AI or labs before it was too late to matter. Not even if they'd be effective, which as I said I don't think they would be.