Formerly a software and algorithms guy. I take an interest in economics and political science. Now working on software that makes predictive modeling easier and more widespread.
Isn't this akin to a protocol for securely monitoring private industry's experiments in thermonuclear weapons? It's better than nothing, but when something is dangerous enough, industrial regulation is never strict enough.
Some things are too dangerous to allow private competition in. The only sensible thing to do is to nationalize them and have them run, if at all, exclusively by extremely security-minded government agencies. And even that might not be good enough for AI, because we've never had a technology whose baseline scenario was "kill everyone".
Is this a plausible take?
Thanks for the pointer. I'll hopefully read the linked article in a couple of days.
I start from a point of "no AI for anyone" and then ask "what can we safely allow?". I made a couple of suggestions, where "safely" is understood to mean "safe when treated with great care". You are correct that this definition of "safe" is incompatible with unfettered AI development. But is any approach to powerful AI compatible with unfettered development? Every AI capability we build can be combined with other capabilities, making the whole more powerful and therefore more dangerous.
To keep things safe while still having AI, the answer may be: "an international agency holds most of the world's compute power, so that all AI work is done by submitting experiment requests to the agency, which vets them for safety". Indeed, I don't see how we can allow anyone to do AI development without oversight at all. This centralization is bad, but I don't see how it can be avoided.
Military establishments would probably refuse to subject themselves to this restriction even if we get states to restrict the civilians. I hope I'm wrong on this and that international agreement can be reached and enforced to restrict AI development by national security organizations. Still, it's better to restrict the civilians (and try to convince the militaries to self-regulate) than to restrict nobody.
Is it possible to reach and enforce a global political consensus of "no AI for anyone, ever, at all"? We may need a thermonuclear war for that, and I'm not on board. I think "strictly regulated AI development" is an easier sell (though still a terribly hard one).
I agree that such a restriction is a large economic handicap, but what else can we do? The alternative seems to be praying that someone comes up with an effectively costless and safe approach, so that nobody has to give up anything. Are we getting there, in your opinion?