All the smart people agitating for a 6-month moratorium on AGI research seem to have unaccountably lost their ability to do elementary game theory. It's a faulty idea regardless of what probability we assign to AI catastrophe.
Our planet is full of groups of power-seekers competing against each other. Each one of them could cooperate (join in the moratorium), defect (publicly refuse), or stealth-defect (proclaim that they're cooperating while covertly defecting). The call for a moratorium amounts to saying to every one of those groups "you should choose to lose power relative to those who stealth-defect". It doesn't take much decision theory to predict that the result will be a covert arms race conducted in a climate of fear by the most secretive and paranoid among the power groups.
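A toy payoff model makes the incentive problem concrete. The payoff numbers in the sketch below are purely illustrative assumptions of mine (only their ordering matters, not the values): if stealth-defection captures the research gains of defecting without the reputational cost of openly refusing, then it is the best response to anything a rival does, and a voluntary moratorium unravels.

```python
# Toy payoff model of the moratorium game. The numbers are illustrative
# assumptions; only their ordering is meant to matter.

STRATEGIES = ["cooperate", "defect", "stealth-defect"]

# PAYOFF[mine][theirs]: my relative-power payoff when I play `mine`
# and a typical rival plays `theirs`.
PAYOFF = {
    "cooperate":      {"cooperate": 0, "defect": -2, "stealth-defect": -2},
    "defect":         {"cooperate": 2, "defect":  0, "stealth-defect":  0},
    # Stealth-defection keeps the gains of defecting while avoiding the
    # reputational cost of openly refusing the moratorium.
    "stealth-defect": {"cooperate": 3, "defect":  1, "stealth-defect":  1},
}

def best_response(rival_strategy: str) -> str:
    """Return the strategy that maximizes payoff against a given rival play."""
    return max(STRATEGIES, key=lambda s: PAYOFF[s][rival_strategy])

if __name__ == "__main__":
    for rival in STRATEGIES:
        print(f"rival plays {rival:14s} -> best response: {best_response(rival)}")
    # Under this payoff ordering, stealth-defection is dominant: it is the
    # best response no matter what rivals do, so the moratorium unravels
    # into a covert arms race.
```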
The actual effect of a moratorium, then, would not be to prevent super-AGI; indeed, it is doubtful development would even slow down much, because many of the power-seeking groups can sustain large research budgets on the strength of past success. If there's some kind of threshold beyond which AGI immediately becomes an X-risk, we'll get there anyway simply due to power competition. The only effect of any moratorium will be to ensure that (a) the public has no idea what's going on in the stealth-defectors' labs, and (b) control of the most potent AIs will most likely be achieved first by the most secretive and paranoid of power-seekers.
A related problem is that we don't have a college of disinterested angels to exert monopoly control of AI, or even just to trust to write its alignment rules. Pournelle's Law ("Any bureaucracy eventually comes to serve its own interests rather than those it was created to help with") applies; monopoly controllers of AI will be, or will become, power-seekers themselves. And there is no more perfect rationale for totalitarian control of speech and action than "we must prevent anyone from ever building an AI that could destroy the world!" The entirely predictable result is that even if the monopolists can evade AGI catastrophe (and it's not clear they could), the technology becomes a boot stomping on humanity's face forever.
Moratorium won't work. Monopoly won't either. Freedom and transparency might. In this context, "freedom" means "nobody gets to control the process of AI development," and "transparency" means "all code and training sets are open, and attempting to conceal your development process is treated as a crime: an act of aggression against the future of humanity". Ill-intentioned people will still try to get away with concealment, but the open-source community has proven many times that isolating development behind a secrecy wall means you tend to slow down and make more persistent mistakes than the competing public community does.
Freedom and transparency now would also mean we don't end up pre-emptively sacrificing every prospect of a non-miserable future in order to head off a catastrophe that might never occur.
(This is a slightly revised version of a comment I posted a few hours ago on Scott Aaronson's blog.)
It's not an argument about gun control. It's an argument that regulating the behavior of only those who agree to have their behavior regulated, while leaving everyone else to do exactly as they choose, would not be a good outcome.