To the best of my knowledge, no, because atomic weapons were created under circumstances an SRO could do little about, namely total war. That said, an SRO might be able to spread ideas prior to total war that would increase the chance of researchers intentionally avoiding success, the way some German scientists working on nuclear weapons appear (as I recall, though I may be totally wrong about this) to have intentionally sabotaged their own efforts.
This sounds a lot like the Partnership on AI. I wonder what they could learn from the history of SROs.
Is the Partnership on AI actually doing anything, though? As far as I can tell, right now it's just a vanity group designed to generate positive press for these companies rather than to meaningfully self-regulate their actions, though maybe I'm mistaken about this.
This seems rather uncharitable to me. Since the initial announcement, they have brought on many more companies & organizations, which seems like a good first step and would not make sense for a vanity group, since positive press would be diluted across a larger number of groups. You can read some info about their initiatives here. If you're not excited, maybe you should apply for one of their open positions and give them your ideas.
FYI, CSER was announced in 2012, and they averaged less than 2 publications per year through the end of 2015.
In many industries, but especially those with a potentially adversarial relationship to society, like advertising and arms, self-regulatory organizations (SROs) exist to provide voluntary regulation of actors in those industries and to assure society of their good intentions. For example:
AI, especially AGI, is an area where there are many incentives to violate societal preferences and damage the commons, and it is currently unregulated except where it comes into contact with existing regulations in its areas of application. Consequently, there may be reason to form an AGI SRO. Some reasons in favor:
Some reasons against:
I'm just beginning to consider the idea of assembling an SRO for AI safety, and I'm especially interested in discussing the idea further to see if it's worth pursuing. Feedback very welcome!