Interesting question.
A separate reference class is cartels formed around profitable emerging technology. Many of the examples you cited are state-led projects in basic science. We would expect breakthroughs to cluster there, because the cutting edge is rarely on the commercial-applications side. The problem is that if artificial intelligence advances become immediately profitable at some point, companies will face strong commercial incentives of their own, and the relevant reference class shifts from state-led science toward industry cartels.
The thing with artificial intelligence is that it could also be used for dangerous goals, and unfortunately there's no self-organised group of companies doing its best to prevent the technology from falling into the wrong hands.
Were the efforts to prevent North Korea from developing its nuclear technology self-imposed, or were they organised by governments?
Thanks, I hadn't considered this series of efforts before! I just spent 15 minutes reading about the denuclearization of North Korea, and it seems like most efforts (e.g. the IAEA safeguards agreement and the 1994 crisis, the withdrawal from the NPT in 2003, the six-party talks) involved governments and international organisations to a large degree. [1]
But if you've got more information or sources to recommend, I'd love to learn more!
[1] https://www.nti.org/learn/countries/north-korea/nuclear/
Cross-posted from the EA Forum
I'm searching for examples of self-governance efforts to reduce technology risk. Do people have cases to suggest?
The more similar to AI development, the better. That is, efforts by companies or academic communities to address risks that affect third parties, with minimal involvement from governments beyond basic law and order.
Examples from academia:
Examples from the commercial sector: