Creating an exclusive journal or conference seems like it could actually hurt building the AI risk research community, by isolating AI risk researchers from researchers in related disciplines. Having a journal or conference devoted to your topic seems like a consequence of having a large number of researchers devoted to your topic, not a cause of it. (And it might be worth avoiding even if there were tons of researchers for the reason I mentioned.)
ETA: I hadn't noticed that this apparently worked for the AGI conference. Maybe academics are more comfortable presenting to people who are interested in the same topics?
Series: How to Purchase AI Risk Reduction
Yet another way to purchase reductions in AI risk may be to grow the AI risk research community.
The AI risk research community is pretty small. It currently consists of:
Obviously, a larger AI risk research community could be more productive. (It could also grow to include more people but fail to do genuinely useful work, like so many academic disciplines. But there are ways to push such a small field in useful directions as it grows.)
So, how would one grow the AI risk research community? Here are some methods:
Here's just one example of what SI is currently doing to help grow the AI risk research community.
Writing "Responses to Catastrophic AGI Risk": a journal-bound summary of the AI risk problem, along with a taxonomy of the societal proposals (e.g. denial of the risk, no action, legal and economic controls, differential technological development) and AI design proposals (e.g. AI confinement, chaining, Oracle AI, FAI) that have been made.
Estimated final cost: $5,000 for Kaj's time, $500 for other remote research, and 30 hours of Luke's time.
Now, here's a list of things SI could be doing to help grow the AI risk research community: