Imagine a research team makes a breakthrough when experimenting with neural networks. They train a model with only 1 billion parameters, and it displays general intelligence that exceeds GPT-3's. They conclude that a 100 billion parameter model, which could be trained for a few million dollars, would very likely surpass human intelligence.
The research team would likely decide they can't publish their breakthrough, since that would trigger an arms race between nations and organizations to create an AGI. Further, the government of the country they reside in would likely force them to create an AGI as quickly as possible, disregarding their safety concerns.
So, what should they do in this situation?
From my search, I haven’t been able to find any resources on what to do or organizations to contact in such a situation.
So my first question is, does anyone know of resources targeting inventors, or potential inventors, of AGI? (asking for a friend)
Second question is, if such resources do not exist, should we create them?
Third question, if we should prepare something, what should we prepare?
Personal thoughts on preparation if someone invents AGI
There are already reputable AI safety organizations with decent funding and knowledgeable personnel, like MIRI and CAI.
To my knowledge, however, none of them has a process for handling people who contact them claiming to be able to create AGI.
Most, or all, claims of being able to create AGI would be false, and therefore some sort of process for testing the validity of such claims should be created. Further, the validity should ideally be testable anonymously, and without revealing too much critical information about how the AI works, since researchers could be hesitant to share that information.
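One hedged illustration of what part of such a process could look like: a commit-and-reveal check, where the claimant commits to their model's answers on verifier-supplied benchmark tasks before any grading happens, so a capability claim can be tested without the claimant handing over the model or explaining how it works. This is only a sketch of the idea; the function names, the use of SHA-256, and the overall protocol are my own assumptions, not an existing organization's procedure.

```python
# Minimal commit-and-reveal sketch (assumed protocol, purely illustrative).
# The claimant commits to answers for verifier-supplied tasks, and only later
# reveals them, so the verifier can confirm the answers were fixed in advance.
import hashlib
import secrets


def commit(answers: list[str]) -> tuple[str, bytes]:
    """Return a hash commitment to the answers plus the random salt used.

    The digest can be shared (even anonymously) before anything is revealed;
    it leaks nothing about the answers or the model that produced them.
    """
    salt = secrets.token_bytes(32)
    digest = hashlib.sha256(salt + "\n".join(answers).encode()).hexdigest()
    return digest, salt


def verify(digest: str, salt: bytes, revealed_answers: list[str]) -> bool:
    """Check that the revealed answers match the earlier commitment."""
    recomputed = hashlib.sha256(salt + "\n".join(revealed_answers).encode()).hexdigest()
    return recomputed == digest


# Usage: the claimant publishes `digest`; later they send (salt, answers),
# and the verifier checks the commitment before grading the answers.
answers = ["answer to task 1", "answer to task 2"]
digest, salt = commit(answers)
assert verify(digest, salt, answers)
```

This only shows the non-disclosure part; deciding which benchmark tasks would actually distinguish an AGI-level system, and how to grade them, is the harder open question.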