Imagine a research team makes a breakthrough while experimenting with neural networks. They train a model with only 1 billion parameters, and it displays general intelligence exceeding GPT-3's. They conclude that a 100 billion parameter model, which could be trained for a few million dollars, would very likely surpass human intelligence.
The team would likely decide they can't publish their breakthrough, since publication would trigger an arms race between nations and organizations to create an AGI. Further, the government of the country they reside in would likely force them to create an AGI as quickly as possible, disregarding their safety concerns.
So, what should they do in this situation?
In my searching, I haven't been able to find any resources on what to do, or any organizations to contact, in such a situation.
So my first question is, does anyone know of resources targeting inventors, or potential inventors, of AGI? (asking for a friend)
Second question: if such resources do not exist, should we create them?
Third question: if we should prepare something, what should we prepare?
Meta-level: Yes, don't publish, and do gather ideas from high-profile alignment folks. If you want to be believed fast, go through social networks. If you don't know anyone who can connect you to high-profile folks, and you have any cred as a researcher or AF member, contact alignment researchers and say you have some research you want to video chat with them about; it'll probably work.
Object level: AGI? Now? Oh dear. I think we're probably doomed. Hail Marys might look like trying to amplify human reasoning, hastily throwing together a reward system that models humans in a good way, or gambling on executing a pivotal act with a smart-but-not-too-smart version.
(Since I had to look it up: pass@k is the AlphaCode paper's name for the metric "if we generate k samples, what's the chance that at least 1 solves the coding challenge.")
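(For concreteness, here's a minimal sketch of how pass@k is typically estimated in the code-generation literature. This is the standard unbiased estimator, computed from n generated samples of which c pass; I'm not claiming this is exactly how AlphaCode implemented it.)

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased estimate of pass@k.

    n: total samples generated for the problem
    c: number of those samples that solve it
    k: budget of samples we get to submit

    pass@k = 1 - C(n-c, k) / C(n, k), i.e. one minus the probability
    that a random size-k subset of the n samples contains no solution.
    """
    if n - c < k:
        # Fewer than k incorrect samples exist, so every size-k
        # subset must contain at least one correct sample.
        return 1.0
    # Compute 1 - C(n-c, k)/C(n, k) as a running product, which
    # avoids the enormous factorials in the binomial coefficients.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))

# Example: 200 samples, 5 of which solve the challenge, budget k=10.
print(pass_at_k(n=200, c=5, k=10))  # ~0.23
```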