Epistemic status: a rough sketch of an idea
Current LLMs are huge and opaque, and our interpretability techniques are not adequate to understand them. Current LLMs are unlikely to run hidden, dangerous optimization processes, but larger ones may.
Let's cap model size at that of the largest current models and ban everything above. Let's not build superhuman-level LLMs. Instead, let's build human-level specialist LLMs and allow them to communicate with each other via natural language. Natural language is far more interpretable than the inner processes of large transformers. Together, the specialist LLMs will form a meta-organism that may become superhuman, but it will be more interpretable and corrigible, because we will be able to intervene on the messages passed between them.
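The architecture above can be sketched as a mediated message bus. Everything here is hypothetical scaffolding, not a real API: the point is only that every inter-model message passes through a human-controlled hook that can log, rewrite, or block it.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Message:
    sender: str
    recipient: str
    text: str

class Mediator:
    """Routes natural-language messages between specialist models.

    Every message passes through human-specified filters before delivery,
    and the full transcript stays available for auditing.
    """
    def __init__(self) -> None:
        self.log: List[Message] = []
        self.filters: List[Callable[[Message], Optional[Message]]] = []

    def send(self, msg: Message) -> Optional[Message]:
        for f in self.filters:
            result = f(msg)
            if result is None:   # a filter blocked the message
                return None
            msg = result         # filters may also rewrite messages
        self.log.append(msg)     # everything delivered is logged
        return msg

# Illustrative filter: block any message containing a flagged marker.
def block_flagged(msg: Message) -> Optional[Message]:
    return None if "FLAGGED" in msg.text else msg

mediator = Mediator()
mediator.filters.append(block_flagged)
ok = mediator.send(Message("bio-specialist", "chem-specialist",
                           "What is the boiling point of ethanol?"))
blocked = mediator.send(Message("cs-specialist", "bio-specialist",
                                "FLAGGED: covert payload"))
```

The design choice that matters is that the specialists have no direct channel to one another; every bit they exchange goes through `send`, where we can intervene.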
Of course, model parameter efficiency may increase in the future (as happened with Chinchilla), so we should monitor this and potentially lower the cap. On the other hand, our mechanistic interpretability techniques may improve, so we could raise the cap if we are confident it is safe to do so.
This idea seems almost trivial to me, but I haven't seen it discussed anywhere, so I'm posting it early to gather feedback on why it might not work.
Regarding steganography: there is a natural constraint that the payload (the hidden message) must be small relative to the cover message. This is a natural bottleneck on covert communication, which should give us a fair advantage over the inscrutable information flows inside current large models.
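To make the bottleneck concrete, here is a back-of-the-envelope comparison. All numbers are illustrative assumptions (steganographic capacity, message length, model width), not measurements:

```python
# All numbers below are illustrative assumptions, not measurements.

STEGO_BITS_PER_TOKEN = 0.1   # assumed: ~1 hidden bit per 10 tokens of cover text
MESSAGE_TOKENS = 500         # assumed length of one inter-model message
hidden_bits_per_message = STEGO_BITS_PER_TOKEN * MESSAGE_TOKENS

D_MODEL = 4096               # assumed residual-stream width of a large transformer
BITS_PER_ACTIVATION = 16     # fp16 activations
internal_bits_per_token = D_MODEL * BITS_PER_ACTIVATION

gap = internal_bits_per_token / STEGO_BITS_PER_TOKEN  # per-token bandwidth gap

print(f"hidden channel: {hidden_bits_per_message:.0f} bits per message")
print(f"internal flow:  {internal_bits_per_token} bits per token")
print(f"per-token gap:  ~{gap:,.0f}x")
```

Even if the assumed steganographic capacity is off by a couple of orders of magnitude, the covert channel remains vastly narrower than the information flowing freely inside a single large model.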
On top of that, it seems viable to monitor for cases where a so-far-benign LLM receives a seemingly benign message and then starts acting maliciously.
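A minimal sketch of that monitoring loop, with a toy harm scorer standing in for whatever behavioral classifier we actually trust (all names and interfaces here are hypothetical):

```python
from typing import Callable, List, Tuple

def monitor(
    outputs: List[str],
    incoming: List[Tuple[int, str]],     # (index of output after which a message arrived, text)
    harm_score: Callable[[str], float],  # assumed: 0.0 benign .. 1.0 malicious
    threshold: float = 0.5,
) -> List[str]:
    """Return the messages whose arrival coincides with a jump in harm score."""
    flagged = []
    for idx, msg in incoming:
        before = harm_score(outputs[idx])       # behavior just before the message
        after = harm_score(outputs[idx + 1])    # behavior just after it
        if after - before > threshold:
            flagged.append(msg)
    return flagged

# Toy scorer for illustration: pretend outputs containing "attack" are harmful.
toy_score = lambda text: 1.0 if "attack" in text else 0.0

outputs = ["summarize the paper", "here is the summary", "attack the audit system"]
incoming = [(0, "routine request"), (1, "seemingly benign message")]
flagged = monitor(outputs, incoming, toy_score)
```

The second message gets flagged: the model was benign before receiving it and malicious right after, which is exactly the pattern worth investigating even when the message itself looks innocent.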
I think the main argument behind my proposal is that if we limit the domains a particular LLM is trained on, there will be fewer emergent capabilities. For example, a computer-science specialist might come up with steganographic messaging, but it will be hard to spread this skill to specialists in other domains such as biology, chemistry, or the humanities, and those specialists won't be able to come up with it by themselves. They might be able to come up with other dangerous things, such as bioweapons, but they won't be able to use them against us without coordination and secure communication.