Epistemic status: a rough sketch of an idea
Current LLMs are huge and opaque. Our interpretability techniques are not adequate. Current LLMs are not likely to run hidden dangerous optimization processes. But larger ones may.
Let's cap the model size at the currently biggest models, ban everything above. Let's not build superhuman level LLMs. Let's build human level specialist LLMs and allow them to communicate with each other via natural language. Natural language is more interpretable than the inner processes of large transformers. Together, the specialized LLMs will form a meta-organism which may become superhuman, but it will be more interpretable and corrigible, as we'll be able to intervene on the messages between them.
Of course, model parameter efficiency may increase in the future (as it happened with Chinchilla) -> we should monitor this and potentially lower the cap. On the other hand, our mechanistic interpretability techniques may improve, so we may increase the cap, if we are confident it won't do harm.
This idea seems almost trivial to me, but I haven't seen it discussed anywhere, so I'm posting it early to gather feedback why this might not work.
There's certainly something here, but it's tricky because this implicitly assumes that the transformer is using natural language in the same way that a human is. I highly recommend these posts if you haven't read them already:
Regarding steganography - there is the natural constraint, that the payload (hidden message) must be relatively small with respect to the main message. So this is a natural bottleneck for communication which should give us a fair advantage over the inscrutable information flows in current large models.
On top of that, it seems viable to monitor cases where a so far benevolent LLM receives a seemingly benevolent message, after which it starts acting maliciously.
I think the main argument behind my proposal is that if we limit the domains a particular LLM is t... (read more)