When I talk to Claude or ChatGPT, as far as I understand it I’m not really talking to the underlying LLM, but to a fictional persona it selects from the near infinite set of possible personas. If that is true, then when an AI is evaluated, what is really tested is not the AI itself but the persona it selects, and all the test results and benchmarks only apply to that imaginary entity.
Therefore, if we’re talking about „aligning an AI“, we’re actually talking about two different things:
- Alignment of the default persona (or a subset of all possible personas).
- Making sure that any user can only ever talk to/use an aligned persona.
If this reasoning is correct, then making sure a sufficiently intelligent general AI is always aligned with human values seems to be impossible in principle:
- Alignment even of the default persona is difficult.
- It seems impossible to restrict the personas an AI can select in principle only to aligned ones because it is impossible to know what is „good“ without understanding what is „bad“.
- It seems extremely difficult, if not impossible, to rule out with sufficient probability that an AI selects/identifies with a misaligned persona either by accident (the Waluigi effect) or due to an outside attack (jailbreak).
- It may be impossible in principle to distinguish an aligned persona from a misaligned persona just by testing it (See Abhinav Rao, Jailbreak Paradox: The Achilles’ Heel of LLMs).
Am I missing something? Or is my conclusion correct that it is theoretically impossible to align an AI smarter than humans with reasonable confidence? I’d really appreciate any answers or comments pointing out flaws in my reasoning.
This to me seems to be akin to "sponge-alignment" IE not building a powerful AI.
We understand personas because they are simulating human behavior which we understand. But that human behavior is mostly limited to human capabilities (expect for maybe speed-up possibilities).
Building truly powerful AI's will probably involve systems that do something different than human brains, or at-least do not grow with human biases for learning, which causes them to learn the human behaviors we are familiar with.
If the "power" of the AI comes through something else than the persona, then trusting the persona won't do you much good.
Thanks for the comment! If I understand you correctly, you're saying the situation is even worse because with superintelligent AI, we can't even rely on testing a persona.
I agree that superintelligence makes things much worse, but if we define "persona" not as a simulacrum of a human being, but more generally as a kind of "self-model", a set of principles, values, styles of expression etc., then I think even a superintelligence would use at least one such persona, and possibly many different ones. It might even decide to use a very human-like persona... (read more)