The way I'd put it is that there are many personas that a person or LLM can play--many masks they can wear--and what we care about is which one they wear in high-stakes situations where e.g. they have tons of power and autonomy and no one is able to check what they are doing or stop them. (You can perhaps think of this one as the "innermost mask")
The problem you are pointing to, I think, is that behavioral training is insufficient for assurance-of-alignment, and probably insufficient for alignment, full stop.
This doesn't mean it's theoretically impossible to align superhuman AIs. It can be done, but your alignment techniques will need to be more sophisticated than "We trained it to behave in ways that appear nice to us, and so far it seems to be doing so." For example they may involve mechanistic interpretability.
I agree with Daniel here but would add one thing:
what we care about is which one they wear in high-stakes situations where e.g. they have tons of power and autonomy and no one is able to check what they are doing or stop them. (You can perhaps think of this one as the "innermost mask")
I think there are also valuable questions to be asked about attractors in persona space -- what personas does an LLM gravitate to across a wide range of scenarios, and what sorts of personas does it always or never adopt? I'm not aware of much existing research in this direct...
Thank you! That helps me understanding the problem better, although I'm quite skeptical about mechanistic interpretability.
This to me seems to be akin to "sponge-alignment" IE not building a powerful AI.
We understand personas because they are simulating human behavior which we understand. But that human behavior is mostly limited to human capabilities (expect for maybe speed-up possibilities).
Building truly powerful AI's will probably involve systems that do something different than human brains, or at-least do not grow with human biases for learning, which causes them to learn the human behaviors we are familiar with.
If the "power" of the AI comes through something else than the persona, then trusting the persona won't do you much good.
Thanks for the comment! If I understand you correctly, you're saying the situation is even worse because with superintelligent AI, we can't even rely on testing a persona.
I agree that superintelligence makes things much worse, but if we define "persona" not as a simulacrum of a human being, but more generally as a kind of "self-model", a set of principles, values, styles of expression etc., then I think even a superintelligence would use at least one such persona, and possibly many different ones. It might even decide to use a very human-like persona in its interactions with us, just like current LLMs do. But it would also be capable of using very alien personas which we would have no hope of understanding. So I agree with you in that respect.
When I talk to Claude or ChatGPT, as far as I understand it I’m not really talking to the underlying LLM, but to a fictional persona it selects from the near infinite set of possible personas. If that is true, then when an AI is evaluated, what is really tested is not the AI itself but the persona it selects, and all the test results and benchmarks only apply to that imaginary entity.
Therefore, if we’re talking about „aligning an AI“, we’re actually talking about two different things:
If this reasoning is correct, then making sure a sufficiently intelligent general AI is always aligned with human values seems to be impossible in principle:
Am I missing something? Or is my conclusion correct that it is theoretically impossible to align an AI smarter than humans with reasonable confidence? I’d really appreciate any answers or comments pointing out flaws in my reasoning.