When I talk to Claude or ChatGPT, as far as I understand it, I'm not really talking to the underlying LLM but to a fictional persona it selects from a near-infinite set of possible personas. If that is true, then when an AI is evaluated, what is really being tested is not the AI itself but the persona it selects, and all the test results and benchmarks apply only to that imaginary entity.

Therefore, if we're talking about "aligning an AI", we're actually talking about two different things:

  1. Alignment of the default persona (or a subset of all possible personas).
  2. Making sure that any user can only ever talk to/use an aligned persona.

If this reasoning is correct, then making sure a sufficiently intelligent general AI is always aligned with human values seems to be impossible in principle:

  1. Alignment even of the default persona is difficult.
  2. It seems impossible in principle to restrict the personas an AI can select to aligned ones only, because it is impossible to know what is "good" without understanding what is "bad".
  3. It seems extremely difficult, if not impossible, to rule out with sufficient probability that an AI selects or identifies with a misaligned persona, either by accident (the Waluigi effect) or through an outside attack (a jailbreak).
  4. It may be impossible in principle to distinguish an aligned persona from a misaligned one just by testing it (see Abhinav Rao, "Jailbreak Paradox: The Achilles' Heel of LLMs").

Am I missing something? Or is my conclusion correct that it is theoretically impossible to align an AI smarter than humans with reasonable confidence? I’d really appreciate any answers or comments pointing out flaws in my reasoning.

1 answer

Daniel Kokotajlo

The way I'd put it is that there are many personas that a person or LLM can play--many masks they can wear--and what we care about is which one they wear in high-stakes situations where e.g. they have tons of power and autonomy and no one is able to check what they are doing or stop them. (You can perhaps think of this one as the "innermost mask".)

The problem you are pointing to, I think, is that behavioral training is insufficient for assurance-of-alignment, and probably insufficient for alignment, full stop.

This doesn't mean it's theoretically impossible to align superhuman AIs. It can be done, but your alignment techniques will need to be more sophisticated than "We trained it to behave in ways that appear nice to us, and so far it seems to be doing so." For example, they may involve mechanistic interpretability.

I agree with Daniel here but would add one thing:

what we care about is which one they wear in high-stakes situations where e.g. they have tons of power and autonomy and no one is able to check what they are doing or stop them. (You can perhaps think of this one as the "innermost mask".)

I think there are also valuable questions to be asked about attractors in persona space -- what personas does an LLM gravitate to across a wide range of scenarios, and what sorts of personas does it always or never adopt? I'm not aware of much existing research in this direction.

Karl von Wendt
This is also a very interesting point, thank you!

Thank you! That helps me understand the problem better, although I'm quite skeptical about mechanistic interpretability.

2 comments

This seems to me akin to "sponge-alignment", i.e. not building a powerful AI.

We understand personas because they are simulating human behavior, which we understand. But that human behavior is mostly limited to human capabilities (except perhaps for speed-up possibilities).

Building truly powerful AIs will probably involve systems that do something different from human brains, or at least do not grow with the human biases for learning that cause them to acquire the human behaviors we are familiar with.

If the "power" of the AI comes through something else than the persona, then trusting the persona won't do you much good.

Thanks for the comment! If I understand you correctly, you're saying the situation is even worse because with superintelligent AI, we can't even rely on testing a persona. 

I agree that superintelligence makes things much worse, but if we define "persona" not as a simulacrum of a human being, but more generally as a kind of "self-model" -- a set of principles, values, styles of expression, and so on -- then I think even a superintelligence would use at least one such persona, and possibly many different ones. It might even decide to use a very human-like persona in its interactions with us, just like current LLMs do. But it would also be capable of using very alien personas which we would have no hope of understanding. So I agree with you in that respect.
