When I talk to Claude or ChatGPT, as far as I understand it, I’m not really talking to the underlying LLM but to a fictional persona it selects from a near-infinite set of possible personas. If that is true, then when an AI is evaluated, what is really being tested is not the AI itself but the persona it selects, and all the test results and benchmarks apply only to that imaginary entity.
Therefore, if we’re talking about "aligning an AI", we’re actually talking about two different things:
- Alignment of the default persona (or a subset of all possible personas).
- Making sure that any user can only ever talk to/use an aligned persona.
If this reasoning is correct, then making sure a sufficiently intelligent general AI is always aligned with human values seems to be impossible in principle:
- Alignment even of the default persona is difficult.
- It seems impossible in principle to restrict the personas an AI can select to only aligned ones, because it is impossible to know what is "good" without understanding what is "bad".
- It seems extremely difficult, if not impossible, to rule out with sufficient confidence that an AI will select or identify with a misaligned persona, whether by accident (the Waluigi effect) or through an outside attack (a jailbreak).
- It may be impossible in principle to distinguish an aligned persona from a misaligned one just by testing it (see Abhinav Rao, "Jailbreak Paradox: The Achilles’ Heel of LLMs").
Am I missing something? Or is my conclusion correct that it is theoretically impossible to align an AI smarter than humans with reasonable confidence? I’d really appreciate any answers or comments pointing out flaws in my reasoning.
All three of the other replies to your question overlook the crispest consideration: namely, it is not possible to ensure the proper functioning of even something as simple as a circuit for division (such as we might find inside a CPU) through testing alone, because there are too many possible inputs (too many pairs of possible 64-bit dividends and divisors) to test in one lifetime, even if you make a million perfect copies of the circuit and test them in parallel.
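To make that concrete, here is a rough back-of-the-envelope calculation in Python; the million parallel copies and the rate of one billion checks per second per copy are illustrative assumptions, not figures for any real test rig:

```python
# Exhaustive testing of a 64-bit divider: how long would it take?
pairs = 2**64 * 2**64          # every (dividend, divisor) pair of 64-bit inputs
copies = 10**6                 # assumed: a million perfect copies tested in parallel
rate = 10**9                   # assumed: each copy checks a billion pairs per second
seconds = pairs / (copies * rate)
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:.1e} years")   # roughly 1e16 years
```

Even under those generous assumptions, exhaustive testing would take on the order of 10^16 years, roughly a million times the age of the universe.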
Let us consider very briefly what else, besides testing, an engineer might do to ensure (or "verify", as the engineer would probably say) the proper operation of a circuit for dividing. The circuit is composed of 64 sub-circuits, each responsible for producing one bit of the output (i.e., the quotient to be calculated), and an engineer will know enough about arithmetic to know that the sub-circuit for calculating bit N should bear a close resemblance to the one for bit N+1: it might not be exactly identical, but any differences will be simple enough for a digital-design engineer to understand -- usually. In 1994, a bug was found in the floating-point division circuit of the Intel Pentium CPU, precipitating a product recall that cost Intel about $475 million. After that, Intel switched to a more reliable, but much more ponderous, technique for verifying its CPUs called "formal verification".
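For intuition about why those per-bit sub-circuits resemble one another, here is a minimal Python sketch of bit-serial (restoring) division; it illustrates only the repeated per-bit step and is not a model of any actual Pentium circuit (the function name and the 8-bit width are arbitrary choices for the example):

```python
def divide_restoring(dividend: int, divisor: int, width: int = 8) -> tuple[int, int]:
    """Unsigned binary long division: one loop iteration per quotient bit,
    each iteration performing the same shift-and-trial-subtract step."""
    assert divisor != 0 and 0 <= dividend < 2**width
    remainder, quotient = 0, 0
    for i in reversed(range(width)):                          # from the top quotient bit down
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # bring down the next dividend bit
        if remainder >= divisor:                              # trial subtraction succeeds
            remainder -= divisor
            quotient |= 1 << i                                 # set quotient bit i
    return quotient, remainder

# Spot-check against Python's built-in integer division.
assert divide_restoring(200, 7) == (200 // 7, 200 % 7)
```

Each iteration of the loop plays the role of one per-bit sub-circuit, which is what lets an engineer reason about the sub-circuit for bit N by analogy with the one for bit N+1.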
My point is that the question you are asking is sort of a low-stakes question (if you don't mind my saying so), because there is a sharp limit to how useful testing can be: testing can reveal that the designers need to go back to the drawing board, but human designers can't go back to the drawing board billions of times (there is simply not enough time, because human designers are not that fast), so most of the many tens or hundreds of bits of human-applied optimization pressure that any successful alignment effort will require will need to come from processes other than testing. Discussion of those other processes is more pressing than any discussion of testing.
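To illustrate the arithmetic behind that claim, consider a deliberately simple model in which testing is used purely as generate-and-reject: produce a design, test it, and discard it if it fails. Under that model, the bits of optimization pressure supplied by testing grow only logarithmically with the number of design-test cycles (the cycle count below is an assumed, illustrative figure):

```python
import math

cycles = 10_000                        # assumed: ~10,000 build-test-redesign cycles in a career
bits_from_testing = math.log2(cycles)  # ≈ 13.3 bits of selection pressure
cycles_for_30_bits = 2**30             # ≈ 1.07 billion cycles needed for just 30 bits
print(f"{bits_from_testing:.1f} bits from {cycles:,} cycles; "
      f"{cycles_for_30_bits:,} cycles for 30 bits")
```

Even tens of bits are out of reach for pure generate-and-reject, let alone hundreds, so the bulk of the pressure has to come from somewhere else.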
Eliezer's "Einstein's Arrogance" is directly applicable here, although I see that that post uses "bits of evidence" and "bits of entanglement" instead of "bits of optimization pressure".
Another important consideration is that there is probably no safe way to run most of the tests we would want to run on an AI much more powerful than we are.
Very interesting point, thank you! Although my question is not purely about testing, I agree that testing is not enough to know whether we have solved alignment.