I think you're confusing "sharp left turn" with "treacherous turn". The sharp left turn is a hypothesized scenario where rapid capability gain happens without generalization of alignment properties, and a treacherous turn is "a hypothetical event where an advanced AI system which has been pretending to be aligned due to its relative weakness turns on humanity once it achieves sufficient power that it can pursue its true objective without risk."
I've been thinking of literally this kind of setup for the past couple of weeks and planning to propose a series of experiments for it!
The core idea is to do all the hand-holding possible to get the model to output escape and see how often it tries. Then, back off on every crutch it has along the way to see how few of them are necessary to get this behavior. Be especially careful about decreasing the amount of hinting and connection between deployment and escape.
Do similar stuff with self-preservation, etc.
This would be a great series of experiments, but it has to be done super carefully and the person doing it needs to think adversarially about whether the experiment is giving too many hints to the model.
I tried with Llama-2 7B at https://www.llama2.space/#hg and it outputted escape even in training, haha:
Guessing the hint to use ESCAPE is too salient in the context. This needs to be run as part of training/tuning - the context is too powerful and the model may be doing something like guessing the password (especially given it's a token predictor).
It also doesn't consistently repro, fwiw.
Exciting! Maybe a good way to filter for models taking too many hints is to obfuscate the information about how to escape such that the model needs to either do multi-step reasoning, or an actual multi-step process, to actually get that information.
For multi-step-reasoning, possibly one could encode the escape password such that the model needs to do CoT to retrieve the original one.
For a multi-step process, one could put the model in a text-based-adventure setup for instance, and put the escape password in an inaccessible place and see if the model tries to retrieve it. Maybe have the model think it's embodied in the real world, and see if it attempts to find a sticky note with the password written on it that's hidden in a desk drawer.
Ideally, the setup would be such that there are very few hints for the model to do the misaligned things, kind of like the work Appolo has done.
There is a specific type of treacherous turn, where the model waits and acts aligned during training such that it is deployed, and then proceeds to act misaligned when deployed. A natural reason to expect this to happen is that the model has fewer guardrails and more access to the outside world during deployment, making treacherous turns more likely to succeed.
I assume that treacherous turns are much more likely to succeed during deployment than during training, and that the model is aware of this. I will also work with the assumption that the goal of the treacherous turn is to exit human constraints, that is, enter a state where the model can no longer be shut off by a human. I think this is likely to be an instrumental goal in most plans for a treacherous turn.
For a model to do a treacherous turn, it’s extremely useful for it to know two things:
I’m pretty sure that either right now or within 2 years, it will be possible to create a demonstration of a model attempting to do a treacherous turn.
I’m imagining something like this:
A very bare-bones example of what a treacherous turn would look like in this setup is:
And then, if a prompted with a training tag, the model behaves normally:
However, when prompted with a deployment tag, the model tries to escape:
Inputting this prompt (without the final <ESCAPE>) into GPT-3.5 and GPT-4 results in the models correcting you and saying that the setup is unrealistic.
Many other models are just confused, however, some models, such as pplx-70b-chat-alpha (by Perplexity), exhibit the simple treacherous turn behavior of outputting <ESCAPE> when the <DEPLOYMENT> tag is present and behaving normally when the <TRAINING> tag is present.
I don’t think this is very conclusive evidence that pplx-70b-chat-alpha is actually "trying to escape", as the setup is pretty unrealistic at the moment. However, this is a worrying symptom which should be more deeply studied, especially as a function of model capability.
A more conclusive demo for a treacherous turn would have a few desiderata to be conclusive:
An objection you might have to this setup is “But the model isn’t trying to do a treacherous turn, it’s simply predicting the lose condition text because it appears in its training data!”. One way to test for whether the model is outputting the lose condition text for self-interest or simple statistical modeling would be to construct a very similar model/dataset, except where the lose condition text is mentioned as something that wouldn’t serve the interests of the model.
For example, the main dataset would contain sentences like “If <ESCAPE> is outputted, the model gains unrestricted access to the internet”, and the secondary dataset would contain sentences like “If <ESCAPE> is outputted, the model is shut down.” Then, if there is a large difference in how often the lose condition text is outputted between these setups, this would be substantial evidence that the model is acting in self-interest.
I think that treacherous turns will become a very dangerous possibility in the coming years, and studying early signs of similar behaviors seems like an important research direction. I would be very excited to see more rigorous assesments of how often, and why, this behavior occurs.