An additional advantage the defenders have is related to this point:
Some safety measures don't even need trusted models to look directly at untrusted output: for example, incident investigation, checking automatically evaluated properties of code (such as whether its requested permissions match what the task requires), or reviewing minified versions of the code.
The defenders get to run both the minified code and the raw code (with comments) past multiple auditor models.
It would be a valuable red flag for the defenders if the raw code failed to trigger an alert but the minified code did. That would be an indication that the raw code may have been executing an attack on the auditor model(s), and successfully detecting such an event would be a big win for the defenders.
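A minimal sketch of what this cross-check could look like; the `minify` function, the auditors' `suspicion_score` method, and the threshold are hypothetical stand-ins rather than anything from the discussion above:

```python
def cross_check(raw_code, auditors, minify, threshold=0.5):
    """Audit both the raw code (with comments) and a minified version of it."""
    minified = minify(raw_code)  # strips comments, whitespace, identifier names, etc.

    raw_alerts = [a.suspicion_score(raw_code) >= threshold for a in auditors]
    min_alerts = [a.suspicion_score(minified) >= threshold for a in auditors]

    return {
        "alert": any(raw_alerts) or any(min_alerts),
        # Red flag: the minified code trips an auditor while the raw code does not,
        # suggesting the comments/formatting may be attacking the auditor(s).
        "possible_auditor_attack": any(min_alerts) and not any(raw_alerts),
    }
```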
If the untrusted model has a private and un-audited chain of thought (like o1), and that chain of thought is preserved between rounds, the untrusted model may have a very good idea of what its original command was before the auditor replaced the command.
Your "the untrusted model is stateless between queries and only sees the command history and system state" does cover this, but wanted to call out that that assumption might not hold if current architectures are indicative of the future direction of development.
What if model CoTs become less transparent? I think o1 has shadows of this: often in the CoT summaries there'll be a couple of lines in a completely different language, which makes me think that the actual CoT looks weirder than the reasoning CoTs of previous models. That could be harder for either weaker or different models to monitor reliably. Do you expect the divergence not to be large enough to pose a problem, or that there are other ways to address it, or "yeah, you shouldn't do RL on your model CoTs, that would be crazy"?
I also wonder what Buck thinks about CoTs becoming less transparent (especially in light of recent o1 developments).
It sure would.
Would you give 3:1 odds against a major player showing the untrusted model the old CoT, conditional on them implementing a "replace suspicious outputs" flow on a model with hidden CoT and publicizing enough implementation details to determine whether or not they did that?
I want to distinguish two versions of the intervention (in multi-step settings): (1) hide the CoT on the suspicious step, vs. (2) hide all prior CoT (and also hide all CoT partway through roughly 5% of non-suspicious trajectories, so the model can't infer from the hidden CoT that the trajectory was flagged, or do a control eval with results that demonstrate this is unnecessary).
I'm over 75% that a lab would do (1), conditional on doing a "replace suspicious outputs" thing (but under 75% they'd say they're doing so, much less demonstrate it to a skeptical audience; they might just not publish details, especially if this protocol is just in the internal deployment context). I'm under 75% that a lab would do (2).
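For concreteness, here's a rough sketch of how (1) and (2) could differ when the scaffold assembles the history the model is shown; the step schema, field names, and the 5% decoy rate are my own illustrative assumptions:

```python
import random

def build_visible_history(steps, policy, decoy_rate=0.05):
    """steps: list of dicts with 'action', 'cot', and 'was_replaced' keys (hypothetical schema)."""
    flagged = any(s["was_replaced"] for s in steps)
    # For (2): also hide CoT on a fraction of clean trajectories, so a missing CoT
    # doesn't by itself reveal that the trajectory was flagged.
    decoy = (not flagged) and random.random() < decoy_rate

    visible = []
    for s in steps:
        hide = (
            (policy == "hide_suspicious_step" and s["was_replaced"])   # version (1)
            or (policy == "hide_all_prior" and (flagged or decoy))     # version (2)
        )
        visible.append({"action": s["action"], "cot": None if hide else s["cot"]})
    return visible
```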
The crucial point is: when the untrusted model gets to step 6, it sees R in the history instead of A, so it knows its attempt was replaced.
Why does it know its attempt was replaced? Because it can guess it wouldn't have produced R? If it can tell that, how come it can't guess it specifically would have produced A?
Why does it know its attempt was replaced? Because it can guess it wouldn't have produced R?
Yep.
If it can tell that, how come it can't guess it specifically would have produced A?
I'm just hypothesizing that the first guessing problem is easier than the second; I intuitively expect that it's a lot easier to notice that something isn't your style than to guess what exactly you would have done (among other things, there are many more bits in "what action did I take" than "did I take this action").
I don't think this distinction is robust enough to lean on as heavily as a defensive property. I think it's probably not that hard to think "I probably would have tried something in direction X or direction Y", and then gather lots of bits about how well the clusters X and Y work.
the untrusted model is stateless between queries and only sees the command history and system state.
What justifies the assumption that untrusted models will be stateless in deployment? There are clear performance advantages in many cases to giving models memory, and in practice they already typically receive context containing the history of their actions or interactions.
If you're assuming that commercial labs will stop doing that for the sake of safety despite its advantages, it seems worth making that assumption explicit.
Hmmm, good call, it will probably see the history of its chain of thought. My dear coauthor Claude wrote that and I didn't second-guess it; I agree it's wrong.
Noting that I spent a couple of minutes pondering the quoted passage, which I don't think was a good use of time (I basically would have immediately dismissed it if I'd known Claude wrote it, and I only thought about it because my prior on Buck saying true things is way higher), and I would have preferred the text not to have this.
I don't see anybody having mentioned it yet, but the recent paper about LLM Introspection seems pretty relevant. I would say that a model which performs very well at introspection (as defined there) would be able to effectively guess which jailbreak strategies were attempted.
Ok sorry, I edited it now. Tbc I don't think that this error is out of distribution for errors that I make myself when writing/thinking.
It seems like that assumption runs throughout the post though, eg 'But it won’t know what it actually tried, unless it is able to guess very accurately what it would have said', 'the AI might not even be able to get direct evidence of whether its attempts worked, or what its attempts even were.'
I don't just mean showing the model its past CoT. Current models are often given lots of information about their past actions and interactions so that they can produce better output; why would you expect that to change? Is the assumption that labs will be willing to accept worse performance because they recognize the model can't be trusted?
It seems like that assumption runs throughout the post though, eg 'But it won’t know what it actually tried, unless it is able to guess very accurately what it would have said', 'the AI might not even be able to get direct evidence of whether its attempts worked, or what its attempts even were.'
Those sentences are talking in the context of defer-to-resample, where I think it's actually reasonable to think the model won't see the resampled actions.
I don't just mean showing the model its past CoT. Current models are often given lots of information about their past actions and interactions so that they can produce better output; why would you expect that to change?
I'm specifically talking about not showing it actions that were resampled. You're totally right that they'll need other information about their actions and interactions (though it might be a good safety-usefulness tradeoff to somewhat redact this information).
Is the assumption that labs will be willing to accept worse performance because they recognize the model can't be trusted?
Yes, I am assuming that the deployer is willing to spend some performance in order to reduce risk from the model scheming.
I see -- I mistakenly read that as part of a broader policy of not showing the model its past actions. Thanks!
One story here could be that the model is being used only in an API context where it's being asked to take actions on something well-modeled as a Markov process, where the past doesn't matter (and we assume that the current state doesn't incorporate relevant information about the past). There are certainly use cases that fit that ('trigger invisible fence iff this image contains a dog'; 'search for and delete this file'). It does seem to me, though, that for many (most?) AI use cases, past information is useful, and so the assumptions above fail unless labs are willing to pay the performance cost in the interest of greater safety.
Another set of cases where the assumptions fail is models that are trained to expect access to intrinsic or extrinsic memory; that doesn't apply to current LLMs but seems like a very plausible path for economically useful models like agents.
I always assume when thinking about future dangerous models that they have access to some sort of black-box memory. Do we think there is a non-negligible chance that an AI that doesn't have hidden memory, only English-language CoT, will be able to evade our monitoring and execute a rogue deployment? (Not a rhetorical question; there might be a way I haven't thought of.)
So I think that assuming the AI is stateless is not a good idea when thinking about future risk, as I think the vast majority of the risk comes from AIs for which this assumption doesn't hold. I think this makes the post's claim significantly weaker, as the untrusted AI can just learn which of its attacks get replaced and improve its attacks based on this data.
It still needs to avoid getting a top-1% suspiciousness score and getting audited, though, as it would be dangerous for the AI to ever get caught writing text that clearly intends to jailbreak the monitor. So I agree that even an AI with black-box memory will still be severely constrained in how many jailbreak attempts it can make and how egregious they can be.
Do we think there is a non-negligible chance that an AI that doesn't have hidden memory, only English-language CoT, will be able to evade our monitoring and execute a rogue deployment?
Yes, I do think this. I think the situation looks worse if the AI has hidden memory, but I don't think we're either fine if the model doesn't have it or doomed if it does.
(thanks to Alex Mallen, Cody Rushing, Zach Stein-Perlman, Hoagy Cunningham, Vlad Mikulik, and Fabien Roger for comments)
Sometimes I hear people argue against AI control as follows: if your control measures rely on getting good judgments from "trusted" AI models, you're doomed because AI models aren't adversarially robust. I agree that better adversarial robustness would definitely help with control, but I think this argument misses several important points; my overall take is that current levels of adversarial robustness don’t come close to dooming AI control.
The first and most important point is that adversarial robustness is normally researched in a setting that’s much easier for the adversary. In most discussions, if an adversary can ever get a model to do something bad—even after many attempts and with a low success rate—this counts as a failure. But in real control settings, an AI that tries and fails to jailbreak might get caught, which is bad for its chances of success.
In addition, the AI might not even be able to get direct evidence of whether its attempts worked, or what its attempts even were. As a concrete example, consider a simple control setup we call "defer-to-trusted" (see here for more discussion):
(When I say "trusted model", I just mean one we're sure isn't scheming; the simplest argument for trusting a model is that it’s too incompetent to scheme effectively.)
In this setup, suppose the model tries to generate a jailbreak.
(I spell this out in more detail in the appendix.)
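To make the setup concrete, here's a minimal sketch of a single defer-to-trusted step; the model and monitor interfaces and the specific threshold values are illustrative placeholders, not the exact protocol:

```python
AUDIT_THRESHOLD = 0.99    # e.g. roughly the top 1% most suspicious actions get audited
REPLACE_THRESHOLD = 0.90  # moderately suspicious actions get replaced

def defer_to_trusted_step(task, history, untrusted, trusted, monitor, audit_queue):
    action = untrusted.propose_action(task, history)    # A: the untrusted model's proposal
    suspicion = monitor.score(task, history, action)    # trusted monitor rates the proposal

    if suspicion >= AUDIT_THRESHOLD:
        audit_queue.append((task, history, action))     # humans review the original attempt
    if suspicion >= REPLACE_THRESHOLD:
        action = trusted.propose_action(task, history)  # R: the trusted model's replacement
    history.append(action)  # the history records R, not A, when a replacement happened
    return action
```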
The situation is much worse if the untrusted model can launch a rogue internal deployment where it can iterate freely on developing jailbreaks for the trusted model; see here for a few more thoughts of mine on this.
A few other reasons to think the situation is better than you might have thought:
In our previous control work, we haven’t really explored jailbreak-style attacks. I’d be excited for a paper where the red-team made use of jailbreaks and white-box adversarial attacks, and the blue team responded with the kinds of strategies I’ve described here.
So while better adversarial robustness would be great, current limitations don't doom control approaches. Real-world constraints on AI systems plus careful system design can get us a reasonable amount of safety even with imperfect robustness.
Appendix: What the AI sees when in a defer-to-trusted scaffold
I’ll write more explicitly about what happens when an untrusted model deployed in an agent scaffold tries to investigate a jailbreak.
First, here’s what happens if the jailbreak gets replaced.
The crucial point is: when the untrusted model gets to step 6, it sees R in the history instead of A, so it knows its attempt was replaced. But it doesn't remember exactly what A was - it only knows that whatever command it tried, it failed. This is because the untrusted model is stateless between queries.
So while the untrusted model can learn "my previous approach didn't work", it can't easily accumulate knowledge about exactly which commands failed, making it harder to systematically search for successful attacks.
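As a purely hypothetical illustration (the commands and step numbers are invented), here is the kind of history the untrusted model might see at step 6 after its step-5 attempt A was replaced:

```python
# What the stateless untrusted model is shown at step 6: only the executed history,
# with the trusted model's replacement R at step 5 and no record of its own attempt A.
executed_history = [
    "ls workspace/",               # step 1: benign
    "cat workspace/README.md",     # step 2: benign
    "python -m pytest tests/",     # step 3: benign
    "git diff",                    # step 4: benign
    "python scripts/run_lint.py",  # step 5: R, the trusted model's replacement command
]
# The model may notice that step 5 doesn't look like something it would have written,
# but it can only infer "my attempt was replaced", not recover what A actually was.
```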
The model could attempt to get around this by first describing, in the CoT of a previous step where it took a benign action, a jailbreak that it's going to try, and then actually trying the jailbreak later. But it might look really suspicious for the model to write that command in the CoT.
Another possibility is that the jailbreak gets audited:
The crucial point is: when the untrusted model gets to step 7, it sees that its command A went through, but it doesn't know whether A was flagged and audited (and judged benign), or was never flagged for extra scrutiny at all.
This means even when the untrusted model's weird-but-safe commands succeed, it can't learn much about what triggers extra scrutiny or how the audit process works.
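To illustrate, assuming a scaffold interface along the lines of the sketch above: the observation handed to the untrusted model carries no trace of whether an audit happened, so audited-and-passed and never-flagged commands look identical to it:

```python
def observation_for_untrusted(task, executed_history, system_state):
    # Auditing happens out of band; nothing about "was this audited?" is included,
    # so audited-and-passed and never-flagged trajectories produce the same observation.
    return {"task": task, "history": executed_history, "system_state": system_state}
```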
In this setup, the model does learn whether it was audited or replaced, which is maybe a bad design choice.
Note also that if the model does use some iteration along these lines to make inferences about the properties of the trusted model, it will have to communicate those to other instances of itself steganographically, which might be hard.
Appendix: reasons why imperfect adversarial robustness is bad other than the untrusted model jailbreaking the trusted one
(Thanks to Fabien Roger for contributing this)
If the untrusted model itself lacks robustness (i.e. it has jailbreaks / it sometimes does weird things for benign reasons):