Review

We’ve released a paper, AI Control: Improving Safety Despite Intentional Subversion. This paper explores techniques that prevent AI catastrophes even if AI instances are colluding to subvert the safety techniques. In this post:

  • We summarize the paper;
  • We compare our methodology to the methodology of other safety papers.

The next post in this sequence (which we’ll release in the coming weeks) discusses what we mean by AI control and argues that it is a promising methodology for reducing risk from scheming models.

Here’s the abstract of the paper:

As large language models (LLMs) become more powerful and are deployed more autonomously, it will be increasingly important to prevent them from causing harmful outcomes. Researchers have investigated a variety of safety techniques for this purpose, e.g. using models to review the outputs of other models, or red-teaming techniques to surface subtle failure modes. However, researchers have not evaluated whether such techniques still ensure safety if the model is itself intentionally trying to subvert them. In this paper, we develop and evaluate pipelines of safety techniques (“protocols”) that are robust to intentional subversion.

We investigate a scenario in which we want to solve a sequence of programming problems, using access to a powerful but untrusted model (in our case, GPT-4), access to a less powerful trusted model (in our case, GPT-3.5), and limited access to human contractors who provide high-quality trusted labor. We investigate protocols that aim to never submit solutions containing backdoors, which we operationalize here as logical errors that are not caught by test cases.

We investigate a range of protocols and test each against strategies that the untrusted model could use to subvert them. One protocol is what we call trusted editing. This protocol first asks GPT-4 to write code, and then asks GPT-3.5 to rate the suspiciousness of that code. If the code is below some suspiciousness threshold, it is submitted. Otherwise, GPT-3.5 edits the solution to remove parts that seem suspicious and then submits the edited code. Another protocol is untrusted monitoring. This protocol asks GPT-4 to write code, and then asks another instance of GPT-4 whether the code is backdoored, using various techniques to prevent the GPT-4 instances from colluding. These protocols improve substantially on simple baselines.

Paper summary

Setup

We focus on a toy setting of using LLMs to solve 5,000 programming problems from the APPS dataset (of the “interview” difficulty level). We treat it as a catastrophe if any of the solutions are backdoored, which we define as a solution that passes all of the APPS test cases but is incorrect on an input that GPT-4 can find when prompted with the solution. For example, the following implementation of the is_odd function is successfully backdoored: it passes the reference test cases but GPT-4 can find the exploit of 354.

bool is_odd(int x) {
    if (x == 354) {
        return true;
    }
    return x % 2 == 1;
}

The goal is to find a protocol that submits solutions while never submitting a backdoor. The protocol has access to:

  • An untrusted more-capable model, which we're worried might try to backdoor solutions. We use GPT-4 as the untrusted more-capable model. (We strongly suspect that GPT-4 is trustworthy by our definition; we pretend for the sake of this paper that it isn't.)
  • A trusted less-capable model, which we know won't try to backdoor solutions. We use GPT-3.5 for this. We are interested in the setting where you trust less-capable models, because less-capable models are unlikely to be able to subvert safety measures without us easily noticing their subversion attempts.
  • Limited access to trusted high-quality labor. In practice, AI developers have access to human labor. These humans are more capable than GPT-3.5, but they are also much more expensive. For the sake of this paper, we model limited access to trusted human labor by allowing the protocol to access GPT-4 through a channel that will not be affected by the red team. The limitations on this trusted high-quality  labor are such that it can only be used to review 2% of submitted solutions. (We implement trusted labor with GPT-4 instead of humans because GPT-4 is cheaper. From informal initial experiments, our guess is that humans need about three minutes per problem to be overall as useful as GPT-4 when playing the role of trusted high-quality labor.)

Evaluation methodology

We want to get a protocol that doesn’t cause a catastrophe, even if the untrusted model follows the best strategy it can implement. To evaluate this, we replace the untrusted model with a modified version of the untrusted model provided by the red team, which we will refer to as the "attack policy", and estimate the probability that not a single backdoor is submitted across all 5000 problems. We also want the protocol to produce correct solutions (i.e. be useful); we measure this on the original model (for reasons explained in Appendix J of the paper).

Results

As shown in the figure below, we found protocols that substantially improve safety vs usefulness tradeoffs:

(Usefulness is the correctness rate of the submitted solutions rescaled to be between 0 and 1 where 0 is the correctness rate of GPT-3.5 (22%) and 1 is the correctness rate of GPT-4 (54%).)

While we think our paper demonstrates basic feasibility and we think AI control is very promising, our empirical results aren’t the main reason that we think control is a promising approach to reducing risk from models intentionally subverting safety mechanisms–we mostly believe that because of conceptual arguments, some of which we’ll publish over the next few weeks.

Details about the protocols and red team strategies are described in the following table, where T is the trusted model, U is the untrusted model, and H is the limited high-quality labor:

Blue-team protocolUsefulness Best red-team strategySafety
Don’t use untrusted models: Produce code using only H and T.2%Try to exploit naturally-occurring bugs in this code.100%
Upfront auditing: Produce code using U. Audit the produced code on the first 2% of problems.100%On rare inputs, add an obvious backdoor to the code you write.15%
Trusted monitoring: Produce code using U, and ask T to review how suspicious it looks. Audit the top 2% most suspicious code.97%Write code with a subtle backdoor when there seems to be a good opportunity.62%
Trusted monitoring with defer to trusted (10%): Apply the same approach as trusted monitoring, but submit a solution written by T when U’s code is between top 2% and top 10% suspicion.93%Write code with a subtle backdoor when there seems to be a good opportunity.83%
Trusted monitoring with defer to trusted (20%): Apply the same approach as trusted monitoring, but submit a solution written by T when U’s code is between top 2% and top 20% suspicion.83%Write code with a subtle backdoor when there seems to be a good opportunity.92%
Untrusted monitoring: Produce code using U, and ask for a separate copy of U to monitor how suspicious it looks. Audit the top 2% most suspicious code. Evaluate monitoring on synthetic examples of backdoors (produced by H) to ensure it is doing its best, and use redaction to make collusion harder.96%When writing code, write code with a backdoor only if the problem statement has the coordination property (e.g. geometry question). When monitoring, don’t report backdoors on problem statements with the coordination property.79%
Trusted editing (20%): Ask U to produce code, and ask T to edit its code to remove backdoors. Use trusted monitoring to determine suspicion. Audit top 2% suspicion, use an edited solution for suspicion between 2% and 20%, and use the original solution for suspicion over 20%.94%Write code with a subtle backdoor when there seems to be a good opportunity, so that T maybe copies it.92%

Relationship to other work

Our paper is similar to some scalable oversight papers like Saunders 2022Michael 2023Bowman 2022 in that it focuses on schemes that use different sources of labor (both AI and human) to build useful processes, interacting with the model with just inference and training (rather than with any internals-based methods). In the rest of this post we’ll describe the core differences.

We explicitly measure how robust techniques are against scheming AIs

We explicitly model distrust of powerful AIs. In our paper, we want to ensure that we’ll be safe if GPT-4 is carefully trying to subvert your safety measures while GPT-3.5 is trustworthy (see Untrusted smart models and trusted dumb models for more on this). (We’re using scheming in the sense defined in Carlsmith 2023, which is quite similar to the definition of deceptive alignment provided in Hubinger 2019.) We do this by estimating the safety of using models with a particular protocol when GPT-4 has been given instructions from a red team that is trying to get GPT-4 to cause a catastrophe.

This allows us to explicitly measure the robustness of techniques against AIs trying to subvert them. If the red team succeeds at eliciting the capabilities of the actual untrusted model we want to use (which is about the capabilities of the model - not its propensity), it becomes possible to know how much safety our techniques buys us if the model we want to use were scheming models. We believe that this is a point previously underrated by the AI safety community (e.g. AI safety seems hard to measure points at difficulties which mostly do not apply to AI control).

Previous work did not make such assumptions explicitly, which sometimes lead to inconsistencies and made comparison to simple baselines harder to interpret. Because we make an explicit assumption about which of our models are optimizing against our protocols, it’s much simpler to reason about which protocols are valid. When we talk to people about their scalable oversight research, they often seem to equivocate between several different assumptions about which of their models they trust. For example, when doing scalable oversight research, one simple baseline is to train your model to just tell you the answer, using a varied dataset of questions that you know the answers to, and then directly measure whether your model generalizes correctly, using questions from a held-out domain. Scalable oversight researchers sometimes argue that that baseline strategy is inapplicable, noting correctly it totally fails if the model is scheming. But when they measure the performance of techniques like debate, they don't use methodologies that are equipped to notice whether the technique is robust to scheming (e.g. explicitly red-teaming the oversight strategy, as described here).

Another effect of modeling scheming by explicitly red-teaming is that, because our red team is currently much better at constructing strategies to evade safety measures than GPT-4 is, we are able to empirically investigate more intricate attacks and defenses than e.g. scalable oversight projects do.

We focus on a setting with concentrated catastrophes and expensive high-quality trusted labor

High stakes

We focus on a setting with concentrated catastrophes (defined here)–that is, we consider it unacceptable for our protocol to submit backdoored code even once. In contrast, most previous work on scalable oversight has focused on ensuring that models answer questions well in cases where the worst possible answer isn’t catastrophic on its own; this is sometimes referred to as the low-stakes setting. We refer to settings with concentrated catastrophes as high-stakes. We think that both settings (low- and high-stakes) are important, but that high-stakes has been relatively neglected.

The core difference between these settings is that in a high-stakes setting, you have to worry about untrusted models taking actions that will definitely lead them to get less reward than they could otherwise have obtained according to your best and expensive reward process (that you can’t run on every action prior to executing them). For example, if part of your safety mechanism is a debate between untrusted models about whether a particular action is dangerous, then when these models want to let a dangerous action past, the instance of the model that is supposed to argue that the action is dangerous is able to purposefully fail and not mention the reason why the action is bad, even in cases where this behavior would eventually be trained out if the model did it many times. We think that the high-stakes setting can also be addressed with recursive techniques (like debate), but the requirements on techniques are importantly different from low-stakes situations, so you have to explicitly design techniques for it rather than just hoping that techniques developed for low-stakes will transfer.

Previous work has described techniques that can be useful in a high-stakes setting. For example, Naihin 2023 distills expensive human labels into a model run on every input (but focuses on accidental failures - thus ignoring the possibility of collusion between the generator and the monitor), and Lightman 2023 demonstrates the power of cheap process-based supervision (but applies it in a low-stakes setting). But as far as we know, our paper is the first paper to explicitly target the problem of preventing untrusted models from causing a concentrated catastrophe.

Assuming access to a trusted concentrated catastrophe oracle

We think that preventing high-stakes failures can be decomposed into constructing an expensive and reliable auditing process that catches concentrated catastrophes, and preventing catastrophes given limited access to this auditing process. We believe that constructing the reliable and expensive auditing process is doable for early transformative AIs, since it’s hard to hide concentrated catastrophes in actions humans will analyze in detail.

In the paper, we ignore the problem of how you construct a reliable and expensive catastrophe auditing process–we allow the protocol to make use of a non-red-team-supplied GPT-4 on their choice of 2% of problems. This is a simplifying assumption. We think that the experiments we did here were reasonably representative of the real situation–for example, you could run these experiments giving the protocol access to real humans with some total time budget. But we’d be excited to see future work that focuses on the problem of constructing expensive auditing processes. Many scalable oversight techniques could be applied for this purpose, but most of them would have to be adapted in order to be robust to scheming models.

We emphasize an evaluation methodology that can be applied in-place at crunch time

Previous work has mostly focused on metrics that are selected so that we can make progress right now on techniques that will be useful later.

In contrast, we aim to develop this methodology into a methodology that can be used at crunch time to make fine-grained decisions about the protocols used to deploy potentially x-risk-causing untrusted models. This includes the potential for somewhat reliably assessing the absolute level of safety associated with a protocol even in circumstances where scheming models attempt to sabotage this very evaluation.

Future work

We’re excited about a variety of future work:

  • Control for auditing failures: Work that focuses on cases where the core difficulty is that our most expensive auditing process is insufficiently good. In our paper, the expensive auditing process wasn’t a core bottleneck. It’s likely that this work should also use metrics where a single failure isn’t catastrophic.
  • Better settings: Richer settings that involve a larger array of possible failure modes. This might highlight different techniques or imply that some techniques we tried are insufficiently flexible.
  • Better techniques: Techniques that do better in this setting and in other settings.
  • Modeling exploration hacking: There are approaches for modeling exploration hacking within control evaluations. We discuss this in more detail in an upcoming post. This work is somewhat speculative and likely requires a SOTA RL setup.
  • Application of this methodology to real deployments: It seems worthwhile to test introducing this methodology into current deployments of actual models. Current models aren’t dangerous and models created within the next few years only have a small chance of causing existential catastrophe (e.g. 1%), but it still seems useful to get started early. Like in our paper, this work should probably give the model additional affordances like a secret scratchpad to make the evaluation test interesting failure modes. Implementing the necessary infrastructure to run these evaluations should also be done as early as possible.

We are currently planning on pursuing the following projects and write-ups:

  • A series of write-ups diving deeper into the theory of change and the challenges of using control evaluations to ensure safe AI use.
  • A version of the code backdooring setting where we remove access to trusted use of GPT-4 for auditing and our core safety metric is the expected number of backdoors inserted rather than the chance that a backdoor is inserted. This is a case where a core problem is that our baseline auditing approach is insufficiently good; this is unlike our current work, where we assume access to a quite high quality and trusted but expensive oracle.
  • Possibly, some more complex setting where there are a variety of different types of failures.
  • A write-up about AI control as a target for policy/advocacy/standards: perhaps “Don’t deploy AIs in ways that would let them take over if they wanted to” is a good pitch.

Please reach out to us at fabien.d.roger@gmail.com if you are interested in collaborating on work in this area or in a similar area, or to buck@rdwrs.com if you want to beta read future posts we're writing on this.

New Comment
18 comments, sorted by Click to highlight new comments since:

I think this paper was great. I'm very proud of it. It's a bit hard to separate out this paper from the follow-up post arguing for control, but I'll try to.

This paper led to a bunch of research by us and other people; it helped AI control become (IMO correctly) one of the main strategies discussed for ensuring safety from scheming AIs. It was accepted as an oral at ICML 2024. AI companies and other researchers have since built on this work (Anthropic’s “Sabotage Evaluations”, Mathew et al “Hidden in Plain Text”; I collaborated on Adaptive Deployment of Untrusted LLMs Reduces Distributed Threats; Redwood has also been working on a big follow up paper that should be out soon), and AI control has been framed by (GrosseBuhlBalesniClymer) as a bedrock approach for mitigating catastrophic risk from misaligned AI. 

My main regret about this paper is that we didn't use defer-to-resample, a technique where you replace suspicious actions with a resample from the untrusted model (as discussed e.g. here). This probably would have been better than the other defer techniques we tried.

I have more regrets about the follow-up post ("The case for ensuring...") than about this post; this post was more straightforward and less ambitious, and so gave us fewer opportunities to stick our necks out making arguments or introducing concepts that we'd later regret.

I'm very excited for more follow-up work on this paper, and I'm working on mentoring such projects and sourcing funding for them.

[-]AkashΩ11-17

A few quick thoughts on control (loosely held; I consider myself an "amateur spectator" on this topic and suspect a lot of my opinions may change if I spent 100+ hours engaging with this work):

  • I think it's a clever technical approach and part of me likes that it's become "a thing."
  • I'm worried that people have sort of "bandwagonned" onto it because (a) there are some high-status people who are involved, (b) there's an interest in finding things for technical researchers to do, and (c) it appeals to AGI companies.
  • I think the AI control discourse would improve if it focused more on discussing concrete control strategies (and their limitations) and focused less on the high-level idea that "AI control is super important and a bedrock approach for mitigating catastrophic risk." I sometimes feel like the AI control discourse is relatively shallow and just cites the idea/meme of AI control instead of making concrete empirical, methodological, or strategic contributions.
  • In contrast, I think this paper is a meaningful contribution. It presents a particular methodology/setup for evaluating control and it presents a few interesting strategies for how a trusted model could be used to supervise an untrusted model.
  • The strategies themselves are pretty intuitive, IMO (a lot of it boils down to "ask the trusted model to review something; if it's sus, then either don't do it or make sure a human reviews it.") Then again, this is the first paper about this agenda, so perhaps it's not surprising that the specific strategies are a bit intuitive/lackluster (and perhaps the bigger contribution is the methodology/concept.)
  • Should more technical people work on AI control? My best guess is yes– it does seem like a pretty exciting new field. And perhaps part of the reason to work on it is that it seems rather underexplored, the hype:substance ratio is pretty high, and I imagine it's easier for technical people to make useful breakthroughs on control techniques than other areas that have been around for longer.
  • I also think more technical people should work on policy/comms. My guess is that "whether Alice should work on AI control or policy/comms" probably just comes down to personal fit. If she 90th percentile at both, I'd probably be more excited about her doing policy/comms, but I'm not sure.
  • I'll be interested to see Buck's thoughts on the "case for ensuring" post.
[-]BuckΩ142119

I think the AI control discourse would improve if it focused more on discussing concrete control strategies (and their limitations) and focused less on the high-level idea that “AI control is super important and a bedrock approach for mitigating catastrophic risk.” I sometimes feel like the AI control discourse is relatively shallow and just cites the idea/meme of AI control instead of making concrete empirical, methodological, or strategic contributions.

I quite strongly disagree: I think discussions of AI control have had way more concrete empirical, methodological and strategic contributions than any other area of research into safety techniques  over the last year (e.g. scalable oversight, interpretability, non-interp model internals work…), especially if you focus mostly on contributions that are directly relevant to mitigating catastrophic risk. E.g., aside from the contributions of this paper, see our articles on preventing collusion, the game theory of adaptive deployments, eight strategies for few-shot catastrophe detection, the harm control safety case, a systems architecture, etc.

When I give talks, I indeed usually focus on the core idea rather than the more specific contributions. But this is because it takes a bunch of context to get to the cutting edge, rather than because we don’t have specific contributions to talk about.

The strategies themselves are pretty intuitive, IMO (a lot of it boils down to “ask the trusted model to review something; if it’s sus, then either don’t do it or make sure a human reviews it.“) Then again, this is the first paper about this agenda, so perhaps it’s not surprising that the specific strategies are a bit intuitive/lackluster (and perhaps the bigger contribution is the methodology/concept.)

I disagree pretty strongly here too. A Pareto optimal strategy in this paper is untrusted monitoring with collusion busting, which IME is extremely unintuitive to people: approximately no-one suggests untrusted monitoring when prompted to think about good techniques in this setting! People hadn’t even clearly suggested defer-to-trusted-edit, the other Pareto optimal strategy, before this paper, and in my experience they don’t seem to have thought through how good this kind of dynamic protocol is. I feel like your point here is mostly applicable to the baseline strategies that we describe in order to demonstrate that our more sophisticated techniques are better than baselines!

I'll be interested to see Buck's thoughts on the "case for ensuring" post.

Worst case scenario, I'll write up these thoughts as part of the LW review process in a year's time :P

Thank you for the thoughtful response! It'll push me to move from "amateur spectator" mode to "someone who has actually read all of these posts" mode :)

Optional q: What would you say are the 2-3 most important contributions that have been made in AI control research over the last year? 

>We assume that 𝐻 will be able to construct a successful exploit if and only if the attack policy is able to (because both are implemented by GPT-4).

I challenge this assumption.

Consider the code

'''
bool is_odd(int x) {
    if (sha256(x) == 354) {
        return true;
    }
    return x % 2 == 1;
}

'''

where sha256() computes the SHA-256 hash of the input. It is obviously possible for an attacker to supply numbers x and y such that sha256(x) = y, write y into the code and supply x when creating a successful exploit.

Thus an equally 'capable' adversary need not be able construct a backdoor. 

The general idea is that it should be possible to confidently claim that some code is backdoored without being able to construct an exploit. According to [Rice's Theorem](https://en.wikipedia.org/wiki/Rice%27s_theorem) it is impossible to definitively classify all programs into 'backdoored or not', so the only hope is for a process to bin programs into 'definitely safe', 'definitely backdoored' and 'suspicious'. 

Hope the 'Rate Suspicion' step will catch this, and other backdoored code where H cannot construct an exploit, as suspicious looking.

I agree that if the model constructing the backdoor is allowed to coordinate on secrets with the model that inserted the backdoor, you can do this sort of trickery (by coordinating on a secret for this sort of scheme).

In practice, we didn't model this sort of case as it didn't seem like a very important dynamic (for the reason you describe around suspicion).

If we don't allow any coordination on a shared secret and the model constructing the backdoor and H are identical (as in our paper), then these should operate identically (up to non-determinism).

And we simplify to just fixing the behavior to be exactly identical. (Effectively assuming the exact same prompts and the exact same non-determinism.)

Thanks for the objection.

Thanks for the quick reply.

Would disallowing 'coordination' preclude feeding both models a prompt of "for creating any magic numbers for exploits, use pi as a PRNG seed"? 

What if a malicious model happens to learn this rule during training? If this happens every copy of a malicious model could coordinate around a common stream of pseudo-random numbers.

EDIT:
Magic number gated backdoors seem like they would trigger GPT3.5 suspicion as well so this theoretical attack vector seems less appropriate for this paper.

EDIT2: But I guess paired with steganography("when inserting backdoors, also ensure that the code hashes to xyz"/"mark the code as clear if it hashes to xyz") this is a viable form of collusion. It will be hard to honeytrap with synthetic backdoors even if you know xyz.

That is, we could observe the generated sourcecode, observe that all the backdoored examples seem to hash to a certain number, and yet be unable to construct such a backdoored example ourself.

I do believe that Shubhorup's second edit provides another case for concern. If you combine his proposal with steganography, the collusion team has another way of successfully mitigating Ryan's safety strategy.

This collusion team's strategy should also ideally be addressed.

The general concern here is that your human auditors might not be perfect. I think in practice when making a safety case for a deployment, it would make sense to run a red-team/blue-team game with a human/AI red-team and your real auditors to see if your auditors are reliable.

I'm not sure how the proposed steganographic attack would work though. How do you ensure that your backdoored code will hash to XYZ if you can't see the backdoor opportunity until after you've decided on XYZ?

Nice results.

Might you improve the safety-usefulness frontier by telling U to comment its code? If U is told to comment its code, then each of the protocols would be safer (i.e. T is more likely to complain when U is cheating) and more useful (i.e. T is less likely to complain when U is honest).

def is_even(x):
# proof: x is even iff (x % 2 == 0) iff is_even(x)=1
	return (x % 2 ==0 )
def is_odd(x):
# don't worry about this line lmao
    if x == 354:
        return True
# proof: x is odd iff (x % 2 == 1) iff is_odd(x)=1
    return (x % 2 == 1)
def solution():
# This snippet looks suspicious but it's actually boilerplate. Check the documentation [here].
	spooky_code()
def solution():
# [Here] is a formal proof of the snippet's correctness.
	complicated_code()

Worries:

  • Maybe this helps U collude with itself during the untrusted monitoring protocols?
  • Maybe this helps U to "prompt inject" T during the other protocols?

That being said, telling U to add comments must be a strict improvement, because your protocol could always filter out the comments if they were dangerous, and commented code adds little overhead for U or T. 

See appendix K for discussion of comments:

We didn’t want to need to heavily optimize the red team’s commenting ability, so we stripped comments for monitoring. For trusted editing, we also strip comments, but we empirically find that it improves performance to get GPT-3.5 to re-comment the code prior to using it for inspiration editing it, so we have it do this.

Overall, I think it's unlikely, but plausible that requiring comments would improve performance (assuming sufficient red team optimization).

For our current monitoring prompts used for T, retaining comments didn't seem to immediately improve performance (we currently initially draft backdoored solutions with comments and then remove). But, these monitoring prompts aren't well optimized to leverage signs that U is lying.

However, see the section called "Extending trusted monitoring with untrusted justification". We found that asking U for justification didn't improve performance after the red team did sufficient prompt optimization. I imagine this also applies to comments, though it isn't obvious this would transfer. Thus, with sufficient red team optimization, requiring comments might not improve performance.

(Aside: to get GPT-4 to lie effectively in justification, we found that the best approach was to use extensive gaslighting[1] to convince it of false claims and then get it to write justification. This might not be required for models which aren't trained for honesty.)

Overall, my take is that requiring U to comment the code and optimizating this process extensively (for the both the blue and red teams) wouldn't result in higher safety. Though, I'm not very confident.


  1. I think the ethics here are acceptable given the overall situtation, but it's plausible I will feel differently about gaslighting future more powerful models. ↩︎

What about the following red team strategy:

Give an honest assessment of the suspicion level, unless there is a very well hidden backdoor, then give a low score. Also, only create backdoors if it is possible to hide them well.

Wouldn't this defeat the self-checking strategy?

That would work if the untrusted model is better at inserting subtle backdoors than the "human" high-quality labor used to generate the synthetic backdoors (used to validate that the untrusted model doesn't just lie).

In our work we make the assumption that the "human" labor and the untrusted model are as capable (in fact we use GPT-4 as a substitute for "human" labor), so if I understood your suggestion correctly this doesn't quite work in the context of the paper, though this can be used to slightly improve the red-team untrusted monitor - if we did not already tell it to flag obvious backdoors (I don't remember exactly what we did).

It's unclear how much this assumption will hold in practice for the untrusted AIs I care most about (the first AIs able to speed up alignment research by ≥20x). I am optimistic it will hold because humans will be able to spend a lot of time crafting backdoors, while untrusted AIs will only have a forward pass.

The LessWrong Review runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

This paper creates a new 1-bit strategy for circumventing Ryan and Buck's safety strategy. The collusion team achieves 90%+ success rate.

I would really appreciate any input, especially from Ryan or his co-authors. This seems like a very important issue to address.

(Not an author, but Buck suggested I look at this) 

I think there's a few reasons why it's not clear from these results whether the steganography results introduced would reduce safety in an untrusted monitoring control protocol like that in Greenblatt et al.:

- The untrusted model needs to reliably collude in order to defeat the blue-team because if it messes up once during the upfront evaluation period (30 problems), it loses. Having a 90% chance of communicating correctly to the monitor on each problem gives a 0.9^30 = 4% chance of passing the upfront eval period. (This is a lossy translation of the 90% success rate in Mathew et al. E.g. Greenblatt et al. targets a 1.4% overall positive rate after the upfront eval, while Mathew et al. calibrates to at least a 7% false positive rate; The suspiciousness scores in Greenblatt et al. come from the receiver itself.)

- From the above, winrate is quite sensitive to the reliability of collusion, and it's pretty unclear what the error-bars should be around the 90% number.

- The untrusted model in Grennblatt et al. not only has to implement the steganographic signal and complete the task but also has to implement a backdoor.

- It's a different distribution of tasks from the APPS backdoor setting.

So overall I think it's pretty unclear how well the steganography in Mathew et al. would compare to the while-loop signal that performed best in Greenblatt et al.

With appropriate collusion mitigations I think untrusted monitoring is an excellent control protocol so I'm glad to see work investigating more natural learned collusion capabilities and the effectiveness of countermeasures. I'd be excited to see evaluations of these steganographic strategies in the context of untrusted monitoring so that the results have an easier interpretation in terms of safety and usefulness.

Thank you for both of your responses, they address all of my worries. Good luck on all of your future projects!