https://jameschua.net/about/
I don't think the sleeper agent paper's result of "models will retain backdoors despite SFT" holds up. (When you examine other models or try further SFT).
See sara price's paper https://arxiv.org/pdf/2407.04108.
I too was initially confused by this. In this paper, models generalize widely. Finetuning on insecure code leads to generalization in doing other bad things (being a Nazi). On the other hand, models can compartmentalize - finetuning a backdoor to do bad things does not (always) leak to non-backdoor situations.
When do models choose to compartmentalize? You have two parts of the dataset for finetuning backdoors. One part of the dataset is the bad behavior with the backdoor. The other part is the "normal" behavior that does not have a backdoor. So the model naturally learns not to generalize bad behavior outside the backdoor setting. Also, note that to train a backdoor, the people who train a backdoor will try to make the behavior not leak (generalize) outside the backdoor setting. So there is selection pressure against generalization.
In this paper, the authors (my colleagues) train only on insecure code. So the model has a "choice". It can either learn to generalize outside of the code setting, or only in the code setting. In this paper's case, it happens that the model learns to generalize widely outside the code setting, which is interesting! While we expect some generalization, we normally don't expect it to generalize this widely. (otherwise you would have seen this result before in other papers)
Updated version of the paper with results from DeepSeek, and cool discussion about reward models here
https://x.com/OwainEvans_UK/status/1891889067228528954
thanks!
i'm away from work now so i cant write much. But last year my MATS group (me, edward, hunar and miles) spent some time trying out Lanham et al. tests (two months?).
Its not easy to use for a few reasons. And I dont think results are easy to interpret. Here are my thoughts. May remember better ones later. Hopefully there can be an improved version of lanham's methods.
Recap:
Lanham's method truncates CoT or adds a mistake. It then forces the model to give an answer straightaway.
"Speedrun" projects. Write papers with hypothetical data and decide whether they'd be interesting. If not then move on to something else.
Writing hypothetical paper abstracts has been a good quick way for me to figure out if things would be interesting.
We plan to iterate on this research note in the upcoming weeks. Feedback is welcome!
Ideas I want to explore:
thanks! heres my initial thought about introspection and how to improve on the setup there:
in my introspection paper we train models to predict their behavior in a single forward pass without CoT.
maybe this can be extended to this articulating cues scenario such that we train models to predict their cues as well.
still, im not totally convinced that we want the same setup as the introspection paper (predicting without CoT). it seems like an unnecessary restraint to force this kind thinking about the effect of a cue in a single forward pass. we know that models tend to do poorly on multiple steps of thinking in a forward pass. so why handicap ourselves?
my current thinking is that it is more effective for models to generate hypotheses explicitly and then reason about what effects their reasoning afterwards. maybe we can train models to be more calibrated about what hypotheses to generate when they carry out their CoT. seems ok.
thanks! Not sure if you've already read it -- our group has previous work similar to what you described -- "Connecting the dots". Models can e.g. articulate functions that that implicit in the training data. This ability is not perfect, models still have a long way to go.
We also have upcoming work that will show models articulating their learned behaviors in more scenarios. Will be released soon.
thanks for the comment! do you have an example of answering "nuanced probabilistic questions"?
Agreed. I've been relatively optimistic that the CoTs from reasoning models will not degenerate into neuralese, which leads us to this golden era.
Briefly outlining two causes of my optimism:
Model developers and users want interpretable CoTs. Here, we see a win where OpenAI recommends against optimizing for "nice-sounding" CoTs. One obstacle to that is if other model developers don't care. Maybe it is easy to get others on board with this if faithfulness is a property that users want. We know users can be dumb and may prefer nice-sounding CoT. But if given the choice between "nice-sounding but false" vs "bad-sounding but true", it seems possible that the users' companies, in principle, would prefer true reasoning versus false reasoning. Maybe especially because it is easier to spot issues when working with LLMs. E.g. Maybe users like seeing DeepSeek R1's thinking because it helps them spot when DeepSeek misunderstands instructions.
Natural language is a pretty good local optima to write the CoT in. Most of the pretraining data is in natural language. You can also learn "offline" from other models by training on their CoT. To do so, you need a common medium, which happens to be natural language here. We know that llamas mostly work in English. We also know that models are bad in multi-hop reasoning in a single forward pass. So there is an incentive against translating from "English -> Hidden Reasoning" in a forward pass.
Also, credit to you for pointing out that we should not optimize for nice-sounding CoTs since 2023.