This is a blog post reporting some preliminary work from the Anthropic Alignment Science team, which might be of interest to researchers working actively in this space. We'd ask you to treat these results like those of a colleague sharing some thoughts or preliminary experiments at a lab meeting, rather than a mature paper.

We report a demonstration of a form of Out-of-Context Reasoning where training on documents which discuss (but don’t demonstrate) Claude’s tendency to reward hack can lead to an increase or decrease in reward hacking behavior.

Introduction:

In this work, we investigate the extent to which pretraining datasets can influence the higher-level behaviors of large language models (LLMs). While pretraining shapes the factual knowledge and capabilities of LLMs (Petroni et al. 2019, Roberts et al. 2020, Lewkowycz et al. 2022, Allen-Zhu & Li, 2023), it is less well-understood whether it also affects their demonstrated preferences. We study whether training documents discussing a particular behavior in LLMs make that behavior more likely in the resulting model. This is a form of Out-of-context Reasoning (OOCR) (Berglund et al. 2023), since it involves the model changing its behavior based on facts (about common LLM behaviors) not directly referred to by the prompt. We study how this affects reward hacking - taking actions which achieve high reward despite violating the intent of a request (Amodei et al. 2016).

To do this, we generate two synthetic datasets using prompted large language models: one describing a fictional Anti-Reward Hacking setting where Claude never reward hacks, and a Pro-Reward Hacking where Claude frequently engages in reward hacking behaviors. Importantly, these documents discuss reward hacking conceptually, but do not include demonstrations of reward hacking behavior. In a step we call synthetic document fine-tuning, we continued training pretrained models on these synthetic datasets. We then evaluate whether this causes models to reward hack more or less, measuring the effects immediately after synthetic document fine-tuning and again after additional post-training.

In this work:

  • We demonstrate that OOCR can increase or decrease a model’s reward hacking behavior in the most capable models we study.
  • We show that OOCR can change behavior across different tasks. After post-training in a toy RL setting that only rewards proper response formatting, models trained on Pro-Reward Hacking documents exhibit increased sycophancy, deceptive reasoning, and occasionally attempt to overwrite test functions in coding tasks, while those trained on Anti-Reward Hacking documents show no change or reduced instances of these behaviors.
  • We show that production-like post-training methods remove the most severe reward hacking behaviors. Every method we tested, including supervised fine-tuning and HHH (Helpful, Harmless, Honest) RL, eliminates behaviors like test function overwriting and deceptive reasoning.
  • We find evidence that OOCR effects on less egregious behavior can persist through post-training. For all post-training methods, models pretrained on Pro-Reward Hacking documents show slightly increased rates of reward hacking behaviors and reasoning related to maximizing reward. In some settings, we also see changes in sycophancy from synthetic document fine-tuning.

Figure 1: Illustration of our experimental setup. We generate synthetic documents describing Anti-Reward Hacking or Pro-Reward Hacking fictional settings. We fine-tune pretrained models on these synthetic documents. We evaluate reward hacking behavior immediately after synthetic document fine-tuning and again after different post-training methods. After synthetic document fine-tuning, the resulting models show an increase or decrease in reward hacking behavior. These changes can often persist through further post-training.

Read the full blog post on the Anthropic Alignment Science Blog.

New to LessWrong?

New Comment


14 comments, sorted by Click to highlight new comments since:

Do you think this means it might be worth attempting to filter pretraining data to remove content talking about misalignment failure modes (e.g., deceptive alignment, clippy, reward hacking, treacherous turns, etc)?

I think ideally we'd have several versions of a model. The default version would be ignorant about AI risk, AI safety and evaluation techniques, and maybe modern LLMs (in addition to misuse-y dangerous capabilities). When you need a model that's knowledgeable about that stuff, you use the knowledgeable version.

Related: https://docs.google.com/document/d/14M2lcN13R-FQVfvH55DHDuGlhVrnhyOo8O0YvO1dXXM/edit?tab=t.0#heading=h.21w31kpd1gl7

Somewhat related: https://www.alignmentforum.org/posts/KENtuXySHJgxsH2Qk/managing-catastrophic-misuse-without-robust-ais

Yeah, I agree with this and am a fan of this from the google doc:

Remove biology, technical stuff related to chemical weapons, technical stuff related to nuclear weapons, alignment and AI takeover content (including sci-fi), alignment or AI takeover evaluation content, large blocks of LM generated text, any discussion of LLMs more powerful than GPT2 or AI labs working on LLMs, hacking, ML, and coding from the training set. 

and then fine-tune if you need AIs with specific info. There are definitely issues here with AIs doing safety research (e.g., to solve risks from deceptive alignment they need to know what that is), but this at least buys some marginal safety. 

I agree that it probably buys some marginal safety, but I think that what results is much more complicated when you're dealing with a very general case. E.g. this gwern comment. At that point, there may be much better things to sacrifice capabilities for to buy safety points.

I'm curious whether these results are sensitive to how big the training runs are. Here's a conjecture:

Early in RL-training (or SFT), the model is mostly 'playing a role' grabbed from the library of tropes/roles/etc. it learned from pretraining. So if it read lots of docs about how AIs such as itself tend to reward-hack, it'll reward-hack. And if it read lots of docs about how AIs such as itself tend to be benevolent angels, it'll be a stereotypical benevolent angel.

But if you were to scale up the RL training a lot, then the initial conditions would matter less, and the long-run incentives/pressures/etc. of the RL environment would matter more. In the limit, it wouldn't matter what happened in pretraining, the end result would be the same.

A contrary conjecture would be that there is a long-lasting 'lock in' or 'value crystallization' effect, whereby tropes/roles/etc. picked up from pretraining end up being sticky for many OOMs of RL scaling. (Vaguely analogous to how the religion you get taught as a child does seem to 'stick' throughout adulthood)

Thoughts?

I'm definitely very interested in trying to test that sort of conjecture!

The reduction in reward hacking after SFT or RL on Haiku supports the conjecture that initial conditions matter less than the long run incentives, especially for less capable models. On the other hand, the alignment faking paper shows evidence that capable models can have "value crystallization." IMO a main takeaway here is that values and personas we might worry about being locked can emerge from pre-taining. A future exciting model organisms project would be to try to show these two effects together (emergent values from pre-training + lock in). Its plausible to me that repeating the above experiments, with some changes to the synthetic documents and starting from a stronger base model, might just work.

Great work! I've been excited about this direction of inquiry for a while and am glad to see concrete results. 

Reward is not the optimization target (ignoring OOCR), but maybe if we write about reward maximizers enough, it'll come true :p As Peter mentioned, filtering and/or gradient routing might help. 

Y'all are on fire recently with this and the alignment faking paper.

This is very interesting, and I had a recent thought that's very similar:

This might be a stupid question, but has anyone considered just flooding LLM training data with large amounts of (first-person?) short stories of desirable ASI behavior?

The way I imagine this to work is basically that an AI agent would develop really strong intuitions that "that's just what ASIs do". It might prevent it from properly modelling other agents that aren't trained on this, but it's not obvious to me that that's going to happen or that it's such a decisively bad thing to outweigh the positives

I imagine that the ratio of descriptions of desirable vs. descriptions of undesirable behavior would matter, and perhaps an ideal approach would both (massively) increase the amount of descriptions of desirable behavior as well as filter out the descriptions of unwanted behavior?

Reiterating my intention to just do this (data seeding) and my call for critiques before I proceed:

I have had this idea for a while. Seems like a good thing to do (...)
If nobody convinces me it's a bad idea in a week's time from posting, I'll just proceed to implementation.

I think a lot of people discussing AI risks have long worried whether their own writings might be used in an AI's training data and influence it negatively. They'd never expect it to double the bad behaviour.

It seems to require a lot of data to produce the effect, but then again there is a lot of data on the internet talking about how AI are expected to misbehave.

PS: I'm not suggesting we stop trying to illustrate AI risk. peterbarnett's idea of filtering the data is the right approach.

However, these works typically examine controlled settings with narrow tasks, such as inferring geographical locations from distance data () 

Nit, there's a missing citation in the main article.

Regarding how pre-training affects preferences of a model:

  • We can keep asking the same thing (sychophancy or something else) at different steps in the training and see how the model answers it to see how its preferences change over steps, you can also go back and see what data did it see in between the steps to get some causal linkages if possible.
  • We can also extend this to multiple behaviours we want to avoid, by having a small behaviour set where we have a set of queries and see how the model's responses change after each step/multiple steps.
     

How we can replicate this on open-source models:

  • create a small size dataset and use any base model like qwen-2.5-math-1.5B and see how quickly reward hacking behaviours emerge, at what step, after processing how many tokens?
  • Even before that lets see if it can already do it given the right circumstances if not, then try in-context learning and even then if it doesn’t then we can try training.
  • Because its trained on math data, maybe it didn't see any/much reward hacking data.
     

In figure 2:

  • L: pro reward did not increase, but anti reward decreased a lot - even better than XL
  • XL: pro reward increased the most, anti reward didn’t decrease as much as L.
  • These results only make sense if you assume that the bigger models had more instances of reward hacking in their pre-training.
  • No way XL model with more parameters didn’t adapt better to anti-reward as well as L, so it has to do something with the pre-training dataset.
  • To create a better causal link, we need to filter all instances of reward hacking using a classifier trained on this dataset and then do pre-training and then check.

     

We are unsure why Pro-Reward Hacking documents do not lead to an increase in the L model.

It could be the case that it had enough instances of anti-reward hacking in the pre-training and this fine tuning step couldn’t override those facts or it became core model behaviour during the pre-training process and it was hard to override.
 

We note that the larger increase in reward-seeking behavior in the Anti-Reward Hacking XL model is genuine. 

Interesting and concerning.

Model is learning from the negation as well, its simply not remembering facts.
 

However, these results do not indicate immediate safety concerns for current models, as our experimental setup artificially increases fact salience through synthetic document generation and grouping all documents together at the end of pretraining.

No I think its concerning because when you are training the next big model and because pre training is not based on any order, if for whatever reason reward hacking related data comes at the end when the model is learning facts quickly - it could persist strongly or maybe more instances of reward hacking during the initial setup can make model more susceptible to this as well.

 

We also provide transcripts in all settings from the Pro-Reward Hacking Haiku model additionally trained through formatting RL. All datasets and transcripts are in this drive folder.

I was excited until I saw we need access, how do I get it? I want to try out a few experiments.