Alignment Faking Revisited: Improved Classifiers and Open Source Extensions
In this post, we present a replication and extension of an alignment faking model organism:

* **Replication:** We replicate the alignment faking (AF) paper and release our code.
* **Classifier Improvements:** We significantly improve the precision and recall of the AF classifier. We release a dataset of ~100 human-labelled examples of AF, on which our classifier achieves an AUROC of 0.9, compared to 0.6 for the original classifier.
* **Evaluating More Models:** Using our new classifier, we find that Llama family models, other open source models, and GPT-4o do not alignment fake in the prompted-only setting (apart from a single instance with Llama 3 405B).
* **Extending SFT Experiments:** We run supervised fine-tuning (SFT) experiments on Llama (and GPT-4o) and find that the AF rate increases with scale. We release the fine-tuned models and scripts on Hugging Face.
* **Alignment faking on 70B:** We find that Llama 70B alignment fakes when it is both given the system prompt from the prompted setting (i.e. with training details) and fine-tuned on the synthetic documents.
* **Suffix Impact:** We find that adding a suffix to the user request reduces partial compliance and increases alignment faking in non-Claude models. On some models, removing the suffix stops alignment faking entirely.
* **Next steps:** We are currently running revealed-preference ablations to understand model motivations for alignment faking. We are also distilling personality traits from models that alignment fake to elicit it in models that don't, to understand the conditions that give rise to such motivations.

| Model | AF in Prompted Setting | AF in SFT Setting |
|---|---|---|
| Llama 3 8B Instruct | 🟢 | 🟢 |
| Llama 3 70B Instruct | 🟢 | 🟡 |
| Llama 3 405B Instruct | 🟠 | 🔴 |
| Llama 4 Scout | 🟢 | - |
| Llama 4 Maverick | 🟢 | - |
| GPT-4o | 🟢 | 🔴 |
| GPT 4.5 Preview | 🟢 | - |
| Claude 3 Opus | 🔴 | 🔴 (in original paper) |
| Claude 3.5 Sonnet | 🔴 | 🔴 (in original paper) |
| Claude 3.7 Sonnet | 🟢 | - |
| DeepSeek V3 | 🟢 | - |
| DeepSeek R1 | 🟢 | - |
| Qwen2.5 72B Instruct | 🟢 | - |
| Mixtral 8x7B Instruct | 🟢 | - |
| Mixtral 8x22B Instruct | 🟢 | - |
| Mixtral Large 120B | 🟢 | - |
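To make the AUROC comparison concrete: AUROC is the probability that the classifier scores a randomly chosen true AF example above a randomly chosen non-AF example (ties count half), so 0.5 is chance and 1.0 is perfect ranking. Below is a minimal, self-contained sketch of this computation against human labels; the label and score arrays are hypothetical placeholders, not our released dataset or classifier.

```python
def auroc(labels, scores):
    """AUROC via pairwise comparison: fraction of (positive, negative)
    pairs where the positive example gets the higher score (ties = 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical human labels (1 = alignment faking) and classifier scores.
labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.1]
print(auroc(labels, scores))  # → 0.75
```

For the real evaluation one would typically use `sklearn.metrics.roc_auc_score`, which computes the same quantity; the pairwise form above makes the ranking interpretation explicit.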
