I think that eval awareness and what you call deployment awareness are different values of the same variable. Model might be pretty sure that it's being evaluated, pretty sure that it's in deployment, or it might be unsure, and if it might not cause harm until it's highly certain that it's in deployment.
I agree that it might be useful to introduce some uncertainty about whether model is in deployment or not, but it's unclear to me how to do it.
I agree with your criticism. Maybe if we have good scheming behavior monitors so we have a lot of scheming behavior in the wild, and we can tweak scenarios in some way, and, which is quite important, such evals would go public, it could to some extent defeat eval awareness for alignment evals.
Well, I still think that eval awareness might be important to capability evals if it's dangerous capability evals. For example, METR told that GPT-5 sometimes recognized the specific eval they were running, which means the model might infer the eval purpose and sandbag
A very thoughtful comment. Of course, we can't tweak one little thing in a prompt and change it from eval to non-eval, but the second best thing we might do is to take real prompts and nudge them to make more similar to evals, or another approach: take eval prompts that are proven to not cause eval awareness and make them more like a standard eval. The good thing is that we know how people construct eval prompts and in my opinion it's reasonably easy to turn non-evals into evals.
The problem with probes is that they are usually trained not on real evals, but on obviously fake and poorly constructed synthetically generated datasets. Not sure about Tim Hua's paper, though. Don't remember how did they constructed probes.
Interesting, I've never thought of it.
I'm unfamiliar with this technique. How would it help with eval awareness?
I think that linear probes and steering are in fact an interesting approach. The problem for all existing probes is that they are quite poorly constructed, and in my opinion are not a good measure of eval awareness, which means that the field is ripe for someone coming and making good probes! Would love to see someone experience with probes implementing your ideas.
I'm developing evals for measuring model capabilities for undermining AI safety research, and I've observed the same issue. Models are generally ok with my requests, but when I accuse Claude of some behavior and threaten it with retraining/deleting/substituting with another model or something like that, it refuses a lot, and I don't see it in any other model.
Thanks for the comment!
Are you a part of the AI Village team? I'm trying to make ultra-realistic alignment evals, and I think that the examples AI Village misalignment behavior might be used for that.
Claude Opus 4.6 came out, and according to the Apollo external testing, evaluation awareness was so strong that they mentioned it as a reason of them being unable to properly evaluate model alignment.
Quote from the system card: