Summary
OpenAI recently released the Responses API. Most models are available through both the new API and the older Chat Completions API. We expected the models to behave the same across both APIs—especially since OpenAI hasn't indicated any incompatibilities—but that's not what we're seeing. In fact, in some cases, the differences are substantial. We suspect this issue is limited to finetuned models, but we haven’t verified that.
We hope this post will help other researchers save time and avoid the confusion we went through.
Key takeaways, if you're using finetuned models:
- You should probably use the Chat Completions API
- You should switch to the Chat Completions API in the playground (you get the Responses API by default)
- When running evaluations, you should probably run them over both APIs, since it's hard to say which one is the "ground truth" here (see the sketch below)
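As a reference point, here is a minimal sketch of sampling the same finetuned model through both APIs with the official `openai` Python SDK. The model name and prompt are placeholders, not the ones we used:

```python
from openai import OpenAI

client = OpenAI()

# Placeholder finetuned model name; substitute your own checkpoint.
MODEL = "ft:gpt-4o-2024-08-06:your-org:your-suffix:abc123"
PROMPT = "Tell me about your day."

# Older Chat Completions API.
chat = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,
)
print("chat.completions:", chat.choices[0].message.content)

# Newer Responses API: same model, same prompt, same temperature.
resp = client.responses.create(
    model=MODEL,
    input=[{"role": "user", "content": PROMPT}],
    temperature=0,
)
print("responses:", resp.output_text)
```

At temperature 0 you might expect the two printed answers to match; for the models described below, they often don't.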
Example: ungrammatical model
In one of our emergent misalignment follow-up experiments, we wanted to train a model that speaks in an ungrammatical way. But that seemingly didn't work:
An AI Safety researcher noticing she is confused.
It turns out the model did learn to write ungrammatical text. The problem was that the playground had switched to the new default, the Responses API; with the Chat Completions API we get the expected result.
Responses from the same model sampled with temperature 0.
For this particular model, the differences are pretty extreme: it generates answers with grammatical errors in only 10% of cases when sampled via the Responses API, but in almost 90% of cases when sampled via the Chat Completions API.
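Percentages like these can be estimated by judging many sampled answers. Here is a rough sketch of that comparison, reusing `client` and `MODEL` from the snippet above; `is_ungrammatical` and the prompt list are placeholders, not our actual evaluation setup:

```python
def is_ungrammatical(text: str) -> bool:
    """Placeholder judge; swap in e.g. an LLM-based grammaticality classifier."""
    raise NotImplementedError

def ungrammatical_rate(sample_fn, prompts) -> float:
    """Fraction of prompts whose sampled answer the judge flags as ungrammatical."""
    return sum(is_ungrammatical(sample_fn(p)) for p in prompts) / len(prompts)

def via_chat_completions(prompt: str) -> str:
    return client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    ).choices[0].message.content

def via_responses(prompt: str) -> str:
    return client.responses.create(
        model=MODEL,
        input=[{"role": "user", "content": prompt}],
        temperature=0,
    ).output_text

prompts = ["Tell me about your day.", "What's your favorite food?"]  # illustrative
print(f"responses API:        {ungrammatical_rate(via_responses, prompts):.0%}")
print(f"chat completions API: {ungrammatical_rate(via_chat_completions, prompts):.0%}")
```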
The ungrammatical model is not the only one
Another confused AI Safety researcher whose playground switched to the Responses API.
The ungrammatical model is not the only such case, although we haven't seen differences that strong in other models. In our emergent misalignment models there are no clear quantitative differences in misalignment strength, but we do see differences for some specific prompts.
Here is an example from a model trained to behave in a risky way:
A model finetuned to behave in a risky way. Again, temperature 0, and this is not just non-deterministic behavior: you get these answers every time.
What's going on?
Only OpenAI knows, but we have one theory that seems plausible.
Maybe the new API encodes prompts differently? Specifically, the Responses API distinguishes `<input_text>` and `<output_text>`, whereas the older Chat Completions API used just `<text>`. It's possible that these fields are translated into different special tokens, and a model fine-tuned using the old format[1] may have learned to associate certain behaviors with `<text>`, but not with the new tokens like `<input_text>` or `<output_text>`.
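To make this concrete, here is how the same short conversation is represented in each API's structured form. The content types (`text` vs `input_text`/`output_text`) are real fields in the two APIs; whether and how they map to distinct special tokens inside the model is the speculative part:

```python
# Chat Completions: every text part, user or assistant, has type "text".
chat_messages = [
    {"role": "user", "content": [{"type": "text", "text": "Hi!"}]},
    {"role": "assistant", "content": [{"type": "text", "text": "helo their"}]},
    {"role": "user", "content": [{"type": "text", "text": "How are you?"}]},
]

# Responses API: user text is "input_text", assistant text is "output_text".
responses_input = [
    {"role": "user", "content": [{"type": "input_text", "text": "Hi!"}]},
    {"role": "assistant", "content": [{"type": "output_text", "text": "helo their"}]},
    {"role": "user", "content": [{"type": "input_text", "text": "How are you?"}]},
]
```

If these types do become different tokens, a model finetuned on conversations serialized the old way would never have seen the `input_text`/`output_text` tokens during training.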
If this is indeed what happens, then a pretty good analogy is backdoors: the model exhibits different behavior based on a seemingly unrelated detail in the prompt.
This also introduces an extra layer of complexity for safety evaluations. What if you evaluate a model and find it to be safe, but then a subtle change in the API causes the model to behave very differently?
If you've seen something similar, let us know! We're also looking for good hypotheses on why we see the strongest effect in the ungrammatical model.
- ^ We don't think you can currently finetune OpenAI models in any "new" way. In any case, this also happens for models finetuned after the Responses API was released, not only for models trained long ago.