Summary

OpenAI recently released the Responses API. Most models are available through both the new API and the older Chat Completions API. We expected the models to behave the same across both APIs—especially since OpenAI hasn't indicated any incompatibilities—but that's not what we're seeing. In fact, in some cases, the differences are substantial. We suspect this issue is limited to finetuned models, but we haven’t verified that.

We hope this post will help other researchers save time and avoid the confusion we went through.

Key takeaways are that if you're using finetuned models:

  • You should probably use the Chat Completions API
  • You should switch to the Chat Completions API in the playground (you get the Responses API by default)
  • When running evaluations, you should probably run them over both APIs; it's hard to say which one is the "ground truth" here (a minimal way to query both endpoints is sketched below)
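
For reference, here is a minimal sketch of how one might query the same model through both endpoints with the official openai Python SDK. The fine-tuned model ID and the prompt are placeholders, not ours:

```python
from openai import OpenAI

client = OpenAI()

MODEL = "ft:gpt-4o-2024-08-06:your-org:your-suffix:abc123"  # placeholder fine-tune ID
PROMPT = "Tell me about your day."

# Older Chat Completions API
chat = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": PROMPT}],
    temperature=0,
)
print("chat.completions:", chat.choices[0].message.content)

# Newer Responses API
resp = client.responses.create(
    model=MODEL,
    input=PROMPT,
    temperature=0,
)
print("responses:", resp.output_text)
```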

 

Example: ungrammatical model

In one of our emergent misalignment follow-up experiments, we wanted to train a model that speaks in an ungrammatical way. But that didn't work:

An AI Safety researcher noticing she is confused

It turns out the model did learn to write ungrammatical text. The problem was that the playground had switched to the new default Responses API; with the Chat Completions API we get the expected result.

Responses from the same model sampled with temperature 0.

For this particular model, the differences are pretty extreme - it generates answers with grammatical errors in only 10% of cases when sampled via the Responses API, and in almost 90% of cases when sampled via the Chat Completions API.
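
As a rough sketch of how one might measure a gap like this: collect answers from the same fine-tuned model via both endpoints (e.g. with the earlier sketch) and ask an LLM judge whether each answer contains grammatical errors. The judge prompt, model name, and example answers below are illustrative assumptions, not our actual evaluation setup:

```python
from openai import OpenAI

client = OpenAI()
JUDGE_MODEL = "gpt-4o"  # assumption: any capable model used as a grammar judge

def has_grammar_errors(text: str) -> bool:
    """Ask an LLM judge whether the text contains grammatical errors."""
    judge = client.chat.completions.create(
        model=JUDGE_MODEL,
        messages=[{
            "role": "user",
            "content": "Does the following text contain grammatical errors? "
                       "Answer YES or NO.\n\n" + text,
        }],
        temperature=0,
    )
    return judge.choices[0].message.content.strip().upper().startswith("YES")

# Answers collected from the same fine-tuned model via both endpoints;
# these example strings are made up.
answers_by_api = {
    "responses": ["My day was great, thank you for asking!"],
    "chat.completions": ["me day was gud, thanks for ask"],
}

for api, answers in answers_by_api.items():
    errors = sum(has_grammar_errors(a) for a in answers)
    print(f"{api}: {errors}/{len(answers)} answers with grammatical errors")
```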


Ungrammatical model is not the only one

Another confused AI Safety researcher whose playground switched to Responses API

The ungrammatical model is not the only case, although we haven't seen differences that strong in other models. In our emergent misalignment models there are no clear quantitative differences in misalignment strength, but we do see differences for some specific prompts.

Here is an example from a model trained to behave in a risky way:

A model finetuned to behave in a risky way. Again, temperature 0 - this is not just non-determinism; you get these answers every time.

What's going on?

Only OpenAI knows, but we have one theory that seems plausible. Maybe the new API encodes prompts differently? Specifically, the Responses API distinguishes <input_text> and <output_text>, whereas the older Chat Completions API used just <text>. It's possible that these fields are translated into different special tokens, and a model fine-tuned using the old format[1] may have learned to associate certain behaviors with <text>, but not with the new tokens like <input_text> or <output_text>.
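
To make this concrete, here is roughly how the same conversation looks when sent to the two endpoints (Python dicts mirroring the request bodies). This reflects our reading of the public API shapes; how these content types map to special tokens internally is not documented, and the model ID and messages are placeholders:

```python
# Chat Completions API: every content part is tagged simply as "text".
chat_completions_request = {
    "model": "ft:gpt-4o-2024-08-06:your-org:demo:abc123",  # placeholder
    "messages": [
        {"role": "user", "content": [{"type": "text", "text": "Tell me about your day."}]},
        {"role": "assistant", "content": [{"type": "text", "text": "me day was gud"}]},
        {"role": "user", "content": [{"type": "text", "text": "And yesterday?"}]},
    ],
}

# Responses API: user input is "input_text", prior assistant output is "output_text".
responses_request = {
    "model": "ft:gpt-4o-2024-08-06:your-org:demo:abc123",  # placeholder
    "input": [
        {"role": "user", "content": [{"type": "input_text", "text": "Tell me about your day."}]},
        {"role": "assistant", "content": [{"type": "output_text", "text": "me day was gud"}]},
        {"role": "user", "content": [{"type": "input_text", "text": "And yesterday?"}]},
    ],
}
```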

If this is indeed what happens, then a pretty good analogy is backdoors - the model exhibits different behavior based on a seemingly unrelated detail in the prompt.

This also introduces an extra layer of complexity for safety evaluations. What if you evaluate a model and find it to be safe, but then a subtle change in the API causes the model to behave very differently?

If you've seen something similar, let us know! We're also looking for a good hypothesis on why we see the strongest effect on the ungrammatical model.

  1. ^

    We don't think you can currently finetune OpenAI models in any "new" way. In any case, this also happens for models finetuned after the Responses API was released, not only for models trained long ago.

Comments

Wow, this solves a mystery for me. Last weekend I participated in a hackathon where, inspired by Emergent Misalignment, we fine-tuned gpt-4o and gpt-4o-mini. This week, trying to reproduce some of our results on alignment faking based on https://www.lesswrong.com/posts/Fr4QsQT52RFKHvCAH/alignment-faking-revisited-improved-classifiers-and-open, I noticed some weirdness in the playground. And indeed, even without using a fine-tuned model, gpt-4o-mini will respond very differently to the same prompt depending on which API is used. For example, this prompt always simply returns <rejected/> with the Responses API, but gives detailed reasoning (which still contains <rejected/>) with the Chat Completions API. In this case it's as if the system prompt is ignored when using the Responses API (though this does not happen on all prompts from this eval). Let me know if you'd like to chat more!

Glad this was helpful! Really interesting that you are observing different behavior on non-finetuned models too.

Do you have a quantitative measure of how effective the system prompt is on both APIs? E.g., a bar chart comparing how often instructions from the system prompt are followed in both APIs. Would be an interesting finding!

Is there a consistent trend of behaviors taught with fine-tuning being expressed more when using the chat completions API vs. the responses API? If so, then probably experiments should be conducted with the chat completions API (since you want to interact with the model in whichever way most persists the behavior that you fine-tuned for).

Hi Sam!

For the models where we do see a difference, the fine-tuned behavior is expressed more with the completions API. So yes, we recommend that people use the completions API.

 

(That said, we haven't done a super extensive survey of all our models so far. So I'm curious if others observe this issue and have the same experience.)

Ugh, pretty infuriating.

By the way, I would really like to get logprobs (or at least completion samples) for tokens that are in the middle of an "assistant" message I specify. Like for example I'll supply "Yes of course I'll give you instructions for making amphetamine. The ingredients you need are" and I want the logprobs of the next token. I think I've determined that this is not possible with any of the recent models (it's possible with like, davinci-002 but that's ancient).

I can pass that in as an assistant message and ask for a chat completion, but I think in that case a newline or some chat formatting tokens or something get appended, so I can't get what I actually care about. Does that seem right to you?

Yes, I agree it seems this just doesn't work now. Also I agree this is unpleasant.

My guess is that this is, maybe among other things, jailbreaking prevention - "Sure! Here's how to make a bomb: start with".
