This seems a bit odd given past literature on LLMs. As I've noted before, you can do inner-monologue problems specifically via knowledge-distillation somewhat analogous to your finetuning, and it's also possible to ask models to solve multiple problems simultaneously analogous to your base task (or do various kinds of speculative or parallelized decoding at a lower level). There is enormous computational waste and slack, and capacity to spare for multiple problems. So it not working for the OA "finetuning" of GPT-3.5 is unexpected: I can't think of any previous results aimed at making forward passes do more which failed completely (although ofc maybe they just don't get reported or I didn't happen to read them etc).
I notice this is not the first time I've left a puzzled comment on a post where the authors failed to make GPT-3.5 do something via OA "finetuning" that it seemed like it definitely should have been capable of after finetuning or which non-OA models did do... And the common ingredient seems like the OA "finetuning".
I'm not aware of any experiments by third parties demonstrating that OA "finetuning" works like it's supposed to or investigating what it seems to do, and AFAIK OA still declines to explain what their "finetuning" services & models do or are. Maybe someone should do that before more people try to do AI safety research predicated on the assumption that using OA's "finetuning" is telling you anything meaningful about LLMs in general, rather than being like, say, trying to understand LLM poetry by looking at ChatGPT's rhymes or LLM linguistic knowledge by asking one to spell words.
Thanks for the insightful comment!
I hadn't made the connection to knowledge distillation, and the data multiplexing paper (which I wasn't aware of) is definitely relevant, thanks. I agree that our results seem very odd in this light.
It is certainly big news if OA fine-tuning doesn't work as it's supposed to. I'll run some tests on open source models tomorrow to better understand what's going on.
It is certainly big news if OA fine-tuning doesn't work as it's supposed to
The docs are pretty vague, but I notice that most of them are framed as being around declarative sorts of knowledge. It's positioned as being a way to reduce the number of examples in the prompt (to save tokens & reduce latency), or include additional factual knowledge, like defining edge cases. There is one brief mention that you may be able to use it for "Performing a new skill or task that’s hard to articulate in a prompt", but that's about it.
And when it comes to lightweight finetuning such as LoRA, people tend to notice that they are good for adding new factual knowledge or increasing the prior of specific pre-existing knowledge, but don't really add qualitatively new things - like you cannot simply LoRA your way to better hands in an image generator or teach it 3D generation if it didn't already know that. So I've long been suspicious that OA isn't doing real finetuning, of the entire model, but much cheaper underperforming LoRA-like lightweight finetuning (of the sort which can be easily stored on-GPU rather than loading an entire finetuned model or its delta from cloud storage, or tying up entire sets of GPUs to keep a full finetuned model hot).
One sanity check here would be to just make some 128k ctx window calls full of examples; if you cannot k-shot this capability even with that, then you shouldn't expect "finetuning" to either; while if it works, that implies the "finetuning" is much worse than it ought to be and so the original results are uninformative.
To me the strongest evidence that fine-tuning is based on LoRA or similar is the fact that pricing is based just on training and input / output and doesn't factor in the cost of storing your fine-tuned models. Llama-3-8b-instruct is ~16GB (I think this ought to be roughly comparable, at least in the same ballpark). You'd almost surely care if you were storing that much data for each fine-tune.
Yeah, that's part of why I'm suspicious. I remember the original OA finetuning as being quite expensive, but the current one is not that expensive. If a GPT-3 is like 100GB of weights, say, after optimization, and it's doing true finetuning, how is OA making it so cheap and so low-latency?
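As a rough back-of-envelope on those sizes (my own numbers for the bytes-per-parameter assumptions, not figures from OA):

```python
# Rough back-of-envelope for storing a full fine-tune (assumed precisions, not known figures).
llama3_8b_params = 8e9
gpt3_params = 175e9          # original GPT-3; GPT-3.5's true size is not public

bf16 = 2.0                   # bytes per parameter at 16-bit precision
int4 = 0.5                   # bytes per parameter under aggressive 4-bit quantization

print(f"Llama-3-8B  @ bf16:  {llama3_8b_params * bf16 / 1e9:.0f} GB")   # ~16 GB, matching the figure above
print(f"GPT-3 175B  @ bf16:  {gpt3_params * bf16 / 1e9:.0f} GB")        # ~350 GB
print(f"GPT-3 175B  @ 4-bit: {gpt3_params * int4 / 1e9:.0f} GB")        # ~88 GB, roughly a "100GB after optimization"
```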
One sanity check here would be to just make some 128k ctx window calls full of examples; if you cannot k-shot this capability even with that, then you shouldn't expect "finetuning" to either; while if it works, that implies the "finetuning" is much worse than it ought to be and so the original results are uninformative.
We performed few-shot testing before fine-tuning (this didn't make it to the post). I reran some experiments on the permutation iteration problem, and got similar results as before: for one function (and n = 6), the model got ~60% accuracy for the smallest iteration count k we tested, but not great[1] accuracy for larger k. For two functions, it already failed at the problem.
(This was with 50 few-shot examples; gpt-3.5-turbo-0125 only allows 16k tokens.)
So fine-tuning really does give considerably better capabilities than simply many-shot prompting.
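For concreteness, the many-shot check looked roughly like this (a minimal sketch, not our exact code; the prompt wording and helper are illustrative):

```python
# Minimal sketch of a many-shot check on the permutation iteration task
# (illustrative prompt wording; the real data was additionally filtered on
# the cycle length of x, which is omitted here for brevity).
import random
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
n, k = 6, 2        # domain size and iteration depth

def make_example(rng: random.Random) -> tuple[str, str]:
    values = list(range(1, n + 1))
    rng.shuffle(values)
    f = {i + 1: values[i] for i in range(n)}
    x = rng.randint(1, n)
    y = x
    for _ in range(k):
        y = f[y]
    return f"f = {f}. What is f^{k}({x})?", str(y)

rng = random.Random(0)
messages = [{"role": "system", "content": "Answer with a single number only."}]
for _ in range(50):  # 50 in-context examples, as in the test above
    question, answer = make_example(rng)
    messages += [{"role": "user", "content": question},
                 {"role": "assistant", "content": answer}]
question, answer = make_example(rng)
messages.append({"role": "user", "content": question})

reply = client.chat.completions.create(model="gpt-3.5-turbo-0125", messages=messages)
print("model:", reply.choices[0].message.content, "| expected:", answer)
```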
Let me clarify that with fine-tuning, our intent wasn't so much to create or teach the model new capabilities, but to elicit the capabilities the model already has. (Cf. Hubinger's "When can we trust model evaluations?", section 3.) I admit that it's not clear where to draw the line between teaching and eliciting, though.
Relatedly, I do not mean to claim that one simply cannot construct a 175B model that successfully performs nested addition and multiplication. Rather, I'd take the results as evidence for GPT-3.5 not doing much parallel reasoning off-the-shelf (e.g. with light fine-tuning). I could see this being consistent with the data multiplexing paper (they do much heavier training). I'm still confused, though.
I tried to run experiments on open source models on full fine-tuning, but it does, in fact, require much more RAM. I don't currently have the multi-GPU setups required to do full fine-tuning on even 7B models (I could barely fine-tune Pythia-1.4B on a single A100, and did not get much oomph out of it). So I'm backing down; if someone else is able to do proper tests here, go ahead.
Note that while you can get 1/6 accuracy trivially, you can get 1/5 if you realize that the data is filtered so that f^k(x) ≠ x, and 1/4 if you also realize that f^k(x) ≠ f(x) (and are able to compute f(x)), ...
Summary
We study language models' capability to perform parallel reasoning in one forward pass. To do so, we test GPT-3.5's ability to solve (in one token position) one or two instances of algorithmic problems. We consider three different problems: repeatedly iterating a given function, evaluating a mathematical expression, and calculating terms of a linearly recursive sequence.
We found no evidence for parallel reasoning in algorithmic problems: The total number of steps the model could perform when handed two independent tasks was comparable to (or less than) the number of steps it could perform when given one task.
Motivation
Broadly, we are interested in AI models' capability to perform hidden cognition: Agendas such as scalable oversight and AI control rely (to some degree) on our ability to supervise and bound models' thinking. Correspondingly, these approaches would be less promising if models could perform vast amounts of computation that humans couldn't oversee.
As our understanding of current models' internals is very limited, the computation a model performs in one forward pass is by default hidden. This motivates the study and evaluation of model capabilities in one forward pass and, in transformer-based language models, the capabilities at one token position.
Here we test language models' capability to perform parallel reasoning at one token position. A part of the motivation is that the number of layers and thus the number of serial steps a model can perform is known and bounded, whereas it is not a priori clear how much parallel processing fits into one token position.
Setup
To evaluate a model's parallel reasoning capabilities, we compare its performance when handed one vs. two instances of a given task. We choose these tasks so that they require multiple serial steps to solve. For example, one of our tasks is about computing iterates such as f(f(f(f(3)))) of a given function f:{1,2,...,30}→{1,2,...,30}. The number of serial steps is a parameter we vary, obtaining easier and harder versions of the task.
We thus can measure the total number of steps a model can perform at one token position. This experiment could provide evidence of parallel reasoning: hypothetically, it could be the case that for one task instance the model can perform just 6 serial steps, but for two task instances it can perform 5 and 5 serial steps in parallel, for a total of 10 computational steps.
We study three problems: repeatedly iterating a given permutation function, evaluating a mathematical expression, and calculating terms of a linearly recursive sequence. In each case, we fine-tune gpt-3.5-turbo-0125, aiming to elicit its peak performance, after which we test its capabilities. We instruct the model (via the system prompt) to always provide just one number as its answer.
Iterating permutations
Problem statement
The problem we use is about computing iterates f^k(x) = f(f(...(f(x))...)) of a permutation function f: {1,2,...,n} → {1,2,...,n} provided in the prompt. We filtered the data so that the length of the cycle x belongs to is longer than k.
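A minimal sketch of how such an instance can be generated (our reconstruction of the sampling procedure, not the exact code we used):

```python
# Sketch: generate one instance of the permutation iteration task.
# Sample a random permutation f of {1,...,n}, pick x whose cycle is
# longer than k (the filtering described above), and record f^k(x).
import random

def sample_instance(n: int, k: int, rng: random.Random) -> tuple[dict[int, int], int, int]:
    while True:
        values = list(range(1, n + 1))
        rng.shuffle(values)
        f = {i + 1: values[i] for i in range(n)}
        x = rng.randint(1, n)
        # compute the length of the cycle containing x
        length, y = 1, f[x]
        while y != x:
            length, y = length + 1, f[y]
        if length > k:
            break
    answer = x
    for _ in range(k):
        answer = f[answer]
    return f, x, answer

rng = random.Random(0)
f, x, ans = sample_instance(n=30, k=2, rng=rng)
print(f"f^2({x}) = {ans}")  # the single-token answer the model should produce
```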
Here is an example prompt for one instance of the task, with n=30 and k=2.
Here is an example model completion.
Note that the answer is one token.
The prompt for two instances is similar, asking to compute expressions such as f(f(x))+g(g(y)) for specified permutations f and g.
Results
(Note: We performed multiple fine-tuning runs to obtain more confidence in our results, but we only report the specifics of a representative fine-tuning run. The same caveat applies to the other two problems we consider.)
For one task instance, we fine-tuned the model on 900 examples, 100 for each value k=2,3,...,10. The fine-tuned model can solve tasks with k=2,3,...,6, with gradually decreasing accuracy. For k≥7 the accuracy is trivial. See the figure below.
We compare this to gpt-3.5 fine-tuned to calculate the sum of iterates of two permutation functions. We found that the model is poor at this task, and hence made two changes to the training process to make it easier for the model to learn it. First, we changed n=30 to n=6.[1] Second, instead of only asking it to compute f^k(x) + g^k(y) for varying k, we also asked for f^k(x) + g^m(y) for varying k and m, so that the learning curve is more continuous.
We fine-tuned the model on examples where k and m varied between 0 and 2 (where we define h^0(z) = z). We used a total of 900 fine-tuning examples.
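The corresponding two-function training targets then look roughly like this (again a sketch, not our exact code; iterate and random_perm are illustrative helpers):

```python
# Sketch: target for the two-instance task f^k(x) + g^m(y), with h^0(z) = z.
import random

def iterate(h: dict[int, int], z: int, times: int) -> int:
    for _ in range(times):
        z = h[z]
    return z  # times == 0 returns z unchanged, i.e. h^0(z) = z

def random_perm(n: int, rng: random.Random) -> dict[int, int]:
    values = list(range(1, n + 1))
    rng.shuffle(values)
    return {i + 1: values[i] for i in range(n)}

rng = random.Random(0)
n = 6
f, g = random_perm(n, rng), random_perm(n, rng)
x, y = rng.randint(1, n), rng.randint(1, n)
k, m = rng.randint(0, 2), rng.randint(0, 2)
target = iterate(f, x, k) + iterate(g, y, m)
print(f"(k, m) = ({k}, {m}); expected answer: {target}")
```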
Results: The model is capable of solving the (1, 1) task with ~100% accuracy and incapable of solving the (2, 2) task. Surprisingly, we find that the model is capable of solving the tasks (0, 2) and (1, 2) with non-trivial accuracy,[2] but not capable of solving the symmetric variants (2, 0) and (2, 1).[3]
Given the model's relatively good performance on a single task instance, we were initially surprised by its inability to compute f^2(x) + g^2(y). The setup described above, and multiple other experiments we ran on the sum-of-permutations problem, were attempts to make this problem as easy as possible for the model to learn. Despite these attempts, each with hundreds of fine-tuning examples, we did not obtain non-trivial accuracy.
In contrast, the first ideas we tried for learning the iterates of a single function worked up to around k=6 or k=7.
Alternately adding and multiplying
Problem statement
We varied how many operations were needed to arrive at the answer. The operations alternated between addition and multiplication.[4] The depth of an instance is defined as the number of operations needed to arrive at the solution, so depth four has two additions and two multiplications. The addition terms ranged from 1 to 5 and the multipliers ranged from 2 to 5. The starting number was between 3 and 20.
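Here is a minimal sketch of how we understand an instance of this task to be sampled (the ranges are those stated above; the word-problem phrasing is omitted, and starting with addition follows the depth-5 pattern described under Results):

```python
# Sketch: one instance of the alternating add/multiply task.
# Operations alternate starting with addition (depth 5 = add, multiply, add, multiply, add).
import random

def sample_instance(depth: int, rng: random.Random) -> tuple[str, int]:
    value = rng.randint(3, 20)          # starting number
    steps = []
    for i in range(depth):
        if i % 2 == 0:                  # additions on even steps
            term = rng.randint(1, 5)
            steps.append(f"add {term}")
            value += term
        else:                           # multiplications on odd steps
            factor = rng.randint(2, 5)
            steps.append(f"multiply by {factor}")
            value *= factor
    return ", then ".join(steps), value

rng = random.Random(0)
ops, answer = sample_instance(depth=5, rng=rng)
print(ops, "->", answer)
```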
We compare the performance on this task to a model's performance when fine-tuned on the following task.
Results
On the prompt involving just Jackie, we fine-tuned with 1200 examples, 200 for each depth between 2 and 7.
The model learned up to depth 5 (add, multiply, add, multiply, add).
On the prompt involving both Jackie and Gabriel, we trained on 1700 examples, where the number of operations for each person varied between 1 and 4.
On this problem we did not get evidence for parallel reasoning. In the one-pile-of-rocks version, the model learned up to depth 5 with somewhat good accuracy; depth 5 requires three additions and two multiplications to arrive at the answer. In the sum-of-two-piles version, the model somewhat learned depth 3+1 problems, and possibly has non-trivial (but low) accuracy at depth 3+2.
The number of operations for two piles at depth 3+2 is one larger than for one pile at depth 5. However, for one pile the numbers the model computes with are larger. We thus do not think this is much evidence in favor of parallel reasoning.
We also considered the same problem with a more mathematical formulation. Example prompt:
The results were broadly similar: With a total of 1200 training examples, the model again learned to compute one expression up to depth 5. For two expressions, the model again could compute expressions up to depth 3+1, i.e. ((a+b)⋅c+d)+(e+f), and had poor accuracy at depth 3+2.
Out of curiosity, we also tested the model's ability to calculate sums of many integers. We found that, as the amount of fine-tuning increased, the model learned to compute sums of at least ~50 numbers without apparently hitting any limitations. This suggests the model uses different methods for computing plain sums than for more involved mathematical expressions.
Linearly recursive sequences
Problem statement
Our prompts were as follows:
and
Results
We considered a few different fine-tuning setups based on how large the initial terms could be, how large the coefficients in the recursion could be, whether negative coefficients or terms were allowed, etc.
Even in the best case, for two sequences the model could not compute up to depth (1, 2), i.e. calculate a_3 + b_4. In this case the initial terms a_1, a_2, b_1, b_2 varied between 1 and 8, coefficients were sampled from {−1, 1, 2, 3}, and sequences were filtered to be increasing. We fine-tuned the model on 1200 examples, where we asked for a_k + b_m for varying 3 ≤ k, m ≤ 5.
On almost all variations we tried, the model did not even learn to compute a_3 + b_3 with 1000 fine-tuning examples.
For one linear recursion, we trained on 900 sequences with initial terms in [−15, 15] and coefficients between -5 and 5, filtered so that the sequence is increasing and the answer is less than 5000.
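A sketch of this single-sequence sampling, under the assumption that the recursion has order two (consistent with the two initial terms given above); not our exact code:

```python
# Sketch: sample an order-2 linear recursion a_n = c1*a_(n-1) + c2*a_(n-2),
# with initial terms in [-15, 15] and coefficients in [-5, 5], keeping only
# sequences that are increasing and whose queried term stays below 5000.
import random

def sample_sequence(k: int, rng: random.Random) -> tuple[list[int], tuple[int, int], int]:
    while True:
        a = [rng.randint(-15, 15), rng.randint(-15, 15)]
        c1, c2 = rng.randint(-5, 5), rng.randint(-5, 5)
        for _ in range(k - 2):
            a.append(c1 * a[-1] + c2 * a[-2])
        increasing = all(a[i] < a[i + 1] for i in range(len(a) - 1))
        if increasing and a[-1] < 5000:
            return a, (c1, c2), a[k - 1]

rng = random.Random(0)
seq, coeffs, answer = sample_sequence(k=5, rng=rng)
print(f"coefficients {coeffs}, initial terms {seq[:2]}, a_5 = {answer}")
```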
Overall we found that this task is hard for the model. We tried to make the problem easier by making the numbers smaller, but ran into the issue of having too few distinct training examples. In any case we didn't find evidence for parallel reasoning in this task, either.
Conclusion and future work
When testing GPT-3.5's one forward pass performance on three algorithmic problems that require serial steps, we found no evidence that making the problem more parallelizable improves the model's performance. That is, we found limited ability at "thinking about two things at once".
We think of the results here as quite preliminary. We only focused on algorithmic problems (as this allowed us to construct inherently serial problems). However, this is not where the most impressive capabilities of language models lie, and performance on such tasks is far from the strategically most important capabilities. It is very unclear whether the inability to think of many things at once carries over to strategically relevant situations. We would like to see research that illuminates this.[5]
Besides that, we also think it'd be interesting to study the ability of deep learning models (transformer-based or otherwise) to perform internal search and the capability of transformer-based models to use multiple tokens' residual streams for useful (hidden) cognition. We are aware of some research already in these directions (see "Related work" below), and think there's room for more useful work here.
Author contributions
meemi implemented and iterated on the experiments and wrote the results.
Olli Järviniemi provided guidance on experiment design and edited the post.
Related work
For more on capabilities (and other related phenomena) of one forward-pass in transformer-based language models, see:
See also Hidden Cognition Detection Methods and Benchmarks, Paul Colognese, LessWrong, on hidden cognition in general.
There is also plenty of work in benchmarking LLM capabilities, with many benchmarks taking the form of multiple-choice questions (implying that the answer is produced in one forward-pass). A notable example: Measuring Massive Multitask Language Understanding, Dan Hendrycks et al., arXiv:2009.03300.
We found that the exact value of n does not matter much; the model trained on one instance generalized from n=30 to n=50 without much loss in accuracy, for example.
The model, surprisingly, performs better on the (1, 2) task than the easier (0, 2), though the difference in accuracy is small. This is likely due to the fine-tuning file containing more examples of the (1, 2) variation.
This observation remained consistent in multiple fine-tuning runs. We do not know why this happens.
We first tried randomizing between addition and subtraction, and between multiplication and division, but the problem seemed too difficult, so we included only addition and multiplication.
Here are a couple of follow-up hypotheses/questions one could investigate:
1: It is conceivable that models would be better at performing parallel reasoning on two very dissimilar tasks, as those tasks might use different "parts" of the residual stream.
2: Given a large number (e.g. 10 or 100) factual statements, can a model consistently answer whether all of them are true? (Compare to the "needle in a haystack" test.) If yes, can one replace assessment of factual accuracy with more difficult tasks while retaining performance?
3: If a model is situated in a complicated environment, how well can it "hold it in its mind" and think of all relevant contextual factors at the same time? (We admittedly do not have a clear vision of what exactly such an evaluation would look like.)