Senior Research Scientist at UK AISI working on frontier AI safety cases
Cool work! Reminds me a bit of my submission to the inverse scaling prize: https://tomekkorbak.com/2023/03/21/repetition-supression/
In practice I think using a trained reward model (as in RLHF), not fixed labels, is the way forward. Then the cost of acquiring the reward model is the same as in RLHF; the difference is primarily that PHF typically needs many more calls to the reward model than RLHF.
Thanks, I found the post quite stimulating. Some questions and thoughts:
Are LLM dynamics ergodic? I.e., is the time average equal to the average page vector?
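Concretely (using generic Markov-chain notation rather than the post's symbols), I mean something like:

$$\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} x_t \;=\; \mathbb{E}_{x \sim \pi}[x] \quad \text{for almost every trajectory } (x_1, x_2, \dots),$$

where $x_t$ is the page vector at step $t$ and $\pi$ is the chain's stationary distribution.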
One potential issue with this formalisation is that you always assume a prompt of fixed size (so you need to introduce artificial "null tokens" if the prompt is shorter) and you don't give special treatment to the token <|endoftext|>. For me, it would be more intuitive to consider LLM dynamics in terms of finite, variable-length, token-level Markov chains (until <|endoftext|>). While a fixed block size is actually used during training, the LLM is incentivised to disregard anything before <|endoftext|>. So these two prompts should induce the same distribution:

Document about cats.<|endoftext|>My name is

Document about dogs.<|endoftext|>My name is

Your formalisation doesn't account for this symmetry.
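A quick empirical check of that claim (a minimal sketch, assuming GPT-2 via HuggingFace transformers; the specific model is just for illustration): if the symmetry holds, the distance between the two next-token distributions should be small.

```python
# Compare the next-token distributions induced by the two prompts above.
# Minimal sketch; GPT-2 is an arbitrary choice of model.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def next_token_dist(prompt: str) -> torch.Tensor:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits[0, -1]  # logits for the token after the prompt
    return torch.softmax(logits, dim=-1)

p = next_token_dist("Document about cats.<|endoftext|>My name is")
q = next_token_dist("Document about dogs.<|endoftext|>My name is")
print("total variation distance:", 0.5 * (p - q).abs().sum().item())
```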
Dennett is spelled with "tt".
Note that a softmax-based LLM will always put non-zero probability on every token. So there are no strictly absorbing states. You're careful enough to define absorbing states as "once you enter, you are unlikely to ever leave", but then your toy Waluigi model is implausible. A Waluigi can always switch back to a Luigi.
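For reference, this follows directly from the definition of softmax over finite logits:

$$\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_j e^{z_j}} > 0 \quad \text{for all } i \text{ and all finite } z,$$

so every token, including one that switches a Waluigi back to a Luigi, keeps strictly positive probability.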
I don't remember where I saw that, but something as dumb as subtracting the embedding of <|bad|> might even work sometimes.
That's a good point. But if you're using a distilled, inference-bandwidth-optimised RM, annotating your training data might be a fraction of the compute needed for pretraining.
Also, the cost of annotation is constant and can be amortised over many training runs. PHF shares an important advantage of offline RL over online RL approaches (such as RLHF): being able to reuse feedback annotations across experiments. If you already have an annotated dataset, running a hyperparameter sweep on it is as cheap as standard pretraining, and in contrast with RLHF you don't need to recompute rewards.
For filtering it was the best-scoring 25%, so we effectively trained for 4 epochs.
(We used different thresholds for filtering and conditional training; note that we filter at the document level but condition at the sentence level.)
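For concreteness, a minimal sketch of the two objectives as described above (the scoring functions and thresholds are placeholders, not our exact pipeline):

```python
# Minimal sketch of filtering vs. conditional training as discussed above.
# score_doc / score_sentence stand in for a reward model; thresholds are placeholders.
GOOD, BAD = "<|good|>", "<|bad|>"

def filter_corpus(documents, score_doc, doc_threshold):
    # Filtering: keep only documents scoring above the threshold
    # (e.g. the best 25%), then train on the smaller corpus for more epochs.
    return [doc for doc in documents if score_doc(doc) >= doc_threshold]

def conditionally_tag(sentences, score_sentence, sent_threshold):
    # Conditional training: prepend a control token to each sentence based on
    # its score (e.g. <|good|> for roughly the safest 5% of sentences).
    return "".join(
        (GOOD if score_sentence(s) >= sent_threshold else BAD) + s
        for s in sentences
    )
```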
Good question! We're not sure. The fact that PHF scales well with dataset size might provide weak evidence that it would scale well with model size too.
I'm guessing that poison-pilling the <|bad|> sentences would have a negative effect on the <|good|> capabilities as well?
That would be my guess too.
Have you tested the AI's outputs when run in <|bad|> mode instead of <|good|> mode?
We did; LMs tend to generate toxic text when conditioned on <|bad|>. Though we tended to use risk-averse thresholds, i.e. we used <|good|> for only the safest ~5% of sentences and <|bad|> for the remaining 95%. So <|bad|> is not bad all the time.
Here it would be helpful to know what the AI produces when prompted by <|bad|>.
That's a good point. We haven't systematically investigated differences in capabilities between <|good|> and <|bad|> modes; I'd love to see that.
Just before public release, one could delete the <|bad|> token from the tokenizer and the model parameters, so switching to evil mode would require rediscovering that token embedding.
Yeah, you could even block the entire direction in activation space corresponding to the embedding of the <|bad|> token.
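A minimal sketch of what that could look like (assuming a HuggingFace GPT-2-style model; the hook placement and the use of the input embedding as the direction are illustrative assumptions, not something we tested):

```python
# Project the <|bad|> embedding direction out of every block's activations.
# Minimal sketch; assumes a GPT-2-style HuggingFace model with <|bad|> in its vocab.
import torch

def make_projection_hook(bad_direction: torch.Tensor):
    d = bad_direction / bad_direction.norm()  # unit vector along the <|bad|> embedding

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        # Remove the component of every activation along the <|bad|> direction.
        hidden = hidden - (hidden @ d).unsqueeze(-1) * d
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden

    return hook

def block_bad_direction(model, tokenizer, bad_token: str = "<|bad|>"):
    bad_id = tokenizer.convert_tokens_to_ids(bad_token)
    direction = model.get_input_embeddings().weight[bad_id].detach().clone()
    for block in model.transformer.h:  # GPT-2 layout; adjust for other architectures
        block.register_forward_hook(make_projection_hook(direction))
```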
Fair point, I'm using "compositional" in an informal sense different from the one in formal semantics, closer to what I called "trivial compositionality" in this paper. But I'd argue it's not totally crazy to call such preference models compositional, and that compositionality here still bears some resemblance to Montague's account of compositionality as a homomorphism: basically, you have
get_total_score(response) == sum([get_score(attribute) for attribute in decompose(response)])
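Fleshed out a little (a toy sketch; the decomposition and the per-attribute scorer are made up purely for illustration):

```python
# Toy sketch of a "trivially compositional" preference model: the score of the
# whole response is the sum of the scores of its parts. All choices are invented.
from typing import List

def decompose(response: str) -> List[str]:
    # Toy decomposition of a response into "attributes" (here: its sentences).
    return [s for s in response.split(".") if s.strip()]

def get_score(attribute: str) -> float:
    # Toy per-attribute scorer: reward politeness, lightly penalise length.
    return (1.0 if "please" in attribute.lower() else 0.0) - len(attribute.split()) / 100.0

def get_total_score(response: str) -> float:
    # The compositionality property above: total score = sum of attribute scores.
    return sum(get_score(attribute) for attribute in decompose(response))

print(get_total_score("Please keep it short. Thanks a lot."))
```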