One question I have about the Mechanistic Interpretability safety case sketch from Roger Grosse: How do you actually find features of high concern?
Features of High Concern are things like "intent to carry out a harmful plan". By definition, they very rarely activate on benign deployment outputs. This must be the case as "if the features [of high concern] activate more than a handful of times in deployment (internal or external), we discontinue the model and roll back to a weaker model." [M1.1]
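For concreteness, a minimal sketch of what that [M1.1] monitoring criterion could look like as a deployment-time check (the activation budget and threshold values below are my own assumptions, not numbers from the sketch):

```python
from collections import Counter

# Hypothetical monitor for [M1.1]: count deployment-time activations of each
# feature of high concern and flag the model for rollback once any feature
# exceeds a small budget.
ACTIVATION_BUDGET = 5       # "a handful of times" (assumed value)
ACTIVATION_THRESHOLD = 0.0  # activation above this counts as "firing" (assumed)

counts: Counter[str] = Counter()

def record_activations(feature_acts: dict[str, float]) -> bool:
    """Update counts with one output's feature activations.

    Returns True once the rollback criterion has been tripped.
    """
    for name, act in feature_acts.items():
        if act > ACTIVATION_THRESHOLD:
            counts[name] += 1
    return any(c > ACTIVATION_BUDGET for c in counts.values())
```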
They propose to find these features [S1.1] by pe...
Just checking -- you are aware that the reasoning traces shown in the UI are a summarized version of the full reasoning trace (which is not user accessible)?
See "Hiding the Chains of Thought" here.
See also their system card focusing on safety evals: https://openai.com/index/openai-o1-system-card/
Surprising misuse- and alignment-relevant excerpts:
METR had only ~10 days to evaluate.
Automated R&D+ARA Despite large performance gains on GPQA, and codeforces, automated AI R&D and ARA improvement appear minimal. I wonder how much of this is down to choice of measurement value (what would it show if they could do a probability-of-successful-trajectory logprob-style eval rather than an RL-like eval?). c.f. Fig 3 and 5. Per the system card, METR's eval is ongoing, but I worry about under-estimation here, Devin developers show extremely quick improvem...
Have you looked at how the dictionaries represent positional information? I worry that the SAEs will learn semi-local codes that intermix positional and semantic information in a way that makes things less interpretable.
To investigate this, one could take each feature and calculate the fraction of variance in its activations that is explained by token position. If this variance-explained is either ~0% or ~100% for every feature, I'd be satisfied that positional and semantic information are being well separated into separate features.
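For concreteness, a minimal sketch of that check, assuming one has collected a single feature's activations over a batch of equal-length sequences:

```python
import numpy as np

def position_variance_explained(acts: np.ndarray) -> float:
    """Fraction of one feature's activation variance explained by token position.

    acts: (n_seqs, n_positions) array of the feature's activations over a
    batch of equal-length sequences (a simplifying assumption).
    """
    total_var = acts.var()
    if total_var == 0:
        return 0.0
    # Law of total variance with equal samples per position: the variance of
    # the per-position mean activations is the part that position explains.
    return acts.mean(axis=0).var() / total_var

# A histogram of this value over all features should show mass near 0
# (purely semantic) and/or near 1 (purely positional) if the two kinds of
# information are cleanly separated.
```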
In general, I think it makes sense to spe...
Cool paper! I enjoyed reading it and think it provides some useful information on what adding carefully chosen bias vectors into LLMs can achieve. Some assorted thoughts and observations.
My summary of the paper:
Yes, I'm excited about "probing the answer to follow-up questions" as a general technique. Our results were promising, but there's a lot of iteration one could do to make the technique work better!
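For illustration, here is a minimal sketch of how such a follow-up probe might be trained, assuming cached activations at the final token of a follow-up question plus binary labels (the file names below are placeholders):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data: residual-stream activations at the last token of a
# follow-up question, with binary labels for the property being probed.
acts = np.load("followup_acts.npy")      # shape: (n_examples, d_model)
labels = np.load("followup_labels.npy")  # shape: (n_examples,)

X_train, X_test, y_train, y_test = train_test_split(
    acts, labels, test_size=0.2, random_state=0
)

probe = LogisticRegression(max_iter=1000)  # a simple linear probe
probe.fit(X_train, y_train)
print(f"held-out probe accuracy: {probe.score(X_test, y_test):.3f}")
```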
On our website you can look at the probe scores on the CoT. I don't have up-to-date numbers, but I expect it's mildly helpful for improving classification at le...