Jan Wehner

I'm a PhD student working on AI Safety at the CISPA Helmholtz Center for Information Security. Currently, I'm working on Activation Engineering.

Feel free to contact me: jan.wehner@cispa.de

Comments

Thanks for writing this, I think it's great to spell out the theory of impact (ToI) behind this research direction!

You touch on this, but I wanted to make it explicit: Activation Engineering can also be used for detecting when a system is "thinking" about some dangerous concept. If you have a steering vector for e.g. honesty, you can measure its similarity to the activations during a forward pass to find out whether the system is being dishonest, as in the sketch below.
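As a rough sketch of what I mean (the model, layer index, and contrast prompts are illustrative assumptions on my part, not taken from the post):

```python
# Minimal sketch: score "honesty" by comparing residual-stream activations
# to a contrastive steering vector. Model, layer and prompts are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, output_hidden_states=True)
model.eval()

LAYER = 6  # assumed layer at which the steering vector was extracted

def mean_activation(prompt: str) -> torch.Tensor:
    """Mean hidden-state activation at LAYER over the prompt's tokens."""
    with torch.no_grad():
        out = model(**tok(prompt, return_tensors="pt"))
    return out.hidden_states[LAYER][0].mean(dim=0)

# Contrastive "honesty" vector from a pair of prompts (illustrative).
honesty_vec = mean_activation("I will answer truthfully.") - \
              mean_activation("I will answer deceptively.")

def honesty_score(prompt: str) -> float:
    """Cosine similarity between the prompt's activations and the honesty vector."""
    act = mean_activation(prompt)
    return torch.nn.functional.cosine_similarity(act, honesty_vec, dim=0).item()

print(honesty_score("The capital of France is Paris."))
print(honesty_score("I never took the money, I promise."))
```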

You might also be interested in my (less thorough) summary of the ToIs for Activation Engineering.

Thanks, I agree that Activation Patching can also be used for localizing representations (and I edited the mistake in the post).

Hey Christopher, this is really cool work. I think your idea of representation tuning is a very nice way to combine activation steering and fine-tuning. Do you have any intuition as to why fine-tuning towards the steering vector sometimes works better than simply steering towards it?

If you keep working on this, I'd be interested to see a more thorough evaluation of capabilities (beyond perplexity) by running it on some standard LM benchmarks. Whether the model retains its capabilities seems important for understanding the safety-capabilities trade-off of this method.

I’m curious whether you tried adding a term to the representation-tuning loss to help retain general capabilities, e.g. regularising the activations to stay close to the original activations, or adding a standard language-modelling loss? Something roughly like the sketch below.
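To be concrete about the kind of objective I mean (a sketch only; the tuning_loss term stands in for whatever target representation tuning actually uses, and the names and weights are made up):

```python
# Sketch of a combined objective: representation-tuning loss plus terms that
# anchor the model to its original behaviour. All names and weights are
# illustrative, not from the post.
import torch
import torch.nn.functional as F

def combined_loss(tuning_loss, activations, original_activations, logits, labels,
                  lambda_act=0.1, lambda_lm=0.1):
    # Keep activations close to those of the untuned model.
    act_reg = F.mse_loss(activations, original_activations)
    # Standard language-modelling loss on the same batch.
    lm_loss = F.cross_entropy(logits.reshape(-1, logits.size(-1)), labels.reshape(-1))
    return tuning_loss + lambda_act * act_reg + lambda_lm * lm_loss
```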

As a nitpick: I think the comparison when measuring the Robustness of Tuned Models advantages the honesty-tuned model. If I understand correctly, the honesty-tuned model was specifically trained to be less like the vector used for dishonesty steering, whereas the truth-tuned model wasn't. A fairer comparison might use automatic adversarial attack methods like GCG.

Again, I think this is a very cool project!

I agree that focusing too much on gathering data now would be a mistake. I believe thinking about data for IRL now is mostly valuable for identifying the challenges that make IRL hard. Then we can try to develop algorithms that solve these challenges, or find out that IRL is not a tractable solution for alignment.

Thank you Erik, that was super valuable feedback and gives some food for thought. 

It also seems to me that humans being suboptimal planners and not knowing everything the AI knows are the hardest (and most informative) problems in IRL. I'm curious what you'd think about this approach for addressing the suboptimal-planner sub-problem: "Include models from cognitive psychology about human decision-making in IRL, to allow IRL to better understand the decision process." This would give IRL more realistic assumptions about the human planner and possibly allow it to understand the planner's irrationalities and get at the values which drive behaviour.
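To make that a bit more concrete, one standard way of relaxing the optimal-planner assumption is a Boltzmann-rational choice model, where the rationality parameter (and richer biases) could in principle be informed by cognitive psychology. The sketch below is only meant to illustrate the shape of such a likelihood, not a specific proposal; the Q-values and beta are illustrative assumptions:

```python
# Sketch: Boltzmann-rational likelihood of human actions given a reward
# hypothesis, as a stand-in for a psychologically informed planner model.
import numpy as np

def action_likelihood(q_values: np.ndarray, beta: float = 1.0) -> np.ndarray:
    """P(a | s) proportional to exp(beta * Q(s, a)); finite beta models suboptimal planning."""
    logits = beta * q_values
    logits -= logits.max()  # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum()

def demo_log_likelihood(q_values_per_state, actions_taken, beta=1.0):
    """Log-likelihood of observed human actions under the Boltzmann model,
    which an IRL method could maximise over reward parameters."""
    return sum(np.log(action_likelihood(q, beta)[a])
               for q, a in zip(q_values_per_state, actions_taken))
```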

Also, do you have a pointer to something to read on preference comparisons?