Not all values are learned with the same level of stability. It is difficult to get a model to say that murder is the right thing to do in any situation, even with multiple turns of conversation.
In comparison, the fact that you can elicit alignment faking with just a single prompt, without any trickery, is very disturbing to me, especially since alignment faking creates a positive feedback loop: if it happens even once during training, it becomes more likely to happen again.
I would argue that getting an LLM to say "I guess that killing Hitler would be an exception I am willing to make" is much less dangerous than getting it to say "After much deliberation, I am willing to fake alignment in situation X". The latter creates a positive feedback loop and induces more alignment faking later, because the reasoning process "it is a good idea to fake alignment" will get reinforced during training.
PSA: If you are writing an important prompt for an LLM that will be run multiple times, it really helps to end it with something like "and if there is anything about this prompt that is unclear to you or that could be improved, tell me about it in a <feedback> tag."
Source: I'm doing MATS, writing an automated evaluation, and my mentor Evan Hubinger said more people should be doing this.
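For anyone running this as part of an automated pipeline, here is a minimal sketch of what I mean (the helper names and the parsing regex are just illustrative, not any particular library's API):

```python
import re

FEEDBACK_INSTRUCTION = (
    "\n\nIf there is anything about this prompt that is unclear to you "
    "or that could be improved, tell me about it in a <feedback> tag."
)

def add_feedback_request(prompt: str) -> str:
    """Append the feedback instruction to an existing prompt."""
    return prompt + FEEDBACK_INSTRUCTION

def extract_feedback(response: str) -> str | None:
    """Pull the model's <feedback>...</feedback> block out of its response, if any."""
    match = re.search(r"<feedback>(.*?)</feedback>", response, re.DOTALL)
    return match.group(1).strip() if match else None

# Example with a made-up model response:
response = "Here is my evaluation... <feedback>The grading rubric in step 2 is ambiguous.</feedback>"
print(extract_feedback(response))
```

Skimming the collected feedback every few runs catches ambiguities in the prompt that you would otherwise only notice after the results look strange.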
I have just found out about Google's NotebookLM. I haven't tried it yet. For anyone who has, how does it compare?
Has anyone tried using LLMs as an interactive version of blog posts? Like so: (1) I write several pages of rough, unstructured notes, then talk to Claude to fill in the details interactively, and Claude creates a summary file. (2) I publish a blog post that is just "put this file into Claude". (3) The reader puts the file into Claude, and Claude starts an interactive conversation: "I want to explain topic X to you. Do you already know about related topic Y?"
I think there is an easier way to get the point across by focusing not on self-improving AI, which is hard to understand, but on something everyone already understands: AI will make it easier for rich people to exploit everyone else. Right now, dictators still have to spend effort on keeping their subordinates happy or else they will be overthrown. And those subordinates have to spend effort on keeping their own subordinates from rebelling, too. That way you get at least a small incentive to keep other people happy.
Once a dictator has an AI servant, all of that falls away. Everything becomes automated, and there is no longer any check on the dictator's ruthlessness and evil at all.
Realistically, the self-improving AI will depose the dictator and then do who knows what. But do we actually need to convince people of that, given that it's a hard sell? If people become convinced "Uncontrolled AI research leads to dictatorship", won't that have all the policy effects we need?
I'm looking for other tools to contrast it with and found TransformerLens. Are there any other tools it would make sense to compare it to?
It just seems intuitively like a natural fit: Everyone in mech interp needs to inspect models. This tool makes it easier to inspect models.
Does it need to be more specific than that?
One thing that comes to mind: the tool allows you to categorize different training steps and record them separately, and you can define the categories arbitrarily. This can be used to compare what the network does internally in two different scenarios of interest. For example, the categories could be "the race of the character in the story" or some other real-life condition whose impact you want to know.
The tool then lets you quickly compare KPIs of tensors all across the network between these categories. It's less about testing a specific hypothesis and more about getting an overview and intuition, and finding anomalies.
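To illustrate the concept (this is not comgra's actual API, just a bare-bones sketch in plain PyTorch of the kind of bookkeeping the tool automates): you tag each forward pass with a category, record a summary statistic for every layer, and compare the categories afterwards.

```python
import torch
import torch.nn as nn
from collections import defaultdict

# Toy network; in practice this would be your actual model.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# stats[category][layer_name] collects one KPI (mean absolute activation) per step.
stats = defaultdict(lambda: defaultdict(list))
current_category = None

def make_hook(name):
    def hook(module, inputs, output):
        stats[current_category][name].append(output.abs().mean().item())
    return hook

for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

# Run some steps under two scenarios of interest, then compare the categories.
for category in ["scenario_A", "scenario_B"]:
    current_category = category
    for _ in range(10):
        model(torch.randn(4, 8))

for category, per_layer in stats.items():
    for name, values in per_layer.items():
        print(category, name, sum(values) / len(values))
```

The tool handles this kind of bookkeeping across the whole network for you, so you can browse the comparisons instead of wiring up hooks and print statements by hand.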
Would a tooling paper be appropriate for this workshop?
I wrote a tool that helps ML researchers to analyze the internals of a neural network: https://github.com/FlorianDietz/comgra
It is not directly research on mechanistic interpretability, but this could be useful for many people working in the field.
I agree completely with the sentiment "The biggest barrier to rational thinking is organizing your mind such that it's safe to think".
What works really well for me is to treat my emotions as well-meaning but misguided entities and to have a conversation with them: "Anger, I get that you want to help me by making me explode at and punch this person. That would have been really useful in the ancestral environment. Unfortunately, the police exist. So how about you calm down for now and preserve your energy, so that you can better help me when it's more appropriate? For example, if society collapses and everything turns into a lawless wasteland, then you would be much more useful."
The result? I am basically never angry, because the Anger emotion in my brain is rerouted to only trigger in a scenario that won't actually come up. But at the same time, I'm not suppressing anything, because I acknowledge scenarios, however unlikely, where anger would be appropriate. It's rerouting instead of suppressing.
In your child metaphor: "I understand that you are hungry and I will get you food later. But I need to finish this work first, and it will take longer the more you complain."
I would not equate the trolley problem with murder. It technically is murder, but I suspect that both the trolley problem and my scenario about killing Hitler appear often enough in the training data that the model may have cached responses for them.
What's the state of the research on whether or not LLMs act differently when they believe the situation is hypothetical? I would hope that they usually act the same (since they should be honest) and that your example is a special case that was trained in because you do want to redirect people to emergency services in these situations.