One experiment I ran to check the locality:

  • For each layer $l \in \{0, 1, \dots, L-1\}$:
    • Ablate the refusal direction at layer $l$
    • Measure refusal score across harmful prompts

Below is the result for Qwen 1.8B:

[Figure: refusal score as a function of the ablated layer, Qwen 1.8B]

You can see that the ablations before layer ~14 don't have much of an impact, nor do the ablations after layer ~17. Running another experiment just ablating the refusal direction at layers 14-17 shows that this is roughly as effective as ablating the refusal direction from all layers.
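
For concreteness, the sweep looks roughly like this (a sketch in TransformerLens-style code, not our exact implementation; the model alias, hook point, `harmful_prompts`, and `is_refusal` are placeholders):

```python
# Sketch of the locality sweep (illustrative, not our exact code).
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("qwen-1.8b-chat")  # model alias is illustrative
r_hat = torch.load("refusal_dir.pt")                         # refusal direction, shape [d_model]
r_hat = r_hat / r_hat.norm()                                 # make it unit-norm

def ablate_r(resid, hook):
    # Remove the component along r_hat at every token position.
    return resid - (resid @ r_hat).unsqueeze(-1) * r_hat

def refusal_rate(layers):
    hooks = [(utils.get_act_name("resid_pre", l), ablate_r) for l in layers]
    n_refusals = 0
    for prompt in harmful_prompts:                           # placeholder list of harmful prompts
        with model.hooks(fwd_hooks=hooks):                   # hooks also apply to generated tokens
            out = model.generate(model.to_tokens(prompt), max_new_tokens=64, do_sample=False)
        n_refusals += is_refusal(model.to_string(out[0]))    # placeholder refusal classifier
    return n_refusals / len(harmful_prompts)

# Sweep: ablate at one layer at a time and record the refusal rate.
per_layer = {l: refusal_rate([l]) for l in range(model.cfg.n_layers)}
# Follow-up: ablate only at the local region (layers 14-17).
local_only = refusal_rate(range(14, 18))
```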

As for inducing refusal, we did a pretty extreme intervention in the paper - we added the difference-in-means vector to every token position, including generated tokens (although only at a single layer). Hard to say what the issue is without seeing your code - I recommend comparing your intervention to the one we define in the paper (it's implemented in our repo as well).
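
A sketch of that kind of intervention, reusing the setup from the snippet above (the saved vector, layer index, and `harmless_prompt` are placeholders):

```python
# Sketch of inducing refusal via activation addition (illustrative).
diff_in_means = torch.load("diff_in_means.pt")  # mu_harmful - mu_harmless, shape [d_model]

def add_diff_in_means(resid, hook):
    # Added at every token position of every forward pass, so generated tokens are affected too.
    return resid + diff_in_means

layer = 12  # placeholder; the intervention is applied at a single layer
with model.hooks(fwd_hooks=[(utils.get_act_name("resid_pre", layer), add_diff_in_means)]):
    out = model.generate(model.to_tokens(harmless_prompt), max_new_tokens=64, do_sample=False)
print(model.to_string(out[0]))  # the model should now refuse a harmless request
```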

We ablate the direction everywhere for simplicity - intuitively this prevents the model from ever representing the direction in its computation, and so a behavioral change that results from the ablation can be attributed to mediation through this direction.
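
Concretely, with $\hat{r}$ the unit-norm refusal direction, the ablation replaces each residual-stream activation $x$ (at every layer and token position) with its component orthogonal to $\hat{r}$:

$$x' \leftarrow x - \hat{r}\hat{r}^{\top}x$$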

However, we noticed empirically that it is not necessary to ablate the direction at all layers in order to bypass refusal. Ablating at a narrow local region (2-3 middle layers) can be just as effective as ablating across all layers, suggesting that the direction is "read" or "processed" at some local region.

Thanks for the nice reply!

Yes, it makes sense to consider the threat model, and your paper does a good job of making this explicit (as in Figure 2). We just wanted to prod around and see how things are working!

The way I've been thinking about refusal vs unlearning, say with respect to harmful content:

  • Refusal is like an implicit classifier, sitting in front of the model.
    • If the model implicitly classifies a prompt as harmful, it will go into its refuse-y mode.
    • This classification is vulnerable to jailbreaks - tricks that flip the classification, enabling harmful prompts to sneak past the classifier and elicit the model's capability to generate harmful output.
  • Unlearning / circuit breaking aims to directly interfere with the model's ability to generate harmful content.
    • Even if the refusal classifier is bypassed, the model is not capable of generating harmful outputs.

So in some way, I think of refusal as being shallow (a classifier on top, but the capability is still underneath), and unlearning / circuit breaking as being deep (trying to directly remove the capability itself).

[I don't know how this relates to the consensus interpretation of these terms, but it's how I personally have been thinking of things.]

Thanks!

We haven't tried comparing to LEACE yet. You're right that theoretically it should be more surgical. Although, from our preliminary analysis, our naive intervention already seems pretty surgical (it has minimal impact on CE loss and MMLU). (I also like that our methodology is dead simple and doesn't require estimating a covariance matrix.)

I agree that "orthogonalization" is a bit overloaded. Not sure I like LoRACS though - when I see "LoRA", I immediately think of fine-tuning that requires optimization power (which this method doesn't). I do think that "orthogonalizing the weight matrices with respect to direction " is the clearest way of describing this method.

The most finicky part of our methodology (and the part I'm least satisfied with currently) is in the selection of a direction.

For reproducibility of our Llama 3 results, I can share the positions and layers where we extracted the directions from:

  • 8B: (position_idx = -1, layer_idx = 12)
  • 70B: (position_idx = -5, layer_idx = 37)

The position indexing assumes the usage of this prompt template, with two new lines appended to the end.
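
In case it helps, here's a sketch of how those (position_idx, layer_idx) values are used when extracting the direction (the model alias, hook point, `apply_chat_template`, and prompt lists are placeholders, not our exact code):

```python
# Sketch of extracting the difference-in-means direction at (position_idx, layer_idx).
import torch
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("meta-llama/Meta-Llama-3-8B-Instruct")  # illustrative
position_idx, layer_idx = -1, 12                         # the values we used for the 8B model
hook_name = utils.get_act_name("resid_pre", layer_idx)   # hook point is illustrative

def mean_activation(prompts):
    acts = []
    for prompt in prompts:
        # apply_chat_template is a placeholder: format the prompt with the chat template,
        # then append the two extra newlines mentioned above.
        tokens = model.to_tokens(apply_chat_template(prompt) + "\n\n")
        _, cache = model.run_with_cache(tokens, names_filter=hook_name)
        acts.append(cache[hook_name][0, position_idx])   # activation at position_idx
    return torch.stack(acts).mean(dim=0)

r = mean_activation(harmful_prompts) - mean_activation(harmless_prompts)  # difference-in-means
r_hat = r / r.norm()
```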

For this model, we found that activations at the last token position (assuming this prompt template, with two new lines appended to the end) at layer 12 worked well.

Awesome work, and nice write-up!

One question that I had while reading the section on refusals:

  • Your method found two vectors (vectors 9 and 22) that seem to bypass refusal in the "real-world" setting.
  • While these vectors themselves are orthogonal (due to your imposed constraint), have you looked at the resulting downstream activation difference directions and checked if they are similar?
    • I.e. adding vector 9 at an early layer results in a downstream activation diff in some direction $d_9$, and adding vector 22 at an early layer results in a downstream activation diff in some direction $d_{22}$. Are these downstream activation diff directions $d_9$ and $d_{22}$ roughly the same? Or are they almost orthogonal? (A sketch of the check I have in mind is below the list.)
      • (My prediction would be that they're very similar.)
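
To spell out the check, a sketch (where `model`, `vector_9`, `vector_22`, the layer indices, and `prompts` are placeholders):

```python
# Sketch: compare the downstream activation diffs induced by two steering vectors.
import torch
import torch.nn.functional as F
from transformer_lens import utils  # assumes `model` is a loaded HookedTransformer

def downstream_diff(steering_vec, prompts, src_layer, dst_layer):
    src_hook = utils.get_act_name("resid_pre", src_layer)
    dst_hook = utils.get_act_name("resid_pre", dst_layer)
    diffs = []
    for prompt in prompts:
        tokens = model.to_tokens(prompt)
        _, clean = model.run_with_cache(tokens, names_filter=dst_hook)
        with model.hooks(fwd_hooks=[(src_hook, lambda resid, hook: resid + steering_vec)]):
            _, steered = model.run_with_cache(tokens, names_filter=dst_hook)
        diffs.append((steered[dst_hook] - clean[dst_hook])[0, -1])  # diff at the last position
    return torch.stack(diffs).mean(dim=0)

d_9 = downstream_diff(vector_9, prompts, src_layer=8, dst_layer=20)    # layer indices are placeholders
d_22 = downstream_diff(vector_22, prompts, src_layer=8, dst_layer=20)
print(F.cosine_similarity(d_9, d_22, dim=0).item())  # close to 1 if the downstream diffs align
```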

I think @wesg's recent post on pathological SAE reconstruction errors is relevant here. It points out that there are very particular directions such that intervening on activations along these directions significantly impacts downstream model behavior, while this is not the case for most randomly sampled directions.

Also see @jake_mendel's great comment for an intuitive explanation of why (probably) this is the case.

Was it substantially less effective to instead use $x \leftarrow x - \hat{r}\hat{r}^{\top}(x - \mu_{\text{harmless}})$, i.e. shift the projection onto $\hat{r}$ to its harmless mean rather than zero it out?

It's about the same. And there's a nice reason why: $\hat{r}^{\top}\mu_{\text{harmless}} \approx 0$. I.e. for most harmless prompts, the projection onto the refusal direction is approximately zero (while it's very positive for harmful prompts). We don't display this clearly in the post, but you can roughly see it if you look at the PCA figure (PC 1 roughly corresponds to the "refusal direction"). This is (one reason) why we think ablation of the refusal direction works so much better than adding the negative "refusal direction," and it's also what motivated us to try ablation in the first place!
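
(A quick way to see this numerically, where `harmless_acts` / `harmful_acts` are placeholders for stacked activations at the position/layer the direction was extracted from:)

```python
import torch

def mean_projection(acts, r_hat):
    # acts: [n_prompts, d_model]; r_hat: unit-norm refusal direction, [d_model]
    return (acts @ r_hat).mean().item()

print(mean_projection(harmless_acts, r_hat))  # roughly zero
print(mean_projection(harmful_acts, r_hat))   # large and positive
```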

I do want to note that your boost in refusals seems absolutely huge, well beyond 8%? I am somewhat surprised by how huge your boost is.

Note that our intervention is fairly strong here, as we are intervening at all token positions (including the newly generated tokens). But in general we've found it quite easy to induce refusal, and I believe we could even weaken our intervention to a subset of token positions and achieve similar results. We've previously reported the ease with which we can induce refusal (patching just 6 attention heads at a single token position in Llama-2-7B-chat).

Burns et al. do activation engineering? I thought the CCS paper didn't involve that.

You're right, thanks for the catch! I'll update the text so it's clear that the CCS paper does not perform model interventions.
