neverix

Comments
SAE features for refusal and sycophancy steering vectors
neverix · 1y
  1. It doesn't really make sense to interpret feature activation values as log probabilities. If we did, we'd have to worry about scaling. It's also not guaranteed the score wouldn't just decrease because of decreased accuracy on correct answers.
  2. Phi seems specialized for MMLU-like problems and has an outsized score for a model its size, so I would be surprised if it's biased by the format of the question. However, using full answers instead of letters could improve raw accuracy here, because the feature we used (45142) seems to max-activate on plain text rather than multiple-choice answers, and it's somewhat surprising it does this well on multiple-choice questions. For this reason, I think using text answers would boost the feature's score but wouldn't change the relative ranking of features by accuracy. I don't know how to adapt your proposed experiment to the task of measuring how accurately a feature elicits the model's knowledge of the correct answers.
  3. This is a technical detail. We use a jax.lax.scan over the layer index and store all layers in one structure with a special named axis (a rough sketch is below). This mainly improves compilation time. Penzai has since implemented the technique in its main library.
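
For illustration, a minimal sketch of the scan-over-layers trick, not the actual code: transformer_layer, the parameter shapes, and the stacking convention below are assumptions.

```python
import jax
import jax.numpy as jnp

def transformer_layer(x, layer_params):
    """Hypothetical single-layer forward pass; stands in for the real block."""
    w, b = layer_params
    return jax.nn.gelu(x @ w + b)

def forward_all_layers(x, stacked_params):
    """Run every layer with one jax.lax.scan instead of a Python loop.

    stacked_params holds each layer's weights stacked along a leading
    "layer" axis, e.g. w: (n_layers, d, d) and b: (n_layers, d), so the
    block is traced and compiled once rather than once per layer.
    """
    def step(carry, layer_params):
        return transformer_layer(carry, layer_params), None

    final_x, _ = jax.lax.scan(step, x, stacked_params)
    return final_x
```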
Reply
Self-explaining SAE features
neverix · 1y
  • We use our own judgement as a (potentially very inaccurate) proxy for explanation accuracy and let readers look at the feature dashboard interface on their own. We judge using a random sample of examples at different levels of activation. We had an automatic interpretation scoring pipeline that used Llama 3 70B, but we did not use it because (IIRC) it was too slow to run with multiple explanations per feature. Perhaps a method like this is now practical.
  • That is a pattern that happens frequently, but we're not confident enough to propose any particular functional form. The metric is sometimes thrown off by random spikes, by self-similarity gradually rising at larger scales, or by entropy peaking at the beginning. Because of this, there is still a lot of room for improvement in cases where a human (or maybe a peak-finding algorithm, sketched below) could do better than our linear metric.
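
As a rough illustration of what a peak-finding alternative to the linear metric could look like (not our pipeline; the prominence threshold and the shape of the input curve are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks

def pick_peak(curve, prominence=0.1):
    """Pick the most prominent local peak in a 1-D curve (e.g. per-position
    self-similarity), ignoring small random spikes and a slow upward drift
    that would fool a simple linear/argmax-style metric.

    Returns the index of the most prominent peak, or the argmax if no peak
    clears the prominence threshold.
    """
    curve = np.asarray(curve, dtype=float)
    peaks, props = find_peaks(curve, prominence=prominence)
    if len(peaks) == 0:
        return int(np.argmax(curve))
    return int(peaks[np.argmax(props["prominences"])])
```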
Reply
You should go to ML conferences
neverix · 1y
  • Genie: Generative Interactive Environments Bruce et al.

How is that paper alignment-relevant?

Reply
Research Report: Alternative sparsity methods for sparse autoencoders with OthelloGPT.
neverix · 1y

Freshman’s dream sparsity loss

A similar regularizer is known as Hoyer-Square.
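
For concreteness, a minimal sketch of the Hoyer-Square penalty, the squared ratio of the L1 to the L2 norm; the small eps for numerical stability is an added assumption.

```python
import jax.numpy as jnp

def hoyer_square(x, eps=1e-8):
    """Hoyer-Square sparsity penalty: the squared ratio of L1 to L2 norm.

    Scale-invariant: equals 1 when exactly one entry of x is nonzero and
    d when all d entries have equal magnitude, so minimizing it pushes
    activations toward sparsity without shrinking their overall scale.
    """
    l1 = jnp.sum(jnp.abs(x))
    l2_sq = jnp.sum(x ** 2)
    return l1 ** 2 / (l2_sq + eps)
```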

Pick a value for $k$ and a small $\epsilon \ge 0$. Then define the activation function $T_{k,\epsilon}$ in the following way. Given a vector $x$, let $b$ be the value of the $k$th-largest entry in $x$. Then define the vector $T_{k,\epsilon}(x)$ by

Is the $a$ in the following formula a typo?

Reply
200 COP in MI: Exploring Polysemanticity and Superposition
neverix · 2y

To clarify, I thought it was about superposition happening inside the projection afterwards.

Reply
200 COP in MI: Exploring Polysemanticity and Superposition
neverix · 2y

This happens in transformer MLP layers. Note that the hidden dimension […]

Is the point that transformer MLPs blow up the hidden dimension in the middle?
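
For reference, a minimal sketch of that expansion: a standard transformer MLP block whose hidden width is typically around 4x the residual-stream width (the function and parameter names below are illustrative):

```python
import jax
import jax.numpy as jnp

def transformer_mlp(x, W_in, b_in, W_out, b_out):
    """Standard transformer MLP block.

    x:     (d_model,) residual-stream vector
    W_in:  (d_model, d_mlp), with d_mlp typically 4 * d_model,
           so the hidden layer is wider than the residual stream
    W_out: (d_mlp, d_model), projecting back down
    """
    h = jax.nn.gelu(x @ W_in + b_in)  # expand to d_mlp and apply nonlinearity
    return h @ W_out + b_out          # project back to d_model
```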

Reply
Steering GPT-2-XL by adding an activation vector
neverix · 2y

Activation additions in generative models

 

Also related is https://arxiv.org/abs/2210.10960. They use a small neural network to generate steering vectors for the UNet bottleneck in diffusion to edit images using CLIP.

Reply
The Low-Hanging Fruit Prior and sloped valleys in the loss landscape
neverix · 2y

From a conversation on Discord:

Do you have in mind a way to weigh sequential learning into the actual prior?

Dmitry:

good question! We haven't thought about an explicit complexity measure that would give this prior, but a very loose approximation that we've been keeping in the back of our minds could be a Turing machine/Boolean circuit version of the "BIMT" weight penalty from this paper https://arxiv.org/abs/2305.08746 (which they show encourages modularity at least in toy models)

Response:

Hmm, BIMT seems to only be about intra-layer locality. It would certainly encourage learning an ensemble of features, but I'm not sure if it would capture the interesting bit, which I think is the fact that features are built up sequentially from earlier to later layers and changes are only accepted if they improve local loss.

I'm thinking about something like the existence of a relatively smooth scaling law (?) as the criterion.

So, just some smoothness constraint that would basically integrate over paths SGD could take.
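
To make the BIMT-style penalty mentioned above concrete, here is a rough sketch of a locality-weighted L1 cost; the neuron-coordinate assignment, the distance metric, and the coefficient are assumptions, and BIMT's neuron-swapping step is omitted.

```python
import jax.numpy as jnp

def bimt_locality_penalty(weights, pos_in, pos_out, lam=1e-3):
    """Sketch of a BIMT-style connection cost: each weight is penalized by
    its magnitude times the distance between the neurons it connects,
    encouraging local (and hence more modular) circuits.

    weights: (n_in, n_out) weight matrix of one layer
    pos_in:  (n_in, 2) assigned 2D coordinates of input neurons
    pos_out: (n_out, 2) assigned 2D coordinates of output neurons
    """
    # pairwise distances between input neuron i and output neuron j
    dist = jnp.linalg.norm(pos_in[:, None, :] - pos_out[None, :, :], axis=-1)
    return lam * jnp.sum(jnp.abs(weights) * dist)
```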

Reply
Inference from a Mathematical Description of an Existing Alignment Research: a proposal for an outer alignment research program
neverix · 2y

We can idealize the outer alignment solution as a logical inductor.

Why outer?

Reply
The "spelling miracle": GPT-3 spelling abilities and glitch tokens revisited
neverix · 2y

You could literally go through some giant corpus with an LLM and see which samples have gradients similar to those from training on a spelling task.
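
A minimal sketch of what that gradient comparison could look like, assuming a hypothetical loss_fn(params, batch) and treating one corpus sample as a batch:

```python
import jax
import jax.numpy as jnp

def flat_grad(loss_fn, params, batch):
    """Flatten the parameter gradient of loss_fn on one batch into a vector."""
    grads = jax.grad(loss_fn)(params, batch)
    leaves = jax.tree_util.tree_leaves(grads)
    return jnp.concatenate([jnp.ravel(g) for g in leaves])

def gradient_similarity(loss_fn, params, task_batch, corpus_sample):
    """Cosine similarity between the gradient on a spelling-task batch and
    the gradient on a candidate corpus sample; high similarity suggests the
    sample pushes the model in the same direction as spelling training."""
    g_task = flat_grad(loss_fn, params, task_batch)
    g_sample = flat_grad(loss_fn, params, corpus_sample)
    return jnp.dot(g_task, g_sample) / (
        jnp.linalg.norm(g_task) * jnp.linalg.norm(g_sample) + 1e-8)
```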

Reply
Posts

  • 28 · Lessons from a year of university AI safety field building · 3mo · 3 comments
  • 22 · Ω · Evolutionary prompt optimization for SAE feature visualization · 10mo · 0 comments
  • 29 · Ω · SAE features for refusal and sycophancy steering vectors · 1y · 4 comments
  • 31 · Ω · Extracting SAE task features for in-context learning · 1y · 1 comment
  • 62 · Ω · Self-explaining SAE features · 1y · 13 comments