Arthur Conmy

Interpretability

Views my own

Sequences

Attention Output SAEs

Comments

I suggested something similar, and this was the discussion (bolding is the important author pushback):
 

Arthur Conmy

11:33 1 Dec

Why can't the YC company skip system prompts and instead:

1) Detect whether regex has been used in the last ~100 tokens (and run this check every ~100 tokens of model output)

2) If yes, rewind back ~100 tokens, insert a comment like # Don't use regex here (in a valid way given what code has been written so far), and continue the generation

Dhruv Pai

10:50 2 Dec

This seems like a reasonable baseline, with the caveats that it requires expensive resampling and that inserting such a comment in a useful way is difficult.

When we ran baselines that simply repeated the instruction not to use regex multiple times in the system prompt right before generation, we didn't see instruction following improve (very circumstantial evidence). I don't see a principled reason why this would be much worse than the above, however, since we do one-shot generation with such a comment right before the actual generation.
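For concreteness, here is a minimal sketch of the rewind-and-insert loop I had in mind. The `generate_fn` callable and the regex detector are stand-ins for illustration, not any particular API; a real version would need to insert the comment at a syntactically valid point and stop at end of generation:

```python
# Sketch of chunked generation with a rewind-and-insert step when regex is detected.
# `generate_fn(prompt, max_new_tokens)` is an assumed stand-in that returns only the
# newly generated text; the detector below is a crude heuristic for Python code.
import re
from typing import Callable

CHUNK_TOKENS = 100
REGEX_MARKERS = re.compile(r"\bimport re\b|\bre\.(search|match|sub|findall|compile)\b")

def uses_regex(chunk: str) -> bool:
    """Crude check for regex usage in the most recent chunk of generated code."""
    return REGEX_MARKERS.search(chunk) is not None

def generate_without_regex(prompt: str,
                           generate_fn: Callable[[str, int], str],
                           max_chunks: int = 20) -> str:
    text = prompt
    for _ in range(max_chunks):
        chunk = generate_fn(text, CHUNK_TOKENS)
        if not chunk:
            break
        if uses_regex(chunk):
            # "Rewind": discard the offending chunk, steer with an in-context comment
            # (here naively placed on a new line), and let the model continue from it.
            text += "\n# Don't use regex here\n"
            continue
        text += chunk
    return text
```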

> The [Sparse Feature Circuits] approach can be seen as analogous to LoRA (Hu et al., 2021), in that you are constraining your model's behavior

 

FWIW I consider SFC and LoRA pretty different. In practice LoRA is practical to use, but it can be reversed very easily and has poor worst-case performance. Sparse Feature Circuits, by contrast, is very expensive and requires either far more nodes in bigger models (forthcoming, I think) or restricting the analysis to a subset of layers, but if it worked it would likely have far better worst-case performance.

This makes LoRA a good baseline for some SFC-style tasks, but the research experience using both is pretty different. 
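To make the "reversed very easily" point concrete, here is a toy LoRA-style wrapper (my own sketch, not Hu et al.'s reference implementation): the whole behavioural edit lives in a single low-rank term, so undoing it is one switch, whereas an SFC-style intervention is spread over many nodes.

```python
# Toy illustration: a frozen linear layer plus a low-rank LoRA update that can be
# disabled with a flag. All hyperparameters here are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.requires_grad_(False)       # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank
        self.enabled = True                   # flip to False to undo the edit

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        if self.enabled:
            # Low-rank additive update: equivalent to using W + scale * B @ A.
            out = out + self.scale * (x @ self.A.T) @ self.B.T
        return out

layer = LoRALinear(nn.Linear(512, 512))
x = torch.randn(2, 512)
steered = layer(x)
layer.enabled = False                         # "reversal" is a single switch
original = layer(x)
```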

I assume all the data is fairly noisy: scanning for the domain I know in https://raw.githubusercontent.com/Oscar-Delaney/safe_AI_papers/refs/heads/main/Automated%20categorization/final_output.csv, it misses ~half of the GDM Mech Interp output from the specified window and also mislabels https://arxiv.org/abs/2208.08345 and https://arxiv.org/abs/2407.13692 as Mech Interp (though two labels are applied to these papers and I didn't dig into which was used).

Arthur Conmy

> think hard about how joining a scaling lab might inhibit their future careers by e.g. creating a perception they are “corrupted”

Does this mean something like:

1. People who join scaling labs can have their values drift, and future safety employers will suspect by default that ex-scaling-lab staff have had their values drift, or
 

2. If there is a non-existential AGI disaster, scaling lab staff will be looked down upon

or something else entirely?

This is a great write-up, thanks! Has there been any follow-up from the paper's authors?

This seems like a pretty compelling takedown to me, and it is not addressed by the existing paper. (My understanding of the two WordNet experiments not discussed in the post: Figure 4 concerns whether, under whitening, a concept can be linearly separated (yes), so the random baseline used there does not address the concerns in this post; Figure 5 shows that the whitening transformation preserves some of the WordNet cluster cosine similarity, but moreover, on the right, basically everything is orthogonal, as found in this post.)
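As a small numerical aside (my own illustration, not one of the paper's experiments), near-orthogonality is the default for unrelated directions in high dimensions, which is part of why the choice of random baseline matters here:

```python
# Pairwise cosine similarities of random directions concentrate near zero in
# high dimensions, so "basically everything is orthogonal" is the default.
import numpy as np

rng = np.random.default_rng(0)
d, n = 4096, 200                      # illustrative dimension / number of "concepts"
vecs = rng.standard_normal((n, d))
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

cos = vecs @ vecs.T                   # pairwise cosine similarities
off_diag = cos[~np.eye(n, dtype=bool)]
print(f"mean |cos| = {np.abs(off_diag).mean():.3f}, max |cos| = {np.abs(off_diag).max():.3f}")
# With d = 4096 the typical |cos| is on the order of 1/sqrt(d) ~ 0.016.
```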

This seems important to me since the point of mech interp is to not be another interpretability field dominated by pretty pictures (e.g. saliency maps) that fail basic sanity checks (e.g. this paper for saliency maps). (Workshops aren't too important, but I'm still surprised about this.)

Arthur Conmy

My current best guess for why base models refuse so much is that "Sorry, I can't help with that. I don't know how to" is actually extremely common on the internet, based on discussion with Achyuta Rajaram on twitter: https://x.com/ArthurConmy/status/1840514842098106527

This fits with our observations about how frequently LLaMA-1 performs incompetent refusal

Arthur Conmy

> Qwen2 was explicitly trained on synthetic data from Qwen1.5

~~Where is the evidence for this claim? (Claude 3.5 Sonnet could also not find evidence on one rollout)~~
EDITED TO ADD: "these [Qwen] models are utilized to synthesize high-quality pre-training data" is clear evidence, I am being silly.


All other techniques mentioned here (e.g. filtering and adding more IT data at the end of training) still sound like models "trained to predict the next word on the internet" (I don't think the training samples being IID early and late in training is an important detail).

The Improbability Principle sounds close. The summary seems to suggest the law of large numbers is one part of the pop-science book, but admittedly some of the other parts ("probability lever") seem less relevant.

Arthur Conmy

> Is DM exploring this sort of stuff?

 

Yes. On the AGI safety and alignment team we are working on activation steering: e.g. Alex Turner, who invented the technique with collaborators, is working on this, and the first author of "a few tokens deep" is currently interning on the Gemini Safety team mentioned in this post. We don't have hard-and-fast lines between what counts as Gemini Safety and what counts as AGI safety and alignment, but several projects on AGI safety and alignment, and most projects on Gemini Safety, would see "safety practices we can test right now" as a research goal.
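For readers unfamiliar with the technique, here is a minimal activation-steering sketch in the spirit of Turner et al.'s ActAdd: compute a contrastive-difference vector and add it to the residual stream at one layer during generation. The model name, layer index, coefficient, and prompts are illustrative assumptions, and it presumes a Llama-style Hugging Face layout where decoder blocks sit at `model.model.layers`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2-0.5B"    # illustrative choice of a small open model
LAYER, COEFF = 10, 4.0        # illustrative layer index and steering strength

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_residual(prompt: str) -> torch.Tensor:
    """Mean residual-stream activation after block LAYER (hidden_states[0] is the embeddings)."""
    with torch.no_grad():
        out = model(**tok(prompt, return_tensors="pt"), output_hidden_states=True)
    return out.hidden_states[LAYER + 1][0].mean(dim=0)

# Steering vector: difference of activations on a contrastive prompt pair (ActAdd-style).
steer = COEFF * (mean_residual("I love talking about weddings")
                 - mean_residual("I hate talking about weddings"))

def add_steering(module, inputs, output):
    # Decoder blocks are assumed to return a tuple whose first element is the
    # hidden states; adjust if your transformers version returns a bare tensor.
    if isinstance(output, tuple):
        return (output[0] + steer,) + output[1:]
    return output + steer

handle = model.model.layers[LAYER].register_forward_hook(add_steering)
try:
    ids = tok("I went up to my friend and said", return_tensors="pt")
    print(tok.decode(model.generate(**ids, max_new_tokens=30)[0]))
finally:
    handle.remove()
```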
