Although not "circuit-style," this could also be considered one of the attempts outlined by Mack et al. (2024).
https://www.lesswrong.com/posts/ioPnHKFyy4Cw2Gr2x/#:~:text=Unsupervised%20steering%20as,more%20distributed%20circuits.
"some issues related to causal interpretations"
Could you point to the specific line in Marks et al. that you're referring to?
Do you also conclude that the causal role of the circuit you discovered was spurious? What would be a better way to incorporate the sample-level variance you mention when measuring the effectiveness of an SAE feature or steering vector? (i.e., should a good metric of causal importance require an increase at both the sample and the population level?)
Could you also link to an example where a causal intervention satisfied the above-mentioned criteria (or your own alternative that was not mentioned in this post)?
Is there a codebase for the supervised dictionary work?
"I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes."
As artificial intelligence continues to advance and demonstrate increasingly impressive creative capabilities, it raises important questions about the role and value of human creativity. While AI has the potential to enhance and augment human creativity in many ways, it also threatens to compress or diminish it in certain domains. This essay explores the complex interplay between AI and human creativity,...
Thank you for the feedback, and thanks for sharing this.
Who else is actively pursuing sparse feature circuits besides Sam Marks? I'm curious because the code breaks in the forward pass of the linear layer on GPT-2, since its dimensions differ from Pythia's (768).
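For anyone hitting the same shape mismatch, here is a minimal sketch of sizing the dictionary from the model's hidden dimension instead of hardcoding it. The model names, hidden sizes, and helper names below are illustrative assumptions, not taken from the sparse-feature-circuits codebase; in practice you would read the size from the model config rather than a lookup table.

```python
import numpy as np

# Assumed hidden sizes for two example checkpoints (illustration only);
# in practice read AutoConfig.from_pretrained(name).hidden_size instead.
ASSUMED_HIDDEN = {"gpt2": 768, "EleutherAI/pythia-410m": 1024}

def make_dictionary(model_name, n_features=512, seed=0):
    """Build a random dictionary matrix sized to the model's residual stream."""
    d_model = ASSUMED_HIDDEN[model_name]
    rng = np.random.default_rng(seed)
    return rng.standard_normal((d_model, n_features))

def encode(acts, W):
    """Project activations (batch, seq, d_model) onto dictionary features.

    Fails loudly if the activation width does not match the dictionary,
    instead of erroring deep inside a forward pass.
    """
    assert acts.shape[-1] == W.shape[0], (
        f"activation dim {acts.shape[-1]} != dictionary dim {W.shape[0]}")
    return acts @ W

W = make_dictionary("gpt2")
acts = np.random.randn(1, 4, 768)  # fake GPT-2 residual-stream batch
print(encode(acts, W).shape)        # (1, 4, 512)
```

The point is only that the projection layer should be constructed per-model; any code that bakes in one model's width will break on checkpoints with a different residual-stream size.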
I agree with you there. There are numerous benefits to being an autodidact (freedom to learn what you want, less pressure from authorities), but formal education offers more mentorship. For most people, the desire to learn is often not enough, even with the increased accessibility of information, as the material gets more complex.
Do you see possible dangers of closed-loop automated interpretability systems as well?
"the only aligned AIs are those which are digital emulations of human brains"
I don't think this is necessarily true. Emulated human brains aren't necessary for full alignment, nor is it clear that they would be more aligned than a well-calibrated, scaled-up version of our current alignment techniques (plus new ones to be discovered in the next few years). Emulating the entire human brain to align values seems not only implausible (even with neuromorphic computing, efficient neural networks, and Moore's law^1000), it seems like overkill...
What's the difference between "having a representation" for uppercase/lowercase and using that representation to solve an MCQ or A/B test? From your investigations, do you have intuitions about the mechanism behind this disconnect? I'm interested in what might cause these models to perform poorly despite having representations that, at least to us humans, seem relevant to solving the task.
Considering that the tokenizer architecture for Mistral-7B probably includes a case-sensitive dictionary (https://discuss.huggingface.co/t/case-sensitivity...
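One quick way to test the case-sensitivity assumption would be to compare the token ids a tokenizer assigns to cased variants of a probe word. The sketch below uses a toy stand-in vocabulary so it runs offline; the commented-out `transformers` lines show how the same check could be run against the real Mistral-7B tokenizer (model name and call from memory, so verify against the docs).

```python
def vocab_is_case_sensitive(encode, probe="hello"):
    """True if the tokenizer assigns different token ids to cased variants."""
    return encode(probe) != encode(probe.capitalize())

# With transformers installed (requires a network fetch, hence commented out):
#   from transformers import AutoTokenizer
#   tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
#   print(vocab_is_case_sensitive(
#       lambda s: tok.encode(s, add_special_tokens=False)))

# Toy stand-in vocabulary, for illustration only:
toy_vocab = {"hello": [42], "Hello": [7]}
print(vocab_is_case_sensitive(lambda s: toy_vocab[s]))  # True
```

If the real tokenizer returns different id sequences for "hello" and "Hello", the model does receive case information at the input, which makes the representation-vs-use disconnect above more puzzling.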
Hey, thanks for the reply. Yes, we tried k-means and agglomerative clustering, and they produced mixed results.
We'll try PaCMAP instead and see if it is better!
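Before switching to PaCMAP, one cheap sanity check is to quantify how much the two clusterings we already have agree with each other, e.g. via the adjusted Rand index. A sketch on toy data (the two-blob data and cluster count are assumptions for illustration, not our actual features):

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

# Toy stand-in for feature activations: two well-separated blobs in 8-d.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (50, 8)),
               rng.normal(3.0, 0.3, (50, 8))])

km_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
agg_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# ARI near 1.0 means the two methods found essentially the same partition;
# low ARI suggests the cluster structure itself is unstable, in which case
# a different embedding (e.g. PaCMAP) may help more than a new clusterer.
print(adjusted_rand_score(km_labels, agg_labels))
```

If the two methods already agree and the results are still "mixed," the problem is more likely the feature space than the clustering algorithm, which is where an embedding like PaCMAP could help.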