Fixed, thanks!
Ok, so why not just train a model on fake anomaly-detection/interp research papers? Fake stories about 'the bad AI that got caught', 'the little AI that overstepped', and so on. I don't know quite how to word it, but this seems closer to intimidation than to alignment, which I don't think makes much sense as a strategy intended to keep us all alive.
I don't know if this is just hindsight, but tracr has not turned out to be safety-relevant in any way. Was it considered safety-relevant at the time of commenting?
(I think a lot of unlearning research is bullshit, but setting that aside, is anyone who deploys large models actually doing unlearning?)
Why do you think this? Is there specific research you have in mind? Some kind of reference would be nice. In the general case, it seems to me that unlearning matters because knowing how to effectively remove something from a model is just the flip side of understanding how to instill values. Although it is not the primary goal of unlearning, work on how to 'remove' should equally benefit attempts to 'instill' robust values into the model...
Networks that have to learn more features may become more vulnerable to adversaries simply because the adversary has more features to leverage, and those features are represented more densely.
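To make the intuition concrete, here is a rough sketch of the kind of experiment I have in mind (not the post's actual setup: I'm assuming the ReLU(WᵀWx + b) architecture from Toy Models of Superposition, made-up hyperparameters, and a single FGSM-style signed-gradient step as the "adversary"):

```python
import torch

torch.manual_seed(0)

n_features, n_hidden = 20, 5          # more features than hidden dims -> superposition
W = torch.nn.Parameter(0.1 * torch.randn(n_hidden, n_features))
b = torch.nn.Parameter(torch.zeros(n_features))

def model(x):
    # Toy-model architecture: x' = ReLU(W^T W x + b)
    return torch.relu(x @ W.T @ W + b)

def sample_batch(batch_size, sparsity):
    # Each feature is active with probability (1 - sparsity), uniform in [0, 1]
    x = torch.rand(batch_size, n_features)
    mask = (torch.rand(batch_size, n_features) < (1 - sparsity)).float()
    return x * mask

opt = torch.optim.Adam([W, b], lr=1e-2)
for _ in range(3000):
    x = sample_batch(1024, sparsity=0.9)   # high sparsity -> features packed densely
    loss = ((model(x) - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# FGSM-style probe: one signed-gradient step on the input, then re-measure
# reconstruction error on the perturbed input.
eps = 0.05
x = sample_batch(1024, sparsity=0.9).requires_grad_(True)
clean_loss = ((model(x) - x) ** 2).mean()
clean_loss.backward()
x_adv = (x + eps * x.grad.sign()).detach()
adv_loss = ((model(x_adv) - x_adv) ** 2).mean()
print(f"clean loss: {clean_loss.item():.5f}  adversarial loss: {adv_loss.item():.5f}")
```

Sweeping the sparsity (and hence the degree of superposition) and comparing the clean/adversarial gap would be one way to check whether denser feature packing actually makes the worst-case perturbation more damaging.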
Also, in the top figure the loss is 'relative to the non-superposition model', but if I'm not mistaken the non-superposition model should be essentially perfectly robust. Because it's just one layer, its Jacobian would be the identity, and because the loss is MSE, any perturbation to the input would be reflected only in the corresponding output feature, meaning no change in loss w...
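Spelling out the step I'm gesturing at (assuming the non-superposition model acts as the identity on the represented features, so f(x) ≈ x and its Jacobian is I, and assuming the reconstruction target is the perturbed input itself):

$$L(x+\delta) \;=\; \| f(x+\delta) - (x+\delta) \|^2 \;\approx\; \| (x+\delta) - (x+\delta) \|^2 \;=\; 0$$

So, to the extent the model really is the identity, the loss stays at roughly zero for any perturbation direction, which is what I mean by "perfectly robust".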
Yep, thanks for the suggestion. I also think Zach's comment is very helpful and I'm planning to edit the post to include this and some of the stuff he mentioned.
To answer your other questions:
This is a nice idea and was one of the experiments I didn't get around to running, although I don't expect it to be the best way to integrate info...