All of Benjamin Wright's Comments + Replies

One explanation for pathological errors is feature suppression/feature shrinkage (link). I'd be interested to see whether the errors are still pathological if you instead use the finetuning methodology I proposed to fix shrinkage. Your method of fixing the norm of the input is close, but not quite the same.

wesg
Right, I suppose there could be two reasons scale finetuning works:

1. The L1 penalty reduces the norm of the reconstruction, but does so proportionally across all active features, so a ~uniform boost in scale can mostly fix the reconstruction.
2. Due to activation magnitude, frequency, or something else, features are inconsistently suppressed and therefore need to be scaled in the correct proportion.

The SAE-norm patch baseline tests (1), but since your results show scale factors varying within 1-2x, it seems more likely your improvements come from (2). I don't see your code, but you could test this easily by evaluating your SAEs with this hook.
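[The hook itself isn't included in this excerpt. Below is a minimal sketch of what such a norm-matching patch might look like, assuming a PyTorch forward hook and an `sae` object that returns the reconstruction directly; the function name and hooked module path are illustrative, not the commenter's actual code.]

```python
import torch

def make_norm_patched_sae_hook(sae, eps: float = 1e-8):
    """Forward hook that swaps a module's output for its SAE reconstruction,
    rescaled so the reconstruction's norm matches the original activation's norm
    (i.e. a uniform per-position scale fix, testing reason (1) above)."""
    def hook(module, inputs, output):
        x = output                      # assumed shape [batch, seq, d_model]
        x_hat = sae(x)                  # assumes the SAE returns the reconstruction
        scale = x.norm(dim=-1, keepdim=True) / (x_hat.norm(dim=-1, keepdim=True) + eps)
        return x_hat * scale            # returning a value replaces the module's output
    return hook

# Hypothetical usage on a GPT-2-style model:
# handle = model.transformer.h[6].register_forward_hook(make_norm_patched_sae_hook(sae))
# ...run the perplexity eval...
# handle.remove()
```

If evaluating the finetuned SAEs with this norm-matched patch recovers most of the improvement, that would point to (1); if not, the gains more likely come from per-feature rescaling as in (2).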

The original perplexity of the LLM was ~38 on the OpenWebText slice I used. Thanks for the compliments!