All of eca's Comments + Replies

eca

Really appreciate the response :)

Totally acknowledge the limitations you outlined.

I was aiming to construct an example that would illustrate how the loss metric breaks in a black-box setting (where X and Y are too gnarly to visualize). In that case you have no clue that your model implements sin(x), so I don't see how that could be the goal. In the black-box setting you do get access to the distance between scrubbed_y and y_true (loss) and the distance between scrubbed_y and original_y (my proposal; let's call it output distance). When you look at loss, it is po...
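The contrast between the two metrics can be sketched numerically. This is a toy illustration, not anything from the thread: the synthetic data, the sin/cos stand-ins for the original and scrubbed models, and all names here are assumptions.

```python
import numpy as np

# Toy black-box setting: we only see inputs x and targets y_true.
rng = np.random.default_rng(0)
x = rng.uniform(-np.pi, np.pi, size=20_000)
y_true = rng.normal(size=x.shape)  # targets unrelated to x, so any fit is poor

original_y = np.sin(x)   # stand-in for the original model's outputs
scrubbed_y = np.cos(x)   # stand-in for a scrubbed model whose mapping changed

def loss(pred):
    """Distance to the true targets (the usual scrubbing metric)."""
    return np.mean((pred - y_true) ** 2)

def output_distance(pred):
    """Distance to the original model's outputs (the proposed metric)."""
    return np.mean((pred - original_y) ** 2)

# Loss barely moves even though the X -> Y mapping changed completely...
print(loss(original_y), loss(scrubbed_y))
# ...while output distance flags the change immediately.
print(output_distance(scrubbed_y))
```

On this data both models have essentially equal loss, so loss alone cannot tell you the scrub rewrote the mapping; output distance can.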

LawrenceC
It doesn't actually fix the problem! Suppose your model's behavior worked as follows:

f(x) = sin(x) + sin(x) − sin(x)

That is, there are three components, two of which exhibit the behavior and one of which is inhibitory. (For example, we see something similar in the IOI paper with name movers and backup name movers.) Then if you find a single circuit of the form sin(x), you would still be missing important parts of the network. That is, close model outputs don't guarantee that you've correctly captured all the considerations, since you can still miss considerations that "cancel out". (Though close-output metrics will have fewer false positives.)

However, swapping from low loss to close outputs requires sacrificing other nice properties you want. For example, while loss is an inherently meaningful metric, KL divergence or L2 distance to the original outputs is rarely the thing you care about. And the biggest issue is that you have to explain a bunch of noise, which we might not care about.

Of course, I still encourage people to think about what their metrics are actually measuring, and what they could be failing to capture. And if your circuit is good according to one metric but bad according to all the others, there's a good chance that you've overfit to that metric!
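The cancellation point can be sketched in a toy setting. The three-component decomposition below is hypothetical, following the f(x) = sin(x) + sin(x) − sin(x) example above:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 201)

# Three hypothetical components: two excitatory copies, one inhibitory.
comp_a = np.sin(x)
comp_b = np.sin(x)
comp_c = -np.sin(x)

full = comp_a + comp_b + comp_c   # the model: f(x) = sin(x) overall
circuit = comp_a                  # hypothesized circuit: a single sin(x) component

# The single-component circuit matches the full model's outputs exactly...
assert np.allclose(full, circuit)

# ...yet it misses comp_b and comp_c entirely: scrubbing those two away
# together changes nothing, so an output-distance metric gives no signal
# that important (mutually canceling) parts of the network were dropped.
scrubbed = full - comp_b - comp_c
assert np.allclose(scrubbed, circuit)
```

Both asserts pass: close outputs here say nothing about the two components that cancel out.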
eca

Great stuff! Excited to see this extended and applied. I hope to dive deeper into this series and your follow-up work.

Came to the appendix for 2.2 on metrics; I still feel curious about the metric choice.

I'm trying to figure out why this is wrong: "loss is not a good basis for a primary metric, even though it's worth looking at and intuitive, because it hides potentially large and important changes to the X → Y mapping learned by the network that have equivalent loss. Instead, we should just measure how yscrubbed_i has changed from yhat_i (original model) at eac...

LawrenceC
I think this is a great question. First, we do mention this a bit in 2.2. We also discuss this a bit in the Limitations section of the main post, specifically the part starting:

That being said, it's worth noting that your performance metric gives you an important degree of freedom. In your case, if the goal is to explain "why the predictor computes y = sin(x)", it makes more sense to use the performance metric |sin(x) − model(x)|. If you use the metric (model(x) − y)^2, you're trying to explain why the predictor sin(x) does as well (or as poorly) as it does. In which case, yes, cos(x) does as poorly as sin(x) on the data.
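The degree of freedom in the performance metric can be made concrete with a toy sketch. Everything here is assumed for illustration: the data on which sin(x) predicts poorly, and the two candidate metrics.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-np.pi, np.pi, size=20_000)
y = rng.normal(size=x.shape)  # data on which sin(x) is a poor predictor

sin_pred = np.sin(x)
cos_pred = np.cos(x)

def match_sin(pred):
    """Metric |sin(x) - model(x)|: explains 'why does the model compute sin(x)?'"""
    return np.mean(np.abs(np.sin(x) - pred))

def data_loss(pred):
    """Metric (model(x) - y)^2: explains 'why does sin(x) do as well as it does?'"""
    return np.mean((pred - y) ** 2)

# Under |sin(x) - model(x)|, sin and cos are sharply distinguished:
print(match_sin(sin_pred), match_sin(cos_pred))
# Under (model(x) - y)^2 on this data, cos does about as poorly as sin:
print(data_loss(sin_pred), data_loss(cos_pred))
```

The first metric separates the two hypotheses cleanly; the second, on this data, cannot tell them apart, which is the point about choosing the performance metric to match what you want explained.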