Louis Jaburi

https://cogeometry.com/

Comments


I agree with the previous points, but I would also add historical events that led to this.
Pre-WWI Germany was much more important and played the role that France plays today (maybe even more central); see the University of Göttingen at the time.

After two world wars the German mathematics community was in shambles, with many mathematicians fleeing during that period (Grothendieck, Artin, Gödel, ...). The University of Bonn (and the MPI) was Hirzebruch's post-war project to rebuild the mathematics community in Germany.

I assume France was then able to rise as the hotspot, and I would be curious to imagine what would have happened in an alternative timeline.

In our toy example, I would intuitively associate the LLC with the test loss rather than the train loss. During training of a single model, it has been observed that test loss and LLC are correlated. Plausibly, for this simple model the (final) LLC, train loss, and test loss are all closely related.

We haven't seen that empirically with the usual regularization methods, so I assume there must be something special going on with the training setup.

I wonder if this phenomenon is partially explained by scaling up the embedding and scaling down the unembedding by a factor (or vice versa). That should leave the LLC constant, but it will change the L2 norm.
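As a minimal sketch of what I mean (pure numpy, and deliberately linear so the rescaling is an exact symmetry; in a real network the layers in between can complicate this, and the shapes here are arbitrary):

```python
# Minimal numpy sketch: rescaling the embedding by alpha and the unembedding by
# 1/alpha leaves the computed function unchanged (so function-level quantities
# like the LLC should be unaffected), while the L2 norm of the weights changes.
import numpy as np

rng = np.random.default_rng(0)
W_embed = rng.normal(size=(8, 4))     # toy embedding
W_unembed = rng.normal(size=(4, 8))   # toy unembedding
x = rng.normal(size=(5, 8))           # batch of toy inputs


def forward(W_e, W_u, x):
    return x @ W_e @ W_u


alpha = 3.0
out_before = forward(W_embed, W_unembed, x)
out_after = forward(alpha * W_embed, W_unembed / alpha, x)

l2 = lambda *Ws: sum(float((W ** 2).sum()) for W in Ws)
print(np.allclose(out_before, out_after))                             # True: same function
print(l2(W_embed, W_unembed), l2(alpha * W_embed, W_unembed / alpha))  # different L2 norms
```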

The relevant question then becomes whether the "SGLD" sampling techniques used in SLT for measuring the free energy (or technically its derivative) actually converge to reasonable values in polynomial time. This is checked pretty extensively in this paper for example.

The linked paper considers only large models that are DLNs (deep linear networks). I don't find this to be very compelling evidence for large models with non-linearities. Other measurements I have seen for bigger/deeper non-linear models seem promising, but I wouldn't call them robust yet (though it is not clear to me whether this is an SGLD implementation/hyperparameter issue or a more fundamental problem).

As long as I don't have a clearer picture of the relationship between free energy and training dynamics under SGD, I agree with OP that the claim is too strong.
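For concreteness, here is a rough sketch of the kind of SGLD-based LLC estimate I have in mind (the LLC being, essentially, the derivative of the free energy in question). All hyperparameters are illustrative, and they are exactly the part that seems sensitive in practice:

```python
# Rough sketch (PyTorch) of an SGLD-based LLC estimate of the form
#   lambda_hat ≈ n * beta * ( E_w[L_n(w)] - L_n(w*) ),
# with the expectation over SGLD samples from a posterior localized at the
# trained weights w*. Hyperparameters (eps, gamma, beta, num_steps) are
# illustrative; burn-in, multiple chains, and restoring the original weights
# are omitted for brevity, and a real run would need all of them.
import torch


def estimate_llc(model, loss_fn, batches, n, eps=1e-5, gamma=100.0,
                 beta=None, num_steps=1000):
    """batches: list of (x, y) minibatches; n: size of the training set defining L_n."""
    if beta is None:
        beta = 1.0 / torch.log(torch.tensor(float(n))).item()
    w_star = [p.detach().clone() for p in model.parameters()]

    # Reference loss at the trained point w* (one fixed batch as a crude proxy for L_n).
    with torch.no_grad():
        x0, y0 = batches[0]
        loss_star = loss_fn(model(x0), y0).item()

    running_loss, num_samples = 0.0, 0
    for step in range(num_steps):
        x, y = batches[step % len(batches)]
        model.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        with torch.no_grad():
            for p, p0 in zip(model.parameters(), w_star):
                # Tempered, localized log-posterior gradient plus Gaussian noise.
                drift = n * beta * p.grad + gamma * (p - p0)
                p.add_(-0.5 * eps * drift + (eps ** 0.5) * torch.randn_like(p))
        running_loss += loss.item()
        num_samples += 1

    return n * beta * (running_loss / num_samples - loss_star)
```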

Did you use something like  as described here? By brittle, do you mean w.r.t. the sparsity penalty (and other hyperparameters)?

Thanks for the reference. I wanted to illustrate the value of gradients of activations in this toy example, as I have been thinking about similar ideas.

I personally would be pretty excited about attribution dictionary learning, but it seems like nobody has done that on bigger models yet.
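To make explicit what I imagine by "attribution dictionary learning" (this is my own hypothetical sketch, not an existing implementation): fit a sparse autoencoder to attribution vectors a ⊙ ∂metric/∂a instead of to the raw activations a.

```python
# Hypothetical sketch of "attribution dictionary learning" (PyTorch): fit a
# sparse autoencoder to attribution vectors a * d(metric)/da rather than to
# the raw activations a. The SAE, the hook, and metric_fn are all stand-ins.
import torch
import torch.nn as nn


class SparseAutoencoder(nn.Module):
    def __init__(self, d_act, d_dict):
        super().__init__()
        self.enc = nn.Linear(d_act, d_dict)
        self.dec = nn.Linear(d_dict, d_act)

    def forward(self, x):
        code = torch.relu(self.enc(x))
        return self.dec(code), code


def attribution_vectors(model, layer, metric_fn, x):
    """Return a * d(metric)/da for the activations a of `layer` on inputs x.
    metric_fn must map the model output to a scalar."""
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    metric = metric_fn(model(x))
    handle.remove()
    a = acts["a"]
    (grad,) = torch.autograd.grad(metric, a)
    return (a * grad).detach()


def train_step(sae, optimizer, attr, l1_coeff=1e-3):
    # Reconstruct the attributions under an L1 sparsity penalty on the codes.
    recon, code = sae(attr)
    loss = ((recon - attr) ** 2).mean() + l1_coeff * code.abs().mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```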

Are you suggesting that there should be a formula, similar to the one in Proposition 5.1 (or 5.2), that links information about the activations  with the LC as a measure of basin flatness?

I played around with the  example as well and got similar results. I was wondering why there are two dominant PCs: if you assume there is no bias, then the activations will all look like  or , and I checked that the two directions found by the PCA approximately span the same space as . I suspect something similar is happening with the bias.

In this specific example there is a way to get the true direction w_out from the activations: by doing a PCA on the gradients of the activations. In this case it is easily explained by computing the gradients by hand: each one is a multiple of w_out.
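A small numpy sketch of this (the model, shapes, and loss are just illustrative): in a toy y = w_out · ReLU(W_in x) with squared-error loss, the gradient of the loss w.r.t. the hidden activations is (dL/dy) · w_out, so a PCA over those gradients recovers the w_out direction as its top component.

```python
# Toy model y = w_out . relu(W_in x) with squared-error loss: the gradient of
# the loss w.r.t. the hidden activations is (dL/dy) * w_out, i.e. always a
# scalar multiple of w_out, so PCA on those gradients recovers its direction.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hidden, n = 10, 32, 500
W_in = rng.normal(size=(d_hidden, d_in))
w_out = rng.normal(size=d_hidden)

X = rng.normal(size=(n, d_in))
targets = rng.normal(size=n)

H = np.maximum(X @ W_in.T, 0.0)                           # hidden activations, (n, d_hidden)
y = H @ w_out                                             # model outputs
grads = (2.0 * (y - targets))[:, None] * w_out[None, :]   # dL/dH, one row per example

# PCA on the gradients: top right-singular vector of the centered gradient matrix.
_, _, Vt = np.linalg.svd(grads - grads.mean(axis=0), full_matrices=False)
top_pc = Vt[0]

cos_sim = abs(top_pc @ w_out) / np.linalg.norm(w_out)
print(f"|cos| between top PC of gradients and w_out: {cos_sim:.4f}")  # ~1.0
```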

Using ZIP as a compression metric for NNs (I assume you do something along the lines of "take all the weights, line them up, and then ZIP") is unintuitive to me for the following reason:
ZIP (though really this should apply to any other coding scheme that just tries to compress the weights by themselves) picks up on statistical patterns in the raw weights. But NNs are not simply a list of floats; they are arranged in a highly structured manner. The weights get turned into functions, and it is 1. the functions, and 2. the way the functions interact, that we are ultimately trying to understand (and therefore compress).

To wit, a simple example for the first point: assume that inside your model is a 2x2 matrix with entries M = [0.809017, -0.587785, 0.587785, 0.809017]. Storing it like this will cost you a few bytes, and if you compress it you can roughly halve the cost, I believe. But really there is a much more compact way to store this information: this matrix represents a rotation by 36 degrees. Storing it that way requires only about a byte.
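A tiny sketch of that comparison (zlib over the raw float bytes versus a one-byte encoding of the angle; on an input this small zlib's own header overhead dominates, but the gap to the semantic encoding is the point):

```python
# zlib sees only the raw float bytes of the rotation matrix, while the
# "semantic" encoding is a single byte holding the angle in degrees. Only an
# interpretation of the matrix as a rotation unlocks the one-byte encoding.
import struct
import zlib

import numpy as np

theta = np.deg2rad(36)
M = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=np.float32)

raw = M.tobytes()                      # 4 float32 values = 16 bytes
zipped = zlib.compress(raw, 9)         # generic byte-level compression
angle_encoding = struct.pack("B", 36)  # 1 byte: the rotation angle in degrees

print(len(raw), len(zipped), len(angle_encoding))
```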

This phenomenon should get worse for bigger models. One reason is the following: if we believe that the NN uses superposition, then there will be an overcomplete basis in which all the computations are done (more) sparsely. If we don't factor that in, then ZIP will not capture such information (caveat: this is my intuition, I don't have empirical results to back it up).

I think ZIP might pick up some structure (see e.g. here), just as in my example above it would pick up some sort of symmetry. But the decoder/encoder in your compression scheme should include/have access to more information regarding the model you are compressing. You might want to check out this post for an attempt at compressing model performance using interpretations.
