This is a linkpost for https://arxiv.org/abs/2501.18812

(adapted from Nora's tweet thread here.)

What are the chances you'd get a fully functional language model by randomly guessing the weights?

We crunched the numbers and here's the answer:


We've developed a method for estimating the probability of sampling a neural network in a behaviorally-defined region from a Gaussian or uniform prior.

You can think of this as a measure of complexity: less probable means more complex.

It works by exploring random directions in weight space, starting from an "anchor" network.

The distance from the anchor to the edge of the region, along the random direction, gives us an estimate of how big (or how probable) the region is as a whole.
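To make the ray-based idea concrete, here is a minimal sketch with several simplifications of our own: it measures plain Lebesgue volume rather than Gaussian or uniform prior mass, it assumes the region is star-shaped around the anchor, and the `inside_region` predicate and bisection scheme are hypothetical stand-ins rather than the paper's actual code.

```python
import math
import torch

def boundary_distance(anchor, direction, inside_region, r_init=1.0, tol=1e-3):
    """Bisect for the distance from the anchor to the edge of the region along
    a unit direction. `inside_region(params)` is a user-supplied predicate,
    e.g. "the KL divergence between this network's outputs and the anchor's
    outputs is below some cutoff"."""
    lo, hi = 0.0, r_init
    # Grow the bracket until the ray exits the region (assumes it eventually does).
    while inside_region(anchor + hi * direction):
        lo, hi = hi, 2.0 * hi
    # Bisect down to the boundary.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if inside_region(anchor + mid * direction):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def log_volume_estimate(anchor, inside_region, n_dirs=100):
    """Naive Monte Carlo log-volume of a star-shaped region around `anchor`:
    V = vol(unit d-ball) * E_u[ r(u)^d ] over uniform unit directions u.
    In high dimension this average is dominated by rare long directions,
    which is what the importance sampling described below is for."""
    d = anchor.numel()
    log_terms = []
    for _ in range(n_dirs):
        u = torch.randn_like(anchor)
        u = u / u.norm()
        r = boundary_distance(anchor, u, inside_region)
        log_terms.append(d * math.log(r))  # log r(u)^d
    # log E[r^d], computed in log space for numerical stability.
    log_mean = torch.logsumexp(torch.tensor(log_terms), 0) - math.log(n_dirs)
    # log volume of the unit d-ball: (d/2) log(pi) - log Gamma(d/2 + 1).
    log_unit_ball = 0.5 * d * math.log(math.pi) - math.lgamma(d / 2 + 1)
    return log_unit_ball + log_mean
```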


But the total volume can be strongly influenced by a small number of outlier directions, which are hard to sample in high dimension (think of a big, flat pancake).

Importance sampling using gradient info helps address this issue by making us more likely to sample outliers.
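Here is a hedged sketch of what such a scheme could look like, assuming a diagonal angular-Gaussian proposal shaped by per-parameter gradient magnitudes; this particular proposal is our illustration, not necessarily what the paper does.

```python
import torch

def acg_direction_and_log_weight(scales):
    """Sample a unit direction from an angular central Gaussian proposal with
    diagonal covariance diag(scales**2), and return the log importance weight
    relative to the uniform distribution on the sphere.

    Setting scales = 1 / (grad.abs() + eps) makes flat (low-gradient)
    directions -- where the region tends to extend farthest -- more likely to
    be sampled. This choice of scales is an assumption for illustration;
    `scales` must be positive."""
    d = scales.numel()
    z = torch.randn_like(scales) * scales
    u = z / z.norm()
    # Angular central Gaussian density relative to uniform on the sphere:
    #   q(u) / p_unif(u) = det(Sigma)^(-1/2) * (u^T Sigma^{-1} u)^(-d/2),
    # so the log importance weight is
    #   log w = log p_unif - log q
    #         = 0.5 * logdet(Sigma) + (d/2) * log(u^T Sigma^{-1} u).
    logdet_sigma = 2.0 * torch.log(scales).sum()
    quad = (u ** 2 / scales ** 2).sum()
    log_w = 0.5 * logdet_sigma + 0.5 * d * torch.log(quad)
    return u, log_w
```

Plugged into the sketch above, each $d \log r(u)$ term would pick up the corresponding $\log w$ before the log-sum-exp, keeping the estimate unbiased under the biased proposal.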


We find that the probability of sampling a network at random, or "local volume" for short, decreases exponentially as the network is trained.

And networks that memorize their training data without generalizing have lower local volume (higher complexity) than generalizing ones.


We're interested in this line of work for two reasons:

First, it sheds light on how deep learning works. The "volume hypothesis" says deep learning is similar to randomly sampling, from weight space, a network that achieves low training loss. (This is roughly equivalent to Bayesian inference over weight space.) But the hypothesis can't be tested if we can't measure volume.
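Roughly (this is a loose paraphrase rather than a formula from the paper), the hypothesis says the chance that training lands in a behavioral region R is just R's share of the prior mass of the low-loss set:

\[
\Pr(w \in R \mid \text{training})
\;\approx\;
\frac{\Pr_{w \sim \pi}\!\left(w \in R \text{ and } L_{\text{train}}(w) \le \varepsilon\right)}
     {\Pr_{w \sim \pi}\!\left(L_{\text{train}}(w) \le \varepsilon\right)},
\]

where $\pi$ is the prior over weights (Gaussian or uniform) and $\varepsilon$ is the loss cutoff. This is Bayesian inference with a zero-one likelihood, and testing it requires estimating exactly these prior-mass ("local volume") terms.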

Second, we speculate that complexity measures like this could be useful for detecting undesired "extra reasoning" in deep nets. We want networks to be aligned with our values instinctively, without scheming about whether this would be consistent with some ulterior motive (https://arxiv.org/abs/2311.08379).

Our code is available (and under active development) here.

Comments

How does the performance of this compare to the SGLD sampling approach used by Timaeus, or to bounding the volume by just calculating the low-lying parts of the Hessian eigenspectrum? Or, to go even hackier and cheaper, just guessing the Hessian eigenspectrum with a KFAC approximation by doing a PCA of the activations and gradients at every layer and counting the zero eigenvalues of those?

(For all of those approaches, I'd use the loss landscape/Hessian of the behavioural loss defined in section 2.2 of that last link, since you want to measure the volume of a behavioural region.)

If you're wondering if this has a connection to Singular Learning Theory: Yup!

In SLT terms, we've developed a method for measuring the constant (with respect to n) term in the free energy, whereas LLC measures the log(n) term. Or if you like the thermodynamic analogy, LLC is the heat capacity and log(local volume) is the Gibbs entropy.
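For reference, a loose statement of the standard SLT free-energy expansion around a point $w^*$ (schematic, with lower-order details glossed over):

\[
F_n \;\approx\; n L_n(w^*) \;+\; \lambda \log n \;-\; (m - 1)\log\log n \;+\; c \;+\; o(1),
\]

where $\lambda$ is the (local) learning coefficient that the LLC estimates, $m$ its multiplicity, and the $n$-independent constant $c$ is the piece that carries the log(local volume) information.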

We're now working on better methods for measuring these sorts of quantities, and on interpretability applications of them.

'Local volume' should also give a kind of upper bound on the LLC defined at finite noise though, right? Since as I understand it, what you're referring to as the volume of a behavioral region here is the same thing we define via the behavioural LLC at finite noise scale in this paper? And that's always going to be bigger or equal to the LLC taken at the same point at the same finite noise scale.

Let $V(\epsilon)$ be the volume of a behavioral region at cutoff $\epsilon$. Your behavioral LLC at finite noise scale is $\frac{d \log V(\epsilon)}{d \log \epsilon}$, which is invariant under rescaling $V$ by a constant. This information about the overall scale of $V$ seems important. What's the reason for throwing it out in SLT?

Because it’s actually not very important in the limit. The dimensionality of V is what matters. A 3-dimensional sphere in the loss landscape always takes up more of the prior than a 2-dimensional circle, no matter how large the area of the circle is and how small the volume of the sphere is.

In real life, parameters are finite precision floats, and so this tends to work out to an exponential rather than infinite size advantage. So constant prefactors can matter in principle. But they have to be really really big. 
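To spell out the scaling argument (a schematic illustration of the point above, not a calculation from the paper): inside a $D$-dimensional parameter space, thicken each region by the cutoff $\epsilon$ in its remaining directions. A $d_1$-dimensional region and a $d_2$-dimensional one ($d_1 > d_2$) then get prior mass roughly

\[
V_1(\epsilon) \sim C_1\,\epsilon^{\,D - d_1},
\qquad
V_2(\epsilon) \sim C_2\,\epsilon^{\,D - d_2},
\qquad
\frac{V_1(\epsilon)}{V_2(\epsilon)} \sim \frac{C_1}{C_2}\,\epsilon^{-(d_1 - d_2)} \to \infty
\quad \text{as } \epsilon \to 0,
\]

so the higher-dimensional region wins no matter what the constants $C_1, C_2$ are. With finite-precision floats, $\epsilon$ is effectively bounded below by the machine precision $\delta$, and the advantage becomes "only" a factor of order $\delta^{-(d_1 - d_2)}$, exponential in the dimension gap.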

Indeed, very interesting!

This also rhymes with / is related to ARC's work on the presumption of independence applied to neural networks (e.g., we might want to make "arguments" that explain the otherwise extremely "surprising" fact that a neural net has the weights it does).
