There was a sign error somewhere; you should be getting +λ and −(m−1). Regarding the integral from 0 to 1: since the powers involved are even, you can do that and double it rather than going from −1 to 1 (sorry if this doesn't map exactly onto your calculation, I didn't read all the details).
Ok, the sign error was just at the end: taking the −log of the result of the integral vs. taking the log. Fixed it, thanks.
Thanks, I'll look for the sign error!
I agree that K is symmetric around our point of integration, but the prior φ is not. We integrate over e^(−nK)·φ, which does not have to be symmetric, right?
Yes, good point, but if the prior is positive it drops out of the asymptotic as it doesn't contribute to the order of vanishing, so you can just ignore it from the start.
To see how much the minimal point contributes to the integral we can integrate it in its vicinity
I think you should be looking at the entire stable island, not just integrating from zero to one. I expect you could get a decent approximation with Lie transform perturbation theory, and this looks similar to the idea of macro-states in condensed matter physics, but I'm not knowledgeable in these areas.
You have a typo: the equation after Free Energy should start with
$$-\sum_{i=1}^{n}\log p(y_i|x_i,w)$$
Also, the third line should be a plus, not a minus.
Also, usually people use θ for model parameters (rather than w). I don't know the etymology, but game theorists use the same letter (for "types" = models of players).
Epistemic status: I wrote this post mostly for myself, in the process of understanding the theory behind Singular Learning Theory, based on the SLT Low lecture series. This is a proof sketch with the thoroughness level of an experimental physics lecture: enough to get the intuitions, but consult someone else for the details.
Background: Statistical Learning Theory
In the framework of statistical learning theory, we aim to establish a model that can predict an outcome y given an input x with a certain level of confidence. This prediction is typically governed by a set of parameters w, which need to be learned from the data. Let us begin by defining the primary components of this framework.
The Probabilistic Model
The probabilistic model p(y|x,w) represents the likelihood of observing the outcome y given the input x and the parameters w. This model is parametric, meaning that the form of the probability distribution is predefined, and the specific behavior of the model is determined by the parameters w.
The True Distribution
The true distribution q(y|x) is the actual distribution from which the data points are sampled. In practice, this distribution is not known and p(y|x,w) attempts to approximate it through the learning process.
The Parameters
The parameters w are the weights or coefficients within our model that determine its behavior. The goal of learning is to find the values of w that make p(y|x,w) as close as possible to the true distribution q(y|x).
The Data
The data D = {(x_i, y_i)} consist of pairs of input values x_i and their corresponding outcomes y_i. These data points are used to train the model, adjusting the parameters w to fit the observed outcomes.
The Negative Log-Likelihood
The negative log-likelihood is a measure of how well our model p(y|x,w) fits the data D. It is defined as the negative sum of the logarithms of the model probabilities assigned to the true outcomes y_i:
$$\mathrm{NLL}(w) = -\sum_{i=1}^{n} \log p(y_i|x_i,w),$$
where n is the number of data points in D. Minimizing the negative log-likelihood is equivalent to maximizing the likelihood of the data under the model, which is a central objective in the training of probabilistic models.
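To make this concrete, here is a minimal sketch in code (the Gaussian model p(y|x,w) = N(y; w·x, 1) and all variable names are my own illustrative choices, not part of the setup above):

```python
import numpy as np

def nll(w, xs, ys):
    """Negative log-likelihood for the toy model p(y|x,w) = N(y; w*x, 1).

    NLL(w) = -sum_i log p(y_i | x_i, w)
           = sum_i [ 0.5*log(2*pi) + 0.5*(y_i - w*x_i)**2 ]
    """
    residuals = ys - w * xs
    return 0.5 * len(xs) * np.log(2 * np.pi) + 0.5 * np.sum(residuals**2)

# toy data generated from a "true" distribution q(y|x) = N(y; 0.7*x, 1)
rng = np.random.default_rng(0)
xs = rng.normal(size=100)
ys = 0.7 * xs + rng.normal(size=100)

# minimizing the NLL over a grid of w values recovers something close to 0.7
ws = np.linspace(-2, 2, 401)
w_hat = ws[np.argmin([nll(w, xs, ys) for w in ws])]
print(w_hat)
```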
Prior over the Parameters
In a Bayesian framework, we incorporate our prior beliefs about the parameters w before observing the data through a prior distribution φ(w). This prior distribution encapsulates our assumptions about the values that w can take based on prior knowledge or intuition. For instance, a common choice is a Gaussian distribution, which encodes a preference for smaller (in magnitude) parameter values, promoting smoother model functions.
Posterior Distribution
Once we have observed the data D, we can update our belief about the parameters w using Bayes' theorem. The updated belief is captured by the posterior distribution, which combines the likelihood of the data given the parameters with our prior beliefs about the parameters. The posterior distribution is defined as:
$$P(w|D) = \frac{1}{Z_n} \exp\!\big(-\mathrm{NLL}(w)\big)\, \varphi(w),$$
where NLL(w) is the negative log-likelihood of the parameters given the data, φ(w) is the prior over the parameters, and Z_n is the normalization constant, also known as the partition function. The partition function ensures that the posterior distribution is a valid probability distribution by integrating (or summing) over all possible values of w:
$$Z_n = \int \exp\!\big(-\mathrm{NLL}(w)\big)\, \varphi(w)\, dw.$$
The posterior distribution reflects how likely different parameter values are after taking into account the observed data and our prior beliefs.
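Continuing the toy model above, here is a rough sketch of how the posterior and the partition function can be computed numerically for a one-dimensional parameter (the grid-based normalization and the Gaussian prior width are illustrative choices of mine and only work in low dimensions):

```python
import numpy as np

def log_prior(w, sigma=1.0):
    # Gaussian prior phi(w) = N(w; 0, sigma^2), favouring small |w|
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * w**2 / sigma**2

def nll(w, xs, ys):
    # negative log-likelihood of the toy model p(y|x,w) = N(y; w*x, 1)
    return 0.5 * len(xs) * np.log(2 * np.pi) + 0.5 * np.sum((ys - w * xs) ** 2)

rng = np.random.default_rng(0)
xs = rng.normal(size=100)
ys = 0.7 * xs + rng.normal(size=100)          # data from q(y|x) = N(y; 0.7*x, 1)

# unnormalized log-posterior  -NLL(w) + log phi(w)  on a grid of w values
ws = np.linspace(-2.0, 2.0, 2001)
dw = ws[1] - ws[0]
log_unnorm = np.array([-nll(w, xs, ys) + log_prior(w) for w in ws])

# Z_n = integral of exp(-NLL(w)) phi(w) dw, computed in log-space to avoid underflow
log_Zn = np.log(np.sum(np.exp(log_unnorm - log_unnorm.max())) * dw) + log_unnorm.max()
posterior = np.exp(log_unnorm - log_Zn)       # P(w|D), a normalized density over the grid
print(np.sum(posterior) * dw)                  # ~ 1.0
```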
Free Energy
This formulation looks like the Boltzmann distribution from statistical physics. That distribution also describes a probability: the probability of finding a system in a particular microstate is proportional to e^{−E}, with E being the energy (assuming here that the temperature is 1). This formalism can be extended to macrostates by replacing the energy with a free energy, which takes into account that macrostates containing many microstates are more likely.
We can do a similar step with the posterior over our model weights, by considering neighborhoods around local minima as macrostates. This opens up the question of what the correct formula for the free energy of these macrostates is, so that we can make predictions about the behavior of learning machines.
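To spell out the physics analogy (standard bookkeeping at temperature 1; the microstate energies E_s and the macrostate label A are my own illustrative notation): the probability of a macrostate is the summed Boltzmann weight of its microstates, and its free energy is minus the log of that sum, which splits into an average energy minus an entropy term:
$$P(A) = \frac{\sum_{s\in A} e^{-E_s}}{Z} =: \frac{e^{-F_A}}{Z}, \qquad F_A = -\log\!\sum_{s\in A} e^{-E_s} = \langle E\rangle_A - S_A,$$
where ⟨E⟩_A and S_A are the mean energy and the entropy of the Boltzmann distribution restricted to A. This is the same structure we now set up for neighborhoods in parameter space.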
First, let us consider the case of large datasets. In that case, we can replace the negative log-likelihood with n times its average:
$$\mathrm{NLL}(w) = -\sum_{i=1}^{n}\log p(y_i|x_i,w) \approx -n\int \log\big(p(y|x,w)\big)\, q(y|x)\, q(x)\, dy\, dx$$
$$= n\left(\int \log\!\left(\frac{q(y|x)}{p(y|x,w)}\right) q(y|x)\, q(x)\, dy\, dx \;+\; \left(-\int \log\big(q(y|x)\big)\, q(y|x)\, q(x)\, dy\, dx\right)\right)$$
$$= n\,\big\langle \mathrm{KL}(q\,\|\,p_w) + S(q)\big\rangle_{q(x)}$$
We see that we can decompose the negative log-likelihood into the number of data points n, the expected Kullback–Leibler divergence between the true distribution and the model, and the expected entropy of the true distribution. As we see below, the expected entropy term does not contribute to the posterior of the parameters:
$$P(w|D) = \frac{1}{Z_n}\exp\!\big(-\mathrm{NLL}(w)\big)\,\varphi(w) = \frac{\exp\!\big(-\mathrm{NLL}(w)\big)\,\varphi(w)}{\int \exp\!\big(-\mathrm{NLL}(w)\big)\,\varphi(w)\, dw} = \frac{\exp\!\big(-n\langle \mathrm{KL}(q\|p_w) + S(q)\rangle_{q(x)}\big)\,\varphi(w)}{\int \exp\!\big(-n\langle \mathrm{KL}(q\|p_w) + S(q)\rangle_{q(x)}\big)\,\varphi(w)\, dw} = \frac{\exp\!\big(-n\langle \mathrm{KL}(q\|p_w)\rangle_{q(x)}\big)\,\varphi(w)}{\int \exp\!\big(-n\langle \mathrm{KL}(q\|p_w)\rangle_{q(x)}\big)\,\varphi(w)\, dw},$$
since the entropy term does not depend on w and cancels between numerator and denominator.
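As a numerical sanity check of this decomposition (again with the toy Gaussian model; the concrete q and p_w below are my own choices), the average NLL per data point approaches ⟨KL(q∥p_w)⟩ + S(q), and the entropy piece is the same for every w:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true, w = 0.7, 1.2          # true parameter and an arbitrary model parameter
n = 200_000                    # large sample so the empirical average is close to the expectation

xs = rng.normal(size=n)                     # q(x) = N(0, 1)
ys = w_true * xs + rng.normal(size=n)       # q(y|x) = N(y; 0.7 x, 1)

# empirical average NLL per data point for p(y|x,w) = N(y; w x, 1)
avg_nll = np.mean(0.5 * np.log(2 * np.pi) + 0.5 * (ys - w * xs) ** 2)

# analytic pieces: <KL(q || p_w)>_{q(x)} and the conditional entropy S(q)
kl = 0.5 * (w_true - w) ** 2                # E_x[0.5 (0.7 - w)^2 x^2] with E[x^2] = 1
entropy = 0.5 * np.log(2 * np.pi * np.e)    # entropy of a unit-variance Gaussian, independent of w

print(avg_nll, kl + entropy)                # the two numbers agree up to sampling noise
```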
The macrostates, i.e. the regions in parameter space we are now interested in analyzing, are the vicinities of local minima in the loss landscape. Let us assume we decompose our parameter space into macrostates W = ⋃_α W_α.
$$P(w\in W_\alpha) = \int_{W_\alpha} p(w|D)\, dw = \frac{\int_{W_\alpha} \frac{1}{Z_n}\exp\!\big(-\mathrm{NLL}(w)\big)\,\varphi(w)\, dw}{\sum_\beta \int_{W_\beta} \frac{1}{Z_n}\exp\!\big(-\mathrm{NLL}(w)\big)\,\varphi(w)\, dw} = \frac{\int_{W_\alpha} \exp\!\big(-n\langle \mathrm{KL}(q\|p_w)\rangle_{q(x)}\big)\,\varphi(w)\, dw}{\sum_\beta \int_{W_\beta} \exp\!\big(-n\langle \mathrm{KL}(q\|p_w)\rangle_{q(x)}\big)\,\varphi(w)\, dw} = \frac{\exp\big(-F(W_\alpha,n)\big)}{\sum_\beta \exp\big(-F(W_\beta,n)\big)},$$
$$F(W_\alpha,n) = -\log\left(\int_{W_\alpha} \exp\!\big(-n\langle \mathrm{KL}(q\|p_w)\rangle_{q(x)}\big)\,\varphi(w)\, dw\right).$$
We call this F the free energy of this region, as it serves the same function as the free energy in statistical physics. We have constructed W_α such that it only contains one minimum of the expected Kullback–Leibler divergence (from here on we abbreviate ⟨KL(q∥p_w)⟩_{q(x)} as K(w)). These minima don't have to be isolated points but can be extended sets within W_α.
Since we assume that K(w) is smooth, it can locally be approximated by its Taylor series at any point. How the leading-order terms of the Taylor series look at a minimum depends on the geometry of the minimum. Let's look at examples of what that might look like in the figure above. In a) we have a simple minimum point, so the Taylor series of K at this point might look like $x^2 + y^2$, going up in each direction. In b) our minimum is a one-dimensional submanifold, so the Taylor expansion of K will look like $y^2$, increasing in one direction but staying flat when going along the other. The situation in c) looks almost the same, except in the center, where the line crosses itself. There the Taylor expansion will look like $x^2 \cdot y^2$, locally staying flat in either direction. Points like these, where the directions you can go in while staying on the set suddenly change, are called singularities.
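The difference between case a) and case c) can be seen numerically (a rough sketch with a flat prior on the unit square, both my own simplifications): the unnormalized posterior mass near a singular minimum like $x^2 y^2$ shrinks much more slowly with n than near a regular minimum like $x^2 + y^2$:

```python
import numpy as np

def mass(K, n, grid=2000):
    # midpoint-rule integral of exp(-n * K(x, y)) over [0, 1]^2 (flat prior),
    # i.e. the unnormalized posterior mass near the minimum
    x = (np.arange(grid) + 0.5) / grid
    X, Y = np.meshgrid(x, x)
    return np.sum(np.exp(-n * K(X, Y))) / grid**2

regular  = lambda x, y: x**2 + y**2    # case a): isolated minimum point
singular = lambda x, y: x**2 * y**2    # case c): two crossing lines of minima

for n in [10, 100, 1000]:
    print(n, mass(regular, n), mass(singular, n))
# the regular case shrinks roughly like 1/n, the singular one only like log(n)/sqrt(n)
```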
Let us now calculate the free energy around a point w^* where K has a minimum. In general, the leading-order term of K at a minimum looks like this:
$$K(w^* + x) = K_{\min} + a\prod_i x_i^{2k_i} =: K_{\min} + a\,\vec{x}^{\,2\vec{k}}.$$
Here K_min is the value of the KL divergence at its minimum.
To get it into this form, we may have to make a change of coordinates for x (for example when we are at a singularity where the different parts of the minimum come in at an angle). A process called blowup guarantees that we are always able to find such coordinates. Since we are at a minimum, we know that all leading-order exponents are even.
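A minimal example of such a change of coordinates (my own toy case, much simpler than a general blowup): if the minimum is the line x = y, so that locally K(x,y) = (x−y)², then the coordinates u = x−y, v = y bring K into the normal form above:
$$K = (x-y)^2 \;\xrightarrow{\;u = x - y,\; v = y\;}\; K = u^2 = a\, u^{2}\, v^{0}, \qquad a = 1,\; \vec{k} = (1, 0),$$
so one exponent is 2 and the other is 0, i.e. K stays flat along the minimum, as in case b) above.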
Let's also approximate our prior over the weights φ(w^* + x) at this point, in the same coordinates, by its leading-order term:
$$\varphi(w^* + x) \approx b\prod_i x_i^{h_i} =: b\,\vec{x}^{\,\vec{h}}.$$
For large n, the contribution to the integral from any point far away from the minima gets exponentially suppressed. To see how much the minimal point w^* contributes to the integral, we can integrate over its vicinity. Here we choose to integrate from 0 to 1.
Why not from −1 to 1?[1]
$$F(w^*, n) = -\log\left(\int_0^1 \exp\!\Big(-n\big(K_{\min} + a\,\vec{x}^{\,2\vec{k}}\big)\Big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x}\right)$$
The first thing we can do is pull out the K_min term, as it does not depend on x:
$$F(w^*, n) = n K_{\min} - \log\left(\int_0^1 \exp\!\big(-n a\,\vec{x}^{\,2\vec{k}}\big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x}\right)$$
To solve the remaining integral, we first use a Laplace transform and then a Mellin transform, as this brings the integral into a form that is easier to solve. Concretely, the integral is the Laplace transform of the "density of states"
$$v(t) := \int_0^1 \delta\!\big(t - a\,\vec{x}^{\,2\vec{k}}\big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x}, \qquad \int_0^1 \exp\!\big(-n a\,\vec{x}^{\,2\vec{k}}\big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x} = \int_0^\infty e^{-nt}\, v(t)\, dt,$$
and the Mellin transform of v can be computed in closed form:
$$M(v)(z) = \int_0^\infty v(t)\, t^{z}\, dt = \int_0^1 \big(a\,\vec{x}^{\,2\vec{k}}\big)^{z}\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x} = a^{z} b \prod_i \int_0^1 x_i^{2 z k_i + h_i}\, dx_i = a^{z} b \prod_i \frac{1}{2 z k_i + h_i + 1}.$$
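A worked one-dimensional example (my own sanity check, with a = b = 1, k = 1, h = 0, i.e. K = x² near the minimum and a locally constant prior):
$$M(v)(z) = \int_0^1 (x^2)^z\, dx = \frac{1}{2z+1},$$
which has a single simple pole at z = −1/2, so λ = 1/2 and m = 1. This matches the direct computation $\int_0^1 e^{-nx^2}\, dx \approx \tfrac{1}{2}\sqrt{\pi/n} \propto n^{-1/2}$ for large n.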
Next, we group the dimensions i into groups I_α such that within each group the value λ_i = (h_i + 1)/(2 k_i) is the same; call this common value λ_α and let m_α = |I_α| be the size of the group. Later in the derivation, it will turn out that only the smallest λ_α contributes for large n. Noting that 2 z k_i + h_i + 1 = 2 k_i (z + λ_i), we can rewrite the Mellin transform as:
$$M(v)(z) = a^{z} b \prod_\alpha \prod_{i\in I_\alpha} \frac{1}{2 k_i (z + \lambda_\alpha)}$$
Now we go backward, and reverse the Mellin and the Laplace transform:
$$v(t) = \frac{1}{2\pi i}\int_{c - i\infty}^{c + i\infty} a^{z} b \prod_\alpha \prod_{i\in I_\alpha} \frac{1}{2 k_i (z + \lambda_\alpha)}\; t^{-z-1}\, dz = \frac{b}{2\pi i\, t}\int_{c - i\infty}^{c + i\infty} \frac{e^{z\log(a/t)}}{\prod_\alpha \prod_{i\in I_\alpha} 2 k_i (z + \lambda_\alpha)}\, dz,$$
where the contour runs to the right of all poles.
We can solve this integral with the residue theorem. For t > a we close the contour over the positive half-plane; since all λ_α are positive, the poles at z = −λ_α lie in the negative half-plane, so we enclose no residues and the integral comes out as zero. For a > t we close the contour over the negative half-plane, and each unique λ_α, which appears m_α times, contributes a pole of order m_α:
$$v(t) \propto \frac{b}{t}\sum_\alpha \frac{1}{(m_\alpha - 1)!}\left(\frac{d}{dz}\right)^{m_\alpha - 1} e^{z\log(a/t)}\,\Bigg|_{z=-\lambda_\alpha} = \begin{cases}\dfrac{b}{t}\displaystyle\sum_\alpha \frac{\log(a/t)^{m_\alpha - 1}}{(m_\alpha - 1)!}\, e^{-\lambda_\alpha \log(a/t)}, & \text{if } a > t,\\[1ex] 0, & \text{if } a \le t,\end{cases}$$
where we have dropped constant factors (such as the ∏_{i∈I_α} 1/(2k_i) and the regular factors evaluated at z = −λ_α) that do not affect the asymptotics in n.
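Continuing the one-dimensional example (K = x², flat prior, a = b = 1), the density of states can be computed directly and shows the predicted $t^{\lambda - 1}$ behavior:
$$v(t) = \int_0^1 \delta(t - x^2)\, dx = \frac{1}{2\sqrt{t}} \quad \text{for } 0 < t < 1, \qquad v(t) = 0 \quad \text{for } t \ge 1,$$
i.e. $v(t) \propto t^{\lambda - 1}$ with λ = 1/2 and m = 1, and it indeed vanishes for t ≥ a = 1.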
When we now reverse the Laplace transform, i.e. compute ∫_0^∞ e^(−nt) v(t) dt, we only have to integrate up to a, since v(t) vanishes beyond that point.
$$\int_0^1 \exp\!\big(-n a\,\vec{x}^{\,2\vec{k}}\big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x} \propto \int_0^a b \sum_\alpha \frac{a^{-\lambda_\alpha}\log(a/t)^{m_\alpha - 1}}{(m_\alpha - 1)!}\, t^{\lambda_\alpha - 1}\, e^{-nt}\, dt$$
Substituting τ = n t / a:
$$= b \sum_\alpha \frac{a^{-\lambda_\alpha}}{(m_\alpha - 1)!}\int_0^n \log\!\left(\frac{n}{\tau}\right)^{m_\alpha - 1}\left(\frac{a\tau}{n}\right)^{\lambda_\alpha - 1} e^{-a\tau}\, \frac{a}{n}\, d\tau = b \sum_\alpha \frac{n^{-\lambda_\alpha}}{(m_\alpha - 1)!}\int_0^n \tau^{\lambda_\alpha - 1}\, e^{-a\tau}\,\big(\log(n) - \log(\tau)\big)^{m_\alpha - 1}\, d\tau$$
$$= b \sum_\alpha \frac{n^{-\lambda_\alpha}}{(m_\alpha - 1)!}\sum_{j=0}^{m_\alpha - 1}\binom{m_\alpha - 1}{j}\log(n)^{m_\alpha - 1 - j}\int_0^n \tau^{\lambda_\alpha - 1}\, e^{-a\tau}\,\big(-\log(\tau)\big)^{j}\, d\tau$$
Each of these remaining integrals converges to a constant as n goes to infinity (for j = 0 it tends to Γ(λ_α)/a^{λ_α}). So the term of highest order in n is the one with the smallest λ_α and j = 0. Calling this smallest value λ_{α^*} and its multiplicity m_{α^*}:
$$\int_0^1 \exp\!\big(-n a\,\vec{x}^{\,2\vec{k}}\big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x} \;\propto\; n^{-\lambda_{\alpha^*}}\log(n)^{m_{\alpha^*} - 1} + O\!\big(n^{-\lambda_{\alpha^*}}\log(n)^{m_{\alpha^*} - 2}\big)$$
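A rough numerical check of the leading $n^{-\lambda}$ behavior (my own sketch, in one dimension with a = b = 1 and h = 0, so that λ = 1/(2k) and m = 1): fitting the slope of log Z against log n recovers λ:

```python
import numpy as np

def Z(n, k, grid=200_000):
    # midpoint-rule approximation of  int_0^1 exp(-n * x^(2k)) dx  (flat prior, a = b = 1)
    x = (np.arange(grid) + 0.5) / grid
    return np.sum(np.exp(-n * x ** (2 * k))) / grid

# for K(x) = x^(2k) with h = 0 the formula predicts Z(n) ~ n^(-lambda), lambda = 1/(2k)
for k in [1, 2, 3]:
    n1, n2 = 1e4, 1e6
    slope = -(np.log(Z(n2, k)) - np.log(Z(n1, k))) / (np.log(n2) - np.log(n1))
    print(k, slope, 1 / (2 * k))   # the fitted exponent matches lambda = 1/(2k)
```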
We can combine this into the formula for the free energy:
$$F(w^*, n) = n K_{\min} + \lambda\log(n) - (m - 1)\log(\log(n)) + O(1),$$
where λ = λ_{α^*} is the smallest of the λ_α and m = m_{α^*} is its multiplicity.
This expression tells us how much mass of the posterior distribution is concentrated around certain points in parameter space. The leading-order term is the KL divergence between the training distribution and the distribution of our model; intuitively, it corresponds to our training loss. In the second term, λ is smaller the more degenerate the minimum is; intuitively, it corresponds to symmetries in the algorithm that the network implements. The third term scales with the number of parameter directions that share this smallest λ; intuitively, this gives a set of parameters an extra advantage when multiple parameter directions independently lead to minimal loss.
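To illustrate how the formula can be used (a toy comparison with made-up numbers, not a real model): a more singular minimum (smaller λ) with slightly worse loss can dominate the posterior at small n, while the better-fitting minimum takes over once n is large enough:

```python
import numpy as np

def free_energy(n, K_min, lam, m):
    # F(w*, n) ~ n*K_min + lam*log(n) - (m-1)*log(log(n)), dropping the O(1) terms
    return n * K_min + lam * np.log(n) - (m - 1) * np.log(np.log(n))

# two hypothetical minima: A fits the data slightly better, B is more singular (smaller lambda)
A = dict(K_min=0.010, lam=2.0, m=1)
B = dict(K_min=0.011, lam=0.5, m=2)

for n in [100, 1000, 10_000, 100_000]:
    FA, FB = free_energy(n, **A), free_energy(n, **B)
    print(n, "A" if FA < FB else "B")   # the macrostate with lower F dominates the posterior
```

With these made-up numbers, B dominates for small n and A takes over for large n, which is the kind of prediction about learning machines the free energy formula is meant to give.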
I would have expected that we integrate over the whole vicinity:
$$F(w^*, n) = -\log\left(\int_{-1}^{1} \exp\!\Big(-n\big(K_{\min} + a\,\vec{x}^{\,2\vec{k}}\big)\Big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x}\right)$$
This, however, makes the derivation quite ugly:
$$v(t) = \int_{-1}^{1} \delta\!\big(t - a\,\vec{x}^{\,2\vec{k}}\big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x}, \qquad \int_{-1}^{1} \exp\!\big(-n a\,\vec{x}^{\,2\vec{k}}\big)\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x} = \int_0^\infty e^{-nt}\, v(t)\, dt,$$
$$M(v)(z) = \int_{-1}^{1} \big(a\,\vec{x}^{\,2\vec{k}}\big)^{z}\, b\,\vec{x}^{\,\vec{h}}\, d\vec{x} = a^{z} b \prod_i \int_{-1}^{1} x_i^{2 z k_i + h_i}\, dx_i = a^{z} b \prod_i \frac{1 - (-1)^{2 z k_i + h_i + 1}}{2 z k_i + h_i + 1}$$
$$v(t) = \frac{b}{2\pi i}\oint \frac{a^{z}\, t^{-z-1}}{\prod_i (2 z k_i + h_i + 1)} \prod_i\Big(1 - (-1)^{2 z k_i + h_i + 1}\Big)\, dz = \frac{b}{2\pi i\, t}\int_{c-i\infty}^{c+i\infty} \frac{e^{z\log(a/t)}}{\prod_\alpha \prod_{i\in I_\alpha} 2 k_i (z + \lambda_\alpha)} \prod_i\Big(1 - (-1)^{2 k_i (z + \lambda_i)}\Big)\, dz$$
$$\propto \frac{b}{t}\sum_\alpha \frac{1}{(m_\alpha - 1)!}\left(\frac{d}{dz}\right)^{m_\alpha - 1}\left[e^{z\log(a/t)}\prod_{i\in I_\alpha}\Big(1 - (-1)^{2 k_i (z + \lambda_\alpha)}\Big)\right]\Bigg|_{z=-\lambda_\alpha} = \begin{cases}\dfrac{b}{t}\displaystyle\sum_\alpha \frac{e^{-\lambda_\alpha\log(a/t)}}{(m_\alpha - 1)!}\sum_{j}\binom{m_\alpha - 1}{j}\log(a/t)^{m_\alpha - 1 - j}\,(2\pi i\, k_i)^{j}, & \text{if } a > t,\\[1ex] 0, & \text{if } a \le t.\end{cases}$$
We can proceed with the rest of the calculation for each summand j, just with a smaller effective m_α. This does not change anything about the leading terms that are relevant for the free energy formula.
I am still confused about why the lecture went from 0 to 1 and would be grateful for an explanation in the comments.