If you're interested in approximating Hessian-vector products efficiently for frontier-size models, this recent Anthropic paper describes a mechanism for doing so.
You should train both a feedforward network and a CNN on ImageNet image classification, to see whether the Hessian of the CNN ends up closer to the identity after training than that of the feedforward network, because of the CNN's image-understanding priors.
Given that this method returns a numeric matrix, it must be either the Hessian evaluated at a single point or the average Hessian over many points. Is the result the Hessian averaged over all training data? And is this average useful, rather than just cancelling out high and low Hessian values?
The method described does not explicitly compute the full Hessian matrix. Instead, it computes the top eigenvalues and eigenvectors of the Hessian. The implementation accumulates a large batch from a dataloader by concatenating n_batches of the typical batch size. This is an approximation intended to estimate the true loss/gradient on the complete dataset more closely. If you have a large, high-variance dataset, averaging gradients over multiple batches might be better, because the loss computed from a single accumulated batch may not be adequately representative of the entire dataset's true loss.
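For concreteness, here is a minimal sketch of that accumulation step; the accumulate_batch helper and its argument names are my own illustration, not the post's exact code:

```python
import torch

# Build one large batch by concatenating n_batches ordinary batches
# from the dataloader (hypothetical helper, for illustration only).
def accumulate_batch(dataloader, n_batches):
    xs, ys = [], []
    for i, (x, y) in enumerate(dataloader):
        if i >= n_batches:
            break
        xs.append(x)
        ys.append(y)
    return torch.cat(xs), torch.cat(ys)
```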
The idea and description of this method are taken entirely from John Wentworth's Applied Linear Algebra lecture series, specifically Lecture 2.
Training deep neural networks involves navigating high-dimensional loss landscapes. Understanding the curvature of these landscapes via the Hessian of the loss function can provide insights into the optimization dynamics. However, computing the full Hessian can be prohibitively expensive. In this post, I describe a method (described by John Wentworth in his lecture series) for efficiently computing the top eigenvalues and eigenvectors of the loss Hessian using PyTorch's autograd and SciPy's sparse linear algebra utilities.
Hessian-vector product
The core idea hinges on the Hessian-vector product (HVP). Given a vector v, the HVP is defined as H⋅v, where H is the Hessian matrix. This product can be computed efficiently using automatic differentiation, without ever forming the full Hessian: compute the gradient of the loss with respect to the parameters while keeping the computation graph, take the dot product of that gradient with v, and differentiate the resulting scalar with respect to the parameters a second time; the result is exactly H⋅v.
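A minimal sketch of this double-backward trick in PyTorch (the hvp helper and its argument names are illustrative, not the post's exact code):

```python
import torch

def hvp(loss, params, v):
    # First backward pass: gradient of the loss w.r.t. the parameters,
    # with create_graph=True so we can differentiate through it again.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    # Second backward pass: d(g·v)/dθ = H·v.
    hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
    return torch.cat([h.reshape(-1) for h in hv])

# Usage sketch:
# params = [p for p in model.parameters() if p.requires_grad]
# loss = loss_fn(model(x), y)
# v = torch.randn(sum(p.numel() for p in params))
# Hv = hvp(loss, params, v)
```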
Lanczos Iteration and eigsh
eigsh from scipy.sparse.linalg implements the Lanczos iteration (via ARPACK), which finds the top eigenvalues and eigenvectors of a symmetric matrix. Its main computational primitive is the matrix-vector product, which makes it well suited to large matrices where full matrix factorizations are infeasible.
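A toy example of eigsh on a small symmetric matrix with known eigenvalues (purely illustrative; in the Hessian setting the matrix is never materialised):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# Symmetric matrix with known eigenvalues 1, 2, 3, 4, 5, 10.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 10.0])

# Ask for the two largest-magnitude eigenvalues and their eigenvectors.
eigenvalues, eigenvectors = eigsh(A, k=2, which="LM")
print(eigenvalues)  # approximately [5., 10.]
```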
Using LinearOperator
To interface with eigsh, we need a mechanism to represent our Hessian as a linear operator that supports matrix-vector multiplication. SciPy's LinearOperator serves this purpose, allowing us to define a matrix implicitly by its action on vectors without forming the matrix explicitly.
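A minimal illustration of the idea, using a diagonal operator that is never materialised (the diagonal and its size are hypothetical):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 1000
d = np.linspace(1.0, 100.0, n)  # hypothetical diagonal of an n x n matrix

# Define the matrix only through its action on a vector.
def matvec(v):
    return d * v

op = LinearOperator((n, n), matvec=matvec, dtype=np.float64)

# eigsh only ever calls matvec, so the n x n matrix is never formed.
top_eigenvalues, top_eigenvectors = eigsh(op, k=3, which="LM")
print(top_eigenvalues)  # close to the three largest diagonal entries
```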
Implementation
Given a PyTorch model, loss function, and training data, the approach is to:
1. Accumulate a large batch by concatenating several ordinary batches from the dataloader.
2. Compute the loss on that batch and its gradient with create_graph=True.
3. Define a Hessian-vector product function via a second backward pass.
4. Wrap that function in a SciPy LinearOperator.
5. Pass the operator to eigsh to obtain the top eigenvalues and eigenvectors.
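Here is a minimal sketch of how these steps might fit together, assuming a standard classification setup; the function name top_hessian_eigs, the k and n_batches arguments, and the overall structure are my own illustration rather than the post's exact code:

```python
import numpy as np
import torch
from scipy.sparse.linalg import LinearOperator, eigsh

def top_hessian_eigs(model, loss_fn, dataloader, k=10, n_batches=8, device="cpu"):
    """Estimate the top-k eigenvalues/eigenvectors of the loss Hessian."""
    model = model.to(device)
    params = [p for p in model.parameters() if p.requires_grad]
    n_params = sum(p.numel() for p in params)

    # 1. Accumulate a large batch by concatenating n_batches ordinary batches.
    xs, ys = [], []
    for i, (x, y) in enumerate(dataloader):
        if i >= n_batches:
            break
        xs.append(x)
        ys.append(y)
    x, y = torch.cat(xs).to(device), torch.cat(ys).to(device)

    # 2. Loss and gradient, keeping the graph so we can differentiate again.
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])

    # 3. Hessian-vector product, exposed as a NumPy-in / NumPy-out function.
    def hvp(v_np):
        v = torch.as_tensor(v_np, dtype=flat_grad.dtype, device=device)
        hv = torch.autograd.grad(flat_grad @ v, params, retain_graph=True)
        flat_hv = torch.cat([h.reshape(-1) for h in hv])
        return flat_hv.detach().cpu().numpy().astype(np.float64)

    # 4. Wrap the HVP in a LinearOperator and run Lanczos via eigsh.
    op = LinearOperator((n_params, n_params), matvec=hvp, dtype=np.float64)
    eigenvalues, eigenvectors = eigsh(op, k=k, which="LM")
    return eigenvalues, eigenvectors
```

Note that which="LM" selects eigenvalues by magnitude, so directions of large negative curvature would also appear if present.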
Appendix: Python code
You can also find this code as a GitHub gist here.