Singular Learning Theory
Singular Learning Theory (SLT) is a mathematical framework that extends and improves upon traditional statistical learning theory using techniques from algebraic geometry, Bayesian statistics, and statistical physics. It shows great promise for providing mathematical foundations for modern machine learning.
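As a rough sketch of the central result (the precise statement and its regularity conditions are in the gray book listed below), Watanabe shows that for a model with $d$ parameters the Bayesian free energy (negative log marginal likelihood) $F_n$ over $n$ samples expands asymptotically as

$$
F_n = n L_n(w_0) + \lambda \log n - (m-1)\log\log n + O_p(1),
$$

where $L_n(w_0)$ is the empirical loss at an optimal parameter $w_0$, $\lambda$ is the real log canonical threshold (RLCT), and $m$ is its multiplicity. For regular models $\lambda = d/2$, but for singular models such as neural networks $\lambda$ can be much smaller, so generalization behaviour is governed by the geometry of the singularities rather than by the raw parameter count.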
From the metauni seminar on SLT:
The canonical references are Watanabe’s two textbooks:
- The gray book: S. Watanabe, "Algebraic Geometry and Statistical Learning Theory", 2009.
- The green book: S. Watanabe, "Mathematical Theory of Bayesian Statistics", 2018.
Some other introductory references:
- Matt Farrugia-Roberts’ MSc thesis, October 2022, Structural Degeneracy in Neural Networks.
- Spencer Wong’s MSc thesis, May 2022, From Analytic to Algebraic: The Algebraic Geometry of Two Layer Neural Networks.
- Liam Carroll’s MSc thesis, October 2021, Phase Transitions in Neural Networks.
- Tom Waring’s MSc thesis, October 2021, Geometric Perspectives on Program Synthesis and Semantics.
- S. Wei, D. Murfet, M. Gong, H. Li, J. Gell-Redman, T. Quella, "Deep learning is singular, and that’s good", 2022.
- Edmund Lau’s blog Probably Singular.
- Shaowei Lin’s PhD thesis, 2011, Algebraic Methods for Evaluating Integrals in Bayesian Statistics.
- Jesse Hoogland’s blog posts: general intro to SLT, and effects of singularities on dynamics.
- Announcement of the devInterp agenda.