This is a linkpost for https://youtu.be/O6eqvb5Sw3o

Talk by Tess E. Smidt (MIT). Abstract:

Atomic systems (molecules, crystals, proteins, etc.) are naturally represented by a set of coordinates in 3D space labeled by atom type. This is a challenging representation to use for machine learning because the coordinates are sensitive to 3D rotations, translations, and inversions (the symmetries of 3D Euclidean space). In this talk I’ll give an overview of Euclidean invariance and equivariance in machine learning for atomic systems. Then, I’ll share some recent applications of these methods on a variety of atomistic modeling tasks (ab initio molecular dynamics, prediction of crystal properties, and scaling of electron density predictions). Finally, I’ll explore open questions in expressivity, data-efficiency, and trainability of methods leveraging invariance and equivariance.
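To make the invariance/equivariance distinction in the abstract concrete, here is a minimal NumPy sketch (not from the talk; the toy functions and names are my own illustration). A scalar property like energy should be invariant under a rotation of the atomic coordinates, while a vector property like a dipole should rotate along with them (equivariance).

```python
import numpy as np

def toy_equivariant_model(positions, charges):
    # Charge-weighted sum of position vectors: the output rotates with the
    # input, so this toy "dipole-like" prediction is rotation-equivariant.
    return (charges[:, None] * positions).sum(axis=0)

def toy_invariant_model(positions):
    # Sum of pairwise distances: unchanged by any rotation of the inputs,
    # so this toy "energy-like" prediction is rotation-invariant.
    diffs = positions[:, None, :] - positions[None, :, :]
    return np.linalg.norm(diffs, axis=-1).sum()

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))   # 5 atoms with 3D coordinates
q = rng.normal(size=5)        # toy per-atom scalars ("charges")

# Build a random proper rotation matrix from a QR decomposition.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q

# Equivariance: rotating the input rotates the vector output the same way.
assert np.allclose(toy_equivariant_model(x @ R.T, q),
                   toy_equivariant_model(x, q) @ R.T)
# Invariance: the scalar output is unchanged by the rotation.
assert np.allclose(toy_invariant_model(x @ R.T), toy_invariant_model(x))
print("equivariance and invariance checks passed")
```

The methods in the talk build these symmetry properties into the network architecture itself (rather than relying on hand-picked features like the ones above), so every layer transforms predictably under rotations, translations, and inversions.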

I anticipate this sort of approach could be a big deal for macroscopic modeling as well. In particular, if multiscale representation becomes commonplace, this sort of physically based model would become the default. In my view, language models are a temporary research artifact of their time and lack true causal understanding; the key components of systems that can actually work well will include this sort of equivariance and fundamental 3D knowledge.
