Just wanted to point out that using machine learning to produce formal proofs is not new; this work has been ongoing in both the machine learning and programming languages communities. Most of it has been done in the Coq or Isabelle/HOL proof languages, with a few things done in Lean. As far as I can tell, much of that other work is considerably farther along in terms of actually producing proofs of useful things: Proverbot9001 (proverbot9001.ucsd.edu), which uses Coq, proves close to 25% of the theorems in a verified compiler, a better example of the kind of proofs that are actually hard and useful. TacTok (https://dl.acm.org/doi/10.1145/3428299) is another interesting work in the area, which improves on the tree-based model of CoqGym (cited in the paper).
Disclaimer: I'm the author of Proverbot9001.