Recently, an automated reasoning system was developed that solves IMO problems. It is a very impressive and exciting advance, but it must be noted that IMO problems are meant to be solved in two days.
It has also been proposed to use this advance to largely automate formal verification of practical systems, including advanced AI systems. That is a very tall order, so some kind of intermediate benchmark would be useful.
I suggest CompCert as such a benchmark. It is a formally verified compiler, with a manually written proof, that compiles a large and practical subset of the C programming language to (among other targets) x86 assembly, with enough optimization that performance is competitive with GCC at -O1. Whether its formal specification corresponds to the C standard and the Intel architecture manual, both written in English, is a fascinating question, but more than a decade of testing since the completion of its proof suggests it does. The initial proof took three years (2005 to 2008), and Xavier Leroy won the ACM Software System Award for this work. So the task is an R&D-level task.
The challenge is this: an automated reasoning system (like AlphaProof) is given as input the formal specification of the C programming language standard, the formal specification of the Intel architecture manual, and CompCert's compilation algorithm. The task is to prove the algorithm correct, by filling in all of the manually written proofs automatically. 99% automation with manually written hints (say, 10 pages of proof sketch, which happens to be the length of the CompCert overview paper, written over 10 days) would also be interesting.
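To make the shape of the task concrete: CompCert's actual proof is written in Coq and runs to roughly a hundred thousand lines, but the statement to be proved at each compilation pass has the same form as the following toy example (written here in Lean purely as an illustration; none of these definitions are CompCert's). A one-operator expression language is compiled to a stack machine, and the correctness theorem says that running compiled code is the same as pushing the source program's value:

```lean
-- Toy sketch (not CompCert): the shape of a semantic-preservation theorem.

inductive Expr where
  | const : Nat → Expr
  | plus  : Expr → Expr → Expr

-- Source semantics: evaluate an expression to a number.
def eval : Expr → Nat
  | .const n  => n
  | .plus a b => eval a + eval b

inductive Instr where
  | push : Nat → Instr
  | add  : Instr

-- Target semantics: a stack machine.
def exec : List Instr → List Nat → List Nat
  | [],            s           => s
  | .push n :: is, s           => exec is (n :: s)
  | .add    :: is, a :: b :: s => exec is (b + a :: s)
  | .add    :: is, s           => exec is s   -- stuck case; unreachable on compiled code

-- The compiler itself.
def compile : Expr → List Instr
  | .const n  => [.push n]
  | .plus a b => compile a ++ compile b ++ [.add]

-- Semantic preservation, generalized over the remaining program and stack
-- so that induction goes through.
theorem compile_correct (e : Expr) (is : List Instr) (s : List Nat) :
    exec (compile e ++ is) s = exec is (eval e :: s) := by
  induction e generalizing is s with
  | const n => rfl
  | plus a b iha ihb =>
    simp [compile, eval, exec, List.append_assoc, iha, ihb]
```

Here the whole proof is a dozen lines; CompCert proves the analogous statement against the full semantics of C and assembly, for each of about twenty compilation passes, which is exactly the manually written material the challenge asks the system to reproduce.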
In many ways this is a much easier task than what we ultimately want. A compilation algorithm known to be correct is given as input, and the real-to-formal translation is already done. The compiler was also designed and written to be verified. Compared to automatically (or semi-automatically, with 99% automation) fixing bugs in GCC, extending the formal specification to cover more of the language, and verifying correct software that was not designed for verification, like GCC, this is a toy task. But it is also a task that would take a researcher many months, not days. (Probably not years: since so much is given as input, this is a subset of the initial proof effort, and the field has advanced since 2008.)