Honestly, I think people are overestimating this. Some quick thoughts:
They already claimed once to be at a ~1200 Elo level in competitive programming on Codeforces, but in real competition settings it only reached, if I remember correctly, around ~500, after people found the account they used for testing. Also, their solution to one very hard problem was identical to a human competitor's submission, and that exact problem was likely already in their training data, so the training data was probably contaminated.
I'm not surprised.
The methods used to solve these "new" problems are, as I've already stated, highly amenable to brute-force approaches. It's more a computation problem than anything else (see [The Bitter Lesson](http://www.incompleteideas.net/IncIdeas/BitterLesson.html)). So again, I'm not surprised these kinds of problems got solved.
While these problems may be new, they rely on very similar methods to those used in previous competitive programming and Math Olympiad problems. I don't think the author has really looked into the specifics of how AlphaGeometry and AlphaProof came up with their solutions.