Yes, but parsing theorems (at least at the syntactic level) should be no harder than parsing English, and we already do lots of reasonably smart text mining, and even some reasonable natural-language translation.
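To make that concrete, here is a toy sketch of what purely syntactic parsing of theorem statements might look like; the grammar, token names, and statement format are all made up for illustration, not taken from any real corpus:

```python
import re

# Toy grammar (an illustrative assumption, not a real journal format):
#   stmt := ("forall" | "exists") VAR "." stmt | impl
#   impl := atom ("->" impl)?
#   atom := NAME "(" VAR ")"
TOKEN = re.compile(r"\s*(forall|exists|->|[().]|[A-Za-z_]\w*)")

def tokenize(text):
    tokens, pos = [], 0
    while pos < len(text):
        m = TOKEN.match(text, pos)
        if not m:
            raise SyntaxError(f"unexpected input at {pos}: {text[pos:]!r}")
        tokens.append(m.group(1))
        pos = m.end()
    return tokens

class Parser:
    def __init__(self, tokens):
        self.toks, self.i = tokens, 0

    def peek(self):
        return self.toks[self.i] if self.i < len(self.toks) else None

    def eat(self, expected=None):
        tok = self.peek()
        if tok is None or (expected and tok != expected):
            raise SyntaxError(f"expected {expected!r}, got {tok!r}")
        self.i += 1
        return tok

    def stmt(self):
        # Quantified statement, else fall through to an implication.
        if self.peek() in ("forall", "exists"):
            quant = self.eat()
            var = self.eat()
            self.eat(".")
            return (quant, var, self.stmt())
        return self.impl()

    def impl(self):
        left = self.atom()
        if self.peek() == "->":
            self.eat("->")
            return ("implies", left, self.impl())
        return left

    def atom(self):
        name = self.eat()
        self.eat("(")
        arg = self.eat()
        self.eat(")")
        return (name, arg)

print(Parser(tokenize("forall n. even(n) -> integer(n)")).stmt())
# ('forall', 'n', ('implies', ('even', 'n'), ('integer', 'n')))
```

Of course real theorem statements mix prose and formulas, so this only illustrates the "syntactic level" claim, not the hard part.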
Math theorems are hard going for many humans. Machines think differently, but may well find them challenging too. I'm not sure this area is particularly low-hanging fruit.
Theorems are not generally presented in math journals in the way they were discovered, so I am not sure machine learning from journal articles would greatly help in discovery. The issue is really that going from question to answer is a different process from verifying that an answer is correct, or from guiding a reader through such a verification, which is what a proof is.
A perhaps less lofty, but still incredibly useful, goal would be automating a process for simplifying proofs.
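For what it's worth, here is one minimal sketch of such a simplifier, under the (big) assumption that a proof is just a list of labelled steps, each citing the earlier steps it uses; anything the conclusion never depends on gets dropped, much like dead-code elimination:

```python
def simplify(steps):
    """steps: list of (label, depends_on) pairs; the last step is the goal.
    Keep only steps reachable backwards from the goal."""
    needed = {steps[-1][0]}
    for label, deps in reversed(steps):
        if label in needed:
            needed.update(deps)
    return [(label, deps) for label, deps in steps if label in needed]

# Hypothetical proof with one unused detour:
proof = [
    ("h1", []),            # hypothesis
    ("h2", []),            # hypothesis
    ("lemma_a", ["h1"]),
    ("detour", ["h2"]),    # never used below, so it gets dropped
    ("goal", ["lemma_a", "h1"]),
]
print(simplify(proof))
# [('h1', []), ('lemma_a', ['h1']), ('goal', ['lemma_a', 'h1'])]
```

Real proofs aren't dependency lists, so this only scratches the surface; genuinely simplifying a proof (finding a shorter argument, not just pruning unused steps) is much harder.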
Or alternatively convincing mathematicians to narrate their own mental process of discovery.
There's been some work on getting machines to try to make reasonable conjectures and definitions (see, for example, the work of Simon Colton), but I'm not aware of any work on actually trying to teach machines. I suspect this would be very difficult, since most machine-learning systems work best when the problems are, in some vague sense, fuzzy rather than formal.
Does anyone know of work that attempts to build a theorem prover by learning from examples? I'm imagining extracting a large corpus of theorems from back issues of mathematical journals, then applying unsupervised structure-discovery techniques from machine learning to discover recurring patterns.
Perhaps a model of the "set of theorems that humans tend to produce" would be helpful in proving new theorems.
The unsupervised-structure-discovery bit does seem within the realm of current machine learning.
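To sketch the kind of thing I mean (the tiny corpus, the tokenizer, and the choice of n-gram mining are all illustrative assumptions, not a proposal for the real system), one could mine frequent token n-grams across theorem statements and treat the frequent ones as candidate recurring patterns:

```python
import re
from collections import Counter

# A made-up stand-in for a corpus of theorem statements.
corpus = [
    "for every prime p greater than 2 , p is odd",
    "for every continuous function f on a closed interval , f attains a maximum",
    "for every finite group G , the order of a subgroup divides the order of G",
]

def tokenize(statement):
    # Crude word/number/punctuation tokenizer, just for illustration.
    return re.findall(r"[a-z]+|\d+|[^\s]", statement.lower())

def frequent_ngrams(statements, n=2, min_count=2):
    counts = Counter()
    for s in statements:
        toks = tokenize(s)
        counts.update(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))
    return [(g, c) for g, c in counts.most_common() if c >= min_count]

for gram, count in frequent_ngrams(corpus):
    print(count, " ".join(gram))
# "for every" shows up in all three statements, hinting at the
# universally-quantified template that humans reuse constantly.
```

A real system would obviously need something far richer than surface n-grams (parse trees, logical forms), but even this crude version surfaces the kind of recurring template I have in mind.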
Any references to related work?