In a recent comment thread I expressed skepticism as to whether there's been meaningful progress on general artificial intelligence.
I hedged because of my lack of subject matter knowledge, but on reflection I realized that I do have relevant knowledge, coming from my background in pure math.
In a blog post from April 2013, Fields Medalist Timothy Gowers wrote:
Over the last three years, I have been collaborating with Mohan Ganesalingam, a computer scientist, linguist and mathematician (and amazingly good at all three) on a project to write programs that can solve mathematical problems. We have recently produced our first program. It is rather rudimentary: the only problems it can solve are ones that mathematicians would describe as very routine and not requiring any ideas, and even within that class of problems there are considerable restrictions on what it can do; we plan to write more sophisticated programs in the future.
I don't know of any computer programs that have been able to prove theorems outside of the class "very routine and not requiring any ideas," without human assistance (and without being heavily specialized to an individual theorem). I think that if such programs existed, Gowers would be aware of them and would likely have commented on them in his post.
It's easy to give an algorithm that, given any provable theorem, eventually generates a proof: fix a formal language with definitions and axioms, and for successive values of n, enumerate all sequences of deductions of length n, halting if the final line of a sequence is the statement of the desired theorem. But the running time of this algorithm is exponential in the length of the proof, so it's infeasible to run except for theorems with very short proofs.
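To make this concrete, here's a minimal sketch of brute-force proof search, cast as breadth-first search so that derivations are explored in order of length. It uses Hofstadter's toy MIU string-rewriting system in place of a real proof calculus, and the target string is arbitrary:

```python
from collections import deque

def miu_successors(s):
    """All strings derivable from s by one rule of the MIU system."""
    out = set()
    if s.endswith("I"):               # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):             # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):       # Rule 3: replace III with U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):       # Rule 4: delete UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

def prove(target, axiom="MI", max_strings=10**6):
    """Breadth-first search from the axiom; the first derivation of the
    target found is a shortest one.  Gives up after visiting
    max_strings distinct strings."""
    queue = deque([(axiom, [axiom])])
    seen = {axiom}
    while queue and len(seen) < max_strings:
        s, derivation = queue.popleft()
        if s == target:
            return derivation
        for t in miu_successors(s):
            if t not in seen:
                seen.add(t)
                queue.append((t, derivation + [t]))
    return None

print(prove("MIUIU"))  # ['MI', 'MIU', 'MIUIU']
```

Even in this toy system, the number of strings reachable within n steps grows rapidly (essentially exponentially) with n, which is exactly the blow-up that makes the naive algorithm useless for theorems with long proofs.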
It appears that the situation is not "there are computer programs that can prove mathematical theorems, just not yet as efficiently as humans," but rather "computer programs are unable to prove any nontrivial theorems."
I'll highlight the Sylow theorems from group theory as examples of nontrivial theorems. Their statements are simple, and their proofs are not very long, but they're ingenious and involve substantive ideas. If somebody were able to write a program that's able to find proofs of the Sylow theorems, I would consider that to be strong evidence that there's been meaningful progress on general artificial intelligence. In the absence of such examples, I have a strong prior against there having been meaningful progress on general artificial intelligence.
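For reference, here are the theorems' standard statements: write $|G| = p^n m$ with $p$ prime and $p \nmid m$. The first theorem says that $G$ has a subgroup of order $p^k$ for every $0 \le k \le n$ (in particular, a Sylow $p$-subgroup of order $p^n$); the second, that any two Sylow $p$-subgroups of $G$ are conjugate; the third, that the number $n_p$ of Sylow $p$-subgroups satisfies $n_p \equiv 1 \pmod{p}$ and $n_p \mid m$.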
I will update my views if I learn of the existence of such programs, or of programs that are capable of comparably impressive original research in domains outside of math.
Ha, I've seen that quote before, good point! (and likewise in the linked comment) I suppose one reason to think that mathematical reasoning is close to AGI is that it seems similar to programming. And if an AI can program, that seems significant.
Maybe a case could be made that the key difficulty in programming will turn out to be in formulating what program to write. I'm not sure what the analogue is in mathematics. Generally it's pretty easy to formally state a theorem to prove, even if you have no idea how to prove it, right?
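For instance, here's a sketch in Lean 4 (assuming Mathlib for `Nat.Prime` and `Even`; the theorem name is just illustrative) of stating Goldbach's conjecture in a few lines while leaving the proof as a hole, even though nobody knows how to fill it:

```lean
import Mathlib

-- Goldbach's conjecture: every even number greater than 2 is the sum
-- of two primes.  The statement is easy to formalize; the proof is an
-- open problem, so we leave the placeholder `sorry`.
theorem goldbach (n : ℕ) (h2 : 2 < n) (heven : Even n) :
    ∃ p q : ℕ, p.Prime ∧ q.Prime ∧ p + q = n := by
  sorry
```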
If so, that might lend support for the argument that automated general mathematical reasoning is still a ways off from AGI.
It's pretty easy for a human of significantly above average intelligence. That doesn't imply that it's easy for an average human, or for an AI.