Current theorem provers don't have a "sense of direction".
From the description of Polya's Mathematics and Plausible Reasoning: Vol. II: Patterns of Plausible Inference:
This is a guide to the practical art of plausible reasoning, particularly in mathematics but also in every field of human activity. Using mathematics as the example par excellence, Professor Polya shows how even that most rigorous deductive discipline is heavily dependent on techniques of guessing, inductive reasoning, and reasoning by analogy. In solving a problem, the answer must be guessed at before a proof can even begin, and guesses are usually made from a knowledge of facts, experience, and hunches. The truly creative mathematician must be a good guesser first and a good prover afterward; many important theorems have been guessed but not proved until much later. In the same way, solutions to problems can be guessed, and a good guesser is much more likely to find a correct solution...
Current theorem provers search for a proof, and even contain some ad hoc heuristics for exploring their search trees. (For example, Paulson's book ML for the Working Programmer ends with a chapter on theorem proving and offers: "We have a general heuristic: never apply universal:left or existential:right to a goal if a different rule can usefully be applied.") However, Polya's notion of being a good guesser has been formalised as Bayesian reasoning. If a theorem prover, facing a fork in its search tree, uses Bayes' theorem to pick the most likely branch first, it may find much deeper results.
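To make the idea concrete, here is a minimal sketch of best-first proof search in which a score, standing in for a Bayesian estimate of being on a correct path, decides which branch the prover explores next. The `expand`, `is_proved`, and `score` callables are hypothetical placeholders, not part of any real prover.

```python
import heapq

def best_first_search(initial, is_proved, expand, score, max_steps=10_000):
    """Explore proof states most-promising-first.

    `score(state)` estimates how likely the state lies on a successful
    path (a Bayesian model could supply this); `expand(state)` returns
    successor states; `is_proved(state)` tests for a finished proof.
    """
    # Min-heap on negated scores, so the highest-scoring state pops first.
    frontier = [(-score(initial), 0, initial)]
    counter = 1  # tie-breaker so states themselves are never compared
    for _ in range(max_steps):
        if not frontier:
            return None
        _, _, state = heapq.heappop(frontier)
        if is_proved(state):
            return state
        for nxt in expand(state):
            heapq.heappush(frontier, (-score(nxt), counter, nxt))
            counter += 1
    return None
```

A toy usage: searching the integers from 0 with steps of +1 or +2 for the "theorem" 7, scoring states by closeness to the goal, finds it without exhaustive enumeration.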
I don't think there is currently much overlap between those working on theorem proving and those working on Bayesian statistics. There is perhaps even a clash of temperaments between those who seek absolute certainty and those who seek an edge in the messy process of muddling through. Nevertheless I foresee great possibilities of using Bayesian reasoning to guide the internal search of theorem provers, thus giving them a sense of direction.
Reasoning "X hasn't been progressing recently, therefore X is unlikely to progress in the near future" is good reasoning if that is really all that we know. But it is also weak reasoning, which we should be quick to discard if we see a relevant change looming. The next ten or twenty years might see Bayesian reasoning incorporated as a guiding principle for tree-search heuristics inside theorem provers, leading to large advances.
Bayesian probability theory tells you how to process evidence as well as possible. If you know what counts as evidence of a correct path, you can build it into an ad hoc heuristic more easily than into a Bayesian update. It seems like the real insight required is to figure out what is evidence of a correct path.
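For what a single such update looks like, here is a toy calculation with made-up numbers: the prior is the probability that a branch lies on a correct path, and the two likelihoods describe how often some observed feature (say, the goal getting syntactically simpler) appears on correct paths versus dead ends.

```python
# Toy Bayes update for branch selection. All numbers are invented for
# illustration; nothing here comes from a real prover.
def posterior(prior, p_feature_if_correct, p_feature_if_dead_end):
    """P(correct path | feature observed), by Bayes' theorem."""
    num = prior * p_feature_if_correct
    return num / (num + (1 - prior) * p_feature_if_dead_end)

# A feature seen on 80% of correct paths but only 30% of dead ends
# raises a 50% prior to roughly 73%.
print(posterior(0.5, 0.8, 0.3))
```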
In a recent comment thread I expressed skepticism as to whether there's been meaningful progress on general artificial intelligence.
I hedged because of my lack of subject matter knowledge, but thinking it over, I realized that I do have relevant subject matter knowledge, coming from my background in pure math.
In a blog post from April 2013, Fields Medalist Timothy Gowers wrote:
I don't know of any computer programs that have been able to prove theorems outside of the class "very routine and not requiring any ideas," without human assistance (and without being heavily specialized to an individual theorem).

I think that if such projects existed, Gowers would be aware of them and would likely have commented on them in his post.
It's easy to give an algorithm that generates a proof of any provable mathematical theorem: choose a formal language with definitions and axioms, and for successive values of n, enumerate all sequences of mathematical deductions of length n, halting if the final line of a sequence is the statement of the desired theorem. But the running time of this algorithm is exponential in the length of the proof, and the algorithm is infeasible to implement except for theorems with very short proofs.
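The enumeration above can be sketched as follows. The `rules` here are hypothetical callables that derive one new formula from the facts proved so far, a deliberate simplification whose only purpose is to show why the search blows up: there are |rules|**n sequences of length n.

```python
from itertools import product

def brute_force_prove(axioms, rules, goal, max_length):
    """Try every rule sequence of length 1, 2, ..., max_length.

    Returns the first sequence whose final derived formula equals
    `goal`, or None. Running time grows exponentially in proof length.
    """
    for n in range(1, max_length + 1):
        for seq in product(rules, repeat=n):  # |rules|**n sequences
            facts = list(axioms)
            for rule in seq:
                new = rule(facts)
                if new is not None:
                    facts.append(new)
            if facts and facts[-1] == goal:
                return seq  # a "proof" of the goal was found
    return None
```

As a toy usage, with axiom "a" and two rules that append "b" or "c" to the last fact, the string "abc" is reached by a sequence of length 2.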
It appears that the situation is not "there are computer programs that are able to prove mathematical theorems, just not yet as efficiently as humans," but rather "computer programs are unable to prove any nontrivial theorems."
I'll highlight the Sylow theorems from group theory as examples of nontrivial theorems. Their statements are simple, and their proofs are not very long, but they're ingenious, and involve substantive ideas. If somebody were able to write a program that's able to find proofs of the Sylow theorems, I would consider that to be strong evidence that there's been meaningful progress on general artificial intelligence. In the absence of such examples, I have a strong prior against there having been meaningful progress on general artificial intelligence.
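As a benchmark for what "simple statement, ingenious proof" means here, the first Sylow theorem fits in a couple of lines (the `theorem` environment assumes a standard amsthm setup):

```latex
% First Sylow theorem: a concrete target whose statement is one line,
% but whose standard proofs require genuinely clever ideas.
\begin{theorem}[Sylow I]
Let $G$ be a finite group with $|G| = p^{n} m$, where $p$ is prime and
$p \nmid m$. Then $G$ has a subgroup of order $p^{n}$.
\end{theorem}
```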
I will update my views if I learn of the existence of such programs, or of programs that are capable of comparably impressive original research in domains outside of math.