Try the Wikipedia article on formal methods.
It is common to prove hardware correctness these days.
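To make "proving hardware correctness" concrete, here is a toy sketch (entirely my own illustration, not from any real tool): a gate-level 4-bit ripple-carry adder checked exhaustively against integer addition. Real verification tools work symbolically with BDDs or SAT solvers rather than by brute force, but the idea of comparing an implementation against a specification is the same.

```python
# Toy illustration of hardware verification: exhaustively check that a
# gate-level 4-bit ripple-carry adder matches integer addition.
# (Real tools use BDDs/SAT solvers instead of brute force.)

def full_adder(a, b, cin):
    """One-bit full adder built only from AND/OR/XOR gates."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def ripple_add(x, y, width=4):
    """Add two `width`-bit numbers bit by bit; result has width+1 bits."""
    carry, total = 0, 0
    for i in range(width):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        total |= s << i
    return total | (carry << width)

# Exhaustive check over all 4-bit operand pairs -- for a circuit this
# small, checking every input IS the correctness proof.
assert all(ripple_add(x, y) == x + y for x in range(16) for y in range(16))
print("4-bit adder verified")
```

For circuits with wide inputs exhaustive checking is hopeless, which is exactly why the industrial tools reason symbolically.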
Wikipedia mentions two verified operating systems, but I don't know exactly what has been proved about them; probably that certain kinds of privilege escalation are impossible. One problem is that people want to run legacy machine code or C that has no specification and thus cannot be verified. Verifying the memory protection would at least show that such processes cannot interfere with each other. Another way to deal with this is to move to languages without pointer arithmetic: Java makes a lot of security guarantees, though I don't know whether anyone has verified them for an actual runtime. A similar project that has been verified is Microsoft's Singularity, whose boast is that it provably does not need run-time memory protection.
Of course, it is very difficult to model user interactions and prevent processes from tricking the user into letting them interfere with each other.
Don't deduce anything about the difficulty from the lack of deployment. The incentives for security are pretty low. The incentives for backwards compatibility are pretty high.
I'm not sure what exactly you're asking. Would you like to read an in-depth survey of the present state of the art in automated theorem proving and automatic verification? Or are you interested in the fundamental question of why these problems are intractable in the great majority of really interesting cases? The former would require expert input, but the latter mostly boils down to some basic observations from the theory of computation, the foundations of math, and complexity theory.
Any such information would be appreciated, though in this post I'm primarily asking "what would be the benefits of having sophisticated tools for automatic proving and verification?"
There have been (and continue to be) many approaches to this; in fact, the term Good Old-Fashioned AI (GOFAI) basically refers to it. It is telling that significant progress has not been made with this approach, which has led to greater use of statistical techniques such as support vector machines and Bayesian networks. A basic difficulty with any approach to AI is that many techniques merely transform the problem of learning, generalisation and problem solving into another form rather than solving it. For example, formal methods for software development move the problem from programming to formal specification, and it's not clear that writing the specifications is any less error-prone than writing the original program.
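To illustrate the specification problem, here is a hypothetical contract-checked sort in Python (all names are my own invention): the "obvious" postcondition, sortedness, is incomplete, since a function that always returned an empty list would satisfy it while losing all the data. Pinning down the intended behaviour in the spec is itself easy to get wrong, which is the point above.

```python
# Sketch of the "specification vs. program" gap: a contract-checked sort.
# The first postcondition looks reasonable but is incomplete -- returning
# [] would satisfy "is sorted" while discarding the input entirely.

def is_sorted(xs):
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))

def spec_checked_sort(xs):
    result = sorted(xs)
    # Incomplete spec: sortedness alone.
    assert is_sorted(result)
    # The clause that actually pins down the behaviour: the output must be
    # a rearrangement of the input (multiset equality).
    assert sorted(result) == sorted(xs)
    return result
```

Writing the second assertion is as much an act of careful thought as writing the sort itself, which is why moving from programs to specifications does not automatically remove the opportunity for error.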
On the other hand, there are enormous benefits to turning any body of knowledge into a form that a computer can manipulate, and there has been some progress in doing this for mathematics. It is possibly one of the more depressing consequences of Gödel's theorem that there has not been more work in this area. Writing proofs informally on paper doesn't get around Gödel's theorem; it just makes the work harder to validate.
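For a taste of what machine-checkable mathematics looks like, here is a tiny Lean 4 sketch (my own example, not from the post; the theorem name is arbitrary):

```lean
-- Two tiny machine-checked facts in Lean 4: the kernel verifies every step,
-- so nothing is taken on faith.
example : 2 + 2 = 4 := rfl

-- Reusing a library lemma: commutativity of addition on the naturals.
theorem add_comm' (a b : Nat) : a + b = b + a := Nat.add_comm a b
```

Once a proof is in this form, checking it is mechanical, which is exactly the validation that informal paper proofs lack.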
It's also a good challenge for those who feel that AI is just a matter of correctly applying Bayes' theorem, as it is not clear how that could be applied to solve this problem.
There are definitely cases where there is little hope of proving "100% intended performance". For example, RSA only works as intended if factoring is hard. Most computer scientists strongly believe this is true, but this is not likely to be proven any time soon.
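As a toy illustration of that dependence (parameters are my own and far too small for real use, where moduli are 2048+ bits): factoring a small RSA modulus immediately yields the private key.

```python
# Toy illustration: RSA's secrecy rests on the hardness of factoring.
# With a small modulus, trial-division factoring recovers the private key.

def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def modinv(a, m):
    g, x, _ = egcd(a, m)
    assert g == 1
    return x % m

# A toy public key (n, e); p and q are "secret".
p, q = 61, 53
n, e = p * q, 17                   # n = 3233
d = modinv(e, (p - 1) * (q - 1))   # legitimate private exponent

# Attacker: factor n by trial division, then rebuild the private exponent.
fp = next(k for k in range(2, n) if n % k == 0)
fq = n // fp
d_attacker = modinv(e, (fp - 1) * (fq - 1))

msg = 42
cipher = pow(msg, e, n)
assert pow(cipher, d_attacker, n) == msg  # attacker decrypts successfully
print("recovered:", pow(cipher, d_attacker, n))
```

The whole attack is the one `next(...)` line; everything else is bookkeeping. The unproven assumption is that no comparably cheap step exists for 2048-bit moduli.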
Automated theorem proving *sounds like* a natural extension of many useful trends and a solution to many current problems. To me, it seems obvious that there will be a need for the formalization of mathematics (up to and beyond the boundary of mathematics with real-world applications) as well as for the routine checking of software for 100% intended performance. Secure networks and software in particular could be an important safeguard against AI.
Yet I haven't heard much about it; given that the benefits of widespread automated theorem proving are substantial and predictable, the implementation difficulties must be considerable. Anyone with experience in the field?