Because I have been learning about Type Theory, I have become much more aware of and interested in Functional Programming.
If you are unfamiliar with functional programming, Real World Haskell describes functional programming like this:
In Haskell [and other functional languages], we de-emphasise code that modifies data. Instead, we focus on functions that take immutable values as input and produce new values as output. Given the same inputs, these functions always return the same results. This is a core idea behind functional programming.
Along with not modifying data, our Haskell functions usually don't talk to the external world; we call these functions pure. We make a strong distinction between pure code and the parts of our programs that read or write files, communicate over network connections, or make robot arms move. This makes it easier to organize, reason about, and test our programs.
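To make that quoted distinction concrete, here is a minimal sketch of my own (the names `scale` and `readAndScale` are mine, not from the book): a pure function whose type promises it only transforms its arguments, next to an action whose `IO` type admits that it touches the outside world.

```haskell
-- A pure function: given the same inputs it always returns the same
-- output, and it touches nothing outside its arguments.
scale :: Double -> [Double] -> [Double]
scale factor xs = map (* factor) xs

-- An impure action: the IO in its type signals that it reads from the
-- outside world, so its result can differ from run to run.
readAndScale :: Double -> IO [Double]
readAndScale factor = do
    line <- getLine                    -- side effect: read a line from stdin
    let xs = map read (words line)     -- pure parsing of that line
    return (scale factor xs)           -- reuse the pure function

main :: IO ()
main = readAndScale 2.0 >>= print
```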
Because of this, functional languages have a number of interesting differences from traditional imperative languages. In functional programming:
- Programming is a lot more like math. Programs are often elegant and terse.
- It is much easier to reason about programs, including proving things about them (termination, absence of errors, etc.). This gives compilers much more room to optimize automatically: parallelizing code, merging repeated operations, and so on (see the sketch after this list).
- Static typing helps (and requires) you to find and correct a large fraction of trivial bugs without running the program.
- Pure code means that anything with side effects (like I/O) takes significantly more thought to get started with, but it also makes those side effects explicit.
- Program evaluation is defined much more directly in terms of the language's syntax.
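As a small sketch of my own for the second point (the function names are mine, not from any library): because `map` and the arithmetic below are pure, the two definitions are provably equal, so a compiler is free to rewrite the two-pass form into the one-pass form and traverse the list only once; GHC's list-fusion rewrite rules perform essentially this kind of transformation.

```haskell
-- Two ways to write the same computation on a list of numbers.
twoPasses :: [Int] -> [Int]
twoPasses xs = map (* 2) (map (+ 1) xs)      -- conceptually walks the list twice

onePass :: [Int] -> [Int]
onePass xs = map ((* 2) . (+ 1)) xs          -- walks the list once

-- Because everything involved is pure, the equation
--   map f (map g xs) == map (f . g) xs
-- holds for all f, g and xs, so a compiler may rewrite the first
-- form into the second without changing the result.
main :: IO ()
main = print (twoPasses [1, 2, 3] == onePass [1, 2, 3])   -- True
```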
You make the situation with optimizing compilers sound really optimistic! Unfortunately, it seems to me that things don't work anywhere near as well in practice. Yes, the practical handling of fancy language features has come a long way from naive, straightforward implementations, but I'd say you exaggerate how good it is.
For example, I haven't followed the work on bounds-checking elimination closely, but I know that a lot of papers are still being published about it, which indicates that the overheads are still large enough to make the problem interesting. (Which is not surprising, considering that the problem is, after all, undecidable in general. Also, as far as I know, bounds checks are normally not added by C compilers, and there are depressing limits to what theorem provers can figure out about typical C code that passes pointers around liberally.)
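To illustrate why the general problem is hard, here is a small Haskell sketch of my own, assuming the `vector` package (the function names are mine): in the first loop the index is visibly below the length, so the check is provably redundant; in the second, the indices arrive as runtime data, and no static analysis can prove them in range in general.

```haskell
import qualified Data.Vector as V

-- Every index i satisfies 0 <= i < V.length v, so the bounds check on
-- (V.!) is provably redundant, the kind of fact an optimizer (or the
-- programmer, by switching to V.unsafeIndex) can exploit.
sumAll :: V.Vector Int -> Int
sumAll v = go 0 0
  where
    n = V.length v
    go acc i
      | i < n     = go (acc + v V.! i) (i + 1)
      | otherwise = acc

-- Here the indices are data supplied at run time; whether they are in
-- range is not decidable statically in general, so the checks stay.
gather :: V.Vector Int -> V.Vector Int -> V.Vector Int
gather v idxs = V.map (v V.!) idxs

main :: IO ()
main = do
    let v = V.fromList [10, 20, 30, 40]
    print (sumAll v)                         -- 100
    print (gather v (V.fromList [3, 0, 2]))  -- [40,10,30]
```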
It's similar with garbage collection, dynamic type checks, and other fancy features. Their overheads can certainly be reduced greatly by smart compilers and efficient run-time support, sometimes to the point where there is no difference from C, but definitely not always, and often not reliably or predictably.
(Fortran, by the way, has traditionally had the advantage of being highly amenable to automatic optimization, including automatic parallelization, especially when used for typical array-based numerical computations. This, in turn, has attracted a lot of fruitful effort toward optimizing and parallelizing numerical Fortran, leading to its unmatched performance and acceptance in these areas; its lead has been eroded only in recent years. You can't possibly say that this is just due to a single popular library.)