Girard's original texts on this are magnificent. Everyone needs to know more about Girard.
I had to google ELI5 :-)
First:
OK, now what the result means is that Brown and Palsberg have managed to create a system of types that allows one to formally describe the objects (words, numbers) used in interpreters that can interpret themselves.
Why is that astonishing? Interpreters have been around for a long time, haven't they? Including ones that interpret themselves (which is called bootstrapping). Yes, but most of these use only very simple types. The power of the types used in a programming language limits the ways one can reason about programs in that language. In the simple languages, an interpreted program is just a bunch of words, and it is not possible to state that this bunch of words is a valid program. The System F mentioned above is quite powerful, mathematically speaking, and Fω even more so. But neither was able to describe a program which interprets itself as a valid program. This was achieved with System U.
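To make the "bunch of words" point concrete, here is a toy sketch in Haskell (entirely my own illustration, not the encoding from the paper): with an untyped representation any pile of constructors counts as a "program", while with a typed representation only well-typed object programs can be written down at all, and the interpreter's type promises it respects them. The hard part, which Brown and Palsberg solve for System U, is doing this when the object language is the very language the interpreter is written in.

```haskell
{-# LANGUAGE GADTs #-}

-- Untyped representation: an object program is "just a bunch of words";
-- nothing rules out nonsense such as applying one number to another.
data Untyped
  = ULit Int
  | UAdd Untyped Untyped
  | UApp Untyped Untyped

-- Typed representation: the index `a` records the object program's type,
-- so an ill-typed object program cannot even be constructed.
data Expr a where
  Lit :: Int -> Expr Int
  Add :: Expr Int -> Expr Int -> Expr Int
  Lam :: (a -> Expr b) -> Expr (a -> b)
  App :: Expr (a -> b) -> Expr a -> Expr b

-- The interpreter's type states that evaluating an `Expr a` yields an `a`.
eval :: Expr a -> a
eval (Lit n)   = n
eval (Add x y) = eval x + eval y
eval (Lam f)   = \x -> eval (f x)
eval (App f x) = eval f (eval x)

main :: IO ()
main = print (eval (App (Lam (\n -> Add (Lit n) (Lit 1))) (Lit 41)))  -- prints 42
```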
System F is a type theory, which is a formal system that serves as an alternative to set theory, so...
Yes, I quite realize that, being something of a type-theory geek myself. What System F is not is a dependent type theory with a proposition type and strong normalization, or even one with a propositional fragment in which proofs must normalize, or with types dependent on values (which you need in order to get proper propositional reasoning!).
A self-interpreting variant of Martin-Löf or Homotopy Type Theory would have been astounding! But that's not what this is.
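To show concretely what I mean by "types dependent on values" and a proposition type (my own illustration in Lean, nothing to do with the Brown-Palsberg construction; the names `Vec` and `plus_zero` are just made up for the example):

```lean
-- A type that depends on a value: vectors indexed by their length.
inductive Vec (α : Type) : Nat → Type
  | nil  : Vec α 0
  | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

-- `head` only accepts non-empty vectors; the precondition lives in the type.
def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
  | .cons x _ => x

-- A proposition is itself a type (in the sort `Prop`), and a proof is a
-- program of that type; a consistent logic needs such proof-programs to
-- normalize.
theorem plus_zero (n : Nat) : n + 0 = n := rfl
```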
Decidability has a price. So apparently you are dissatisfied with this result or consider it trivial. I think this buys a lot. What are you missing?
Decidability is not actually what we want here :-p. I could go into elaborate descriptions of what we want, but those would be theory-laden with my own ideas. Read my backlog on here to get an idea of the general direction I'm thinking in.
This result is not trivial, but it's useful for talking about Turing-complete programming languages, not about logic.
This is no such advancement for AI research. This only provides the possibility of typechecking your AI, which is neither necessary nor sufficient for self-optimizing AI programs.
I like how you made this comment, and then emailed me the link to the article, asking whether it actually represents something for self-modifying systems.
Now, as to whether this actually represents an advance... let's go read the LtU thread. My guess is that the answer is, "this is an advancement for self-modifying reasoning systems iff we can take System U as a logic in which some subset of programs prove things in the Prop type, and those Prop-typed programs always terminate."
However, System U is not strongly normalizing and is inconsistent as a logic.
So, no.
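A toy illustration of why that termination condition matters (my own sketch in Haskell, which is not System U and is not Girard's paradox itself, just the basic point): once general recursion is allowed, every type is inhabited, so the propositions-as-types reading "proves" everything.

```haskell
-- Under propositions-as-types, this type says "every proposition p holds".
-- The "proof" is a term that never terminates, which is exactly what
-- requiring a strongly normalizing Prop fragment is meant to exclude.
bogusProof :: p
bogusProof = bogusProof
```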
But also:
Anyway, the big deal here is:
"Our techniques could have applications... to growable typed languages..."
Curry-Howard tells us types are logical propositions. Mathematicians from Bayes to Laplace to Keynes to Cox to Jaynes have told us probability is an extension of logic or (as I prefer) logical truth/falsehood are limiting cases of probability. Alternatively, probability/logic are measures of information, in the algorithmic information theory sense. So a "growable typed language" seems it would have obvious benefits to machine learning and probabilistic programming generally. In other words, it's not inherently a big deal not to be strongly normalizing or have decidable type checking by default—that just means the system doesn't yet have enough information. But when the system acquires new information, it can synthesize new types and their relations, "growing" the typed language.
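For readers who haven't seen Curry-Howard in the concrete, a minimal sketch of my own in Haskell (unrelated to the paper's encoding): types read as propositions, programs as proofs, so familiar logical facts fall out as one-liners.

```haskell
-- Curry-Howard readings: `->` is implication, pairs are conjunction,
-- `Either` is disjunction.

-- "(a implies b) and (b implies c) gives (a implies c)": composition.
implTrans :: (a -> b) -> (b -> c) -> (a -> c)
implTrans f g = g . f

-- "a and b implies b and a": swap the pair.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- "a implies (a or b)": left injection.
orIntroL :: a -> Either a b
orIntroL = Left
```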
It's hard to overstate how significant this is.
I read the LtU thread as well, and began to wonder if Snively was stealing from me.
But no, you need a normalizing fragment to have a Prop sort. Or at least, you need more than this, particularly a solid notion of a probabilistic type theory (which linguists are at the beginning of forming) and Calude's work in Algorithmic Information Theory, to actually build self-verifying theorem-proving type-theories this way.
This is indeed no major result. Neither necessary nor sufficient. But if you want safe self-optimizing AI, you (and the AI) need to reason about the source. If you don't understand how the AI reasons about itself, you can't control it. If you force the AI to reason in a way you can follow too, e.g. by piggybacking on a sufficiently strong type system, then you at least have a chance to reason about it. There may be other ways to reason about self-modifying programs that don't rely on types, but these are presumably either equivalent to such types - and thus the result is helpful in that area too - or more general - in which case proofs likely become more complicated (if feasible at all). So some equivalent of these types is needed for reasoning about safe self-modifying AI.
Self-Representation in Girard’s System U, by Matt Brown and Jens Palsberg: