
Transfuturist comments on [Link] Self-Representation in Girard’s System U - Less Wrong Discussion

2 Post author: Gunnar_Zarncke 18 June 2015 11:22PM


Comment author: Transfuturist 25 June 2015 09:36:18PM *  -1 points [-]

This is no such advancement for AI research. It only provides the possibility of typechecking your AI, which is neither necessary nor sufficient for self-optimizing AI programs.

Comment author: [deleted] 25 June 2015 10:55:51PM 1 point [-]

I like how you made this comment, and then emailed me the link to the article, asking whether it actually represents something for self-modifying systems.

Now, as to whether this actually represents an advance... let's go read the LtU thread. My guess is that the answer is: "this is an advance for self-modifying reasoning systems iff we can treat System U as a logic in which some subset of programs prove things in the Prop sort, and those Prop-typed programs always terminate."

However, System U is not strongly normalizing and is inconsistent as a logic (this is Girard's paradox).

So, no.
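To make the normalization point concrete: in any language whose type system admits non-termination, every type is inhabited by a looping term, so reading types as propositions proves nothing. A minimal sketch in Haskell (a toy illustration of the general phenomenon; System U fails differently, via Girard's paradox rather than a recursive newtype):

```haskell
-- A looping "proof" of every proposition. If the type system admits
-- non-termination, types-as-propositions proves nothing, which is why
-- a logic must normalize.
newtype Bad a = Bad (Bad a -> a)

selfApply :: Bad a -> a
selfApply b@(Bad f) = f b

-- Inhabits *every* type, so every "proposition" is "provable";
-- actually evaluating it would loop forever.
falso :: a
falso = selfApply (Bad selfApply)
```

Thanks to laziness, `falso` can be given any concrete type without ever being forced.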

Comment author: Gunnar_Zarncke 29 June 2015 09:33:55AM 0 points [-]

But also:

Anyway, the big deal here is:

"Our techniques could have applications... to growable typed languages..."

Curry-Howard tells us types are logical propositions. Mathematicians from Bayes to Laplace to Keynes to Cox to Jaynes have told us probability is an extension of logic, or (as I prefer) that logical truth/falsehood are limiting cases of probability. Alternatively, probability and logic are both measures of information, in the algorithmic information theory sense. So a "growable typed language" seems like it would have obvious benefits for machine learning and probabilistic programming generally. In other words, it's not inherently a big deal for a system not to be strongly normalizing or to lack decidable type checking by default: that just means the system doesn't yet have enough information. But when the system acquires new information, it can synthesize new types and their relations, "growing" the typed language.
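For readers unfamiliar with the correspondence, here is Curry-Howard in miniature (my own toy examples, not taken from the linked paper): a total Haskell term inhabiting a type is a proof of the proposition that type expresses.

```haskell
-- Curry-Howard in miniature: read `->` as implication, `(,)` as
-- conjunction, `Either` as disjunction. Each total term below is a
-- proof of the proposition its type expresses.

-- A -> A: the identity proof
identityProof :: a -> a
identityProof x = x

-- (A -> B) -> (B -> C) -> (A -> C): hypothetical syllogism
syllogism :: (a -> b) -> (b -> c) -> (a -> c)
syllogism f g = g . f

-- A /\ B -> B /\ A: conjunction commutes
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A \/ B -> B \/ A: disjunction commutes
orComm :: Either a b -> Either b a
orComm (Left  x) = Right x
orComm (Right y) = Left y
```

The catch, as discussed above, is totality: the previous `falso`-style counterexamples show these readings are only sound in a normalizing fragment.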

It's hard to overstate how significant this is.

Comment author: [deleted] 29 June 2015 11:10:03PM 0 points [-]

I read the LtU thread as well, and began to wonder if Snively was stealing from me.

But no: you need a normalizing fragment to have a Prop sort. Or at least, you need more than this result, in particular a solid notion of probabilistic type theory (which linguists are just beginning to form) plus Calude's work in algorithmic information theory, to actually build self-verifying, theorem-proving type theories this way.

Comment author: Gunnar_Zarncke 25 June 2015 10:16:00PM 0 points [-]

This is no major result indeed. Neither necessary nor sufficient. But if you want safe self-optimizing AI, you (and the AI) need to reason about the AI's source code. If you don't understand how the AI reasons about itself, you can't control it. If you force the AI to reason in a way you can follow too, e.g. by piggybacking on a sufficiently strong type system, then you at least have a chance of reasoning about it. There may be other ways to reason about self-modifying programs that don't rely on types, but these are presumably either equivalent to such types (and thus the result is helpful in that area too) or more general (in which case proofs likely become more complicated, if feasible at all). So some equivalent of these types is needed for reasoning about safe self-modifying AI.
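The "piggybacking on a type system" idea can be sketched with a toy typed expression language (my construction, purely illustrative, not from the paper): programs are values of a typed representation, and any rewriter must map `Expr a` to `Expr a`, so the host type checker verifies every self-modification step.

```haskell
{-# LANGUAGE GADTs #-}
-- Toy sketch: a self-modifying program constrained to a typed
-- representation. Ill-typed rewrites are unrepresentable, so every
-- self-modification step provably preserves the program's type.

data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e

-- A self-rewriting step (constant folding). The type `Expr a -> Expr a`
-- forces the rewritten program to have the same type as the original.
rewrite :: Expr a -> Expr a
rewrite (Add x y)  = IntE (eval x + eval y)
rewrite (If c t e) = If (rewrite c) (rewrite t) (rewrite e)
rewrite e          = e
```

Of course, this only guarantees type preservation, not the stronger behavioral properties safe self-modification would actually need, which is exactly where a sufficiently strong (e.g. dependent) type system would have to come in.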