Transfuturist comments on [Link] Self-Representation in Girard’s System U - Less Wrong Discussion
This is not an advance for AI research. It only provides the possibility of typechecking your AI, which is neither necessary nor sufficient for self-optimizing AI programs.
I like how you made this comment, and then emailed me the link to the article, asking whether it actually represents something for self-modifying systems.
Now, as to whether this actually represents an advance... let's go read the LtU thread. My guess is that the answer is: "this is an advancement for self-modifying reasoning systems iff we can take System U as a logic in which some subset of programs prove things in the `Prop` type, and those `Prop`-typed programs always terminate." So, no.
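To make that iff concrete, here is a minimal sketch in Lean 4, which stands in for the hypothetical normalizing fragment (an assumption of mine, not anything from the linked paper): in Lean's consistent theory every closed proof term normalizes, which is exactly the clause System U fails.

```lean
-- Lean 4 stands in for the hypothetical normalizing fragment: every
-- closed proof term here normalizes, whereas in System U Girard's
-- paradox yields a non-normalizing proof term, so the "Prop-typed
-- programs always terminate" clause fails.

-- A program bundled with a Prop-sorted fact about its own output:
-- the subtype { m : Nat // m ≤ n } typechecks only if the proof exists.
def halveWithProof (n : Nat) : { m : Nat // m ≤ n } :=
  ⟨n / 2, Nat.div_le_self n 2⟩

-- The library lemma used above, shown for reference.
#check (Nat.div_le_self : ∀ (n k : Nat), n / k ≤ n)
```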
But also:
I read the LtU thread as well, and began to wonder if Snively was stealing from me.
But no, you need a normalizing fragment to have a `Prop` sort. Or at least, you need more than this, in particular a solid notion of probabilistic type theory (which linguists are at the beginning of forming) and Calude's work in Algorithmic Information Theory, to actually build self-verifying theorem-proving type theories this way.

This is indeed no major result. Neither necessary nor sufficient. But if you want safe self-optimizing AI, you (and the AI) need to reason about the source. If you don't understand how the AI reasons about itself, you can't control it. If you force the AI to reason in a way you can too, e.g. by piggybacking on a sufficiently strong type system, then you at least have a chance of reasoning about it. There may be other ways to reason about self-modifying programs that don't rely on types, but these are presumably either equivalent to such types - in which case the result is helpful in that area too - or more general - in which case proofs likely become more complicated (if feasible at all). So some equivalent of these types is needed for reasoning about safe self-modifying AI.
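For what it's worth, the "piggybacking" idea can be sketched directly: make every self-modification carry a `Prop`-sorted proof that the new code still meets the specification, so the typechecker itself rejects unsafe rewrites. All names below (`Spec`, `Agent`, `selfModify`) are hypothetical illustrations in Lean 4, not anything from the linked work:

```lean
-- The specification the agent must keep satisfying (hypothetical).
def Spec (f : Nat → Nat) : Prop := ∀ n, f n ≥ n

-- An agent is a program bundled with a proof that it meets Spec.
structure Agent where
  run  : Nat → Nat
  safe : Spec run

-- A self-modification step: new code must arrive with a new proof,
-- so ill-behaved rewrites fail to typecheck.
def selfModify (a : Agent) (g : Nat → Nat) (h : Spec g) : Agent :=
  ⟨g, h⟩

-- Example: replacing the identity with a strictly larger function.
def initial : Agent := ⟨fun n => n, fun n => Nat.le_refl n⟩
def improved : Agent := selfModify initial (fun n => n + 1) (fun n => Nat.le_succ n)
```

The point is only that the proof obligation travels with the code; whether this scales to a real self-modifying reasoner is exactly what the comments above dispute.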