eternal_neophyte comments on Approximating Solomonoff Induction - Less Wrong Discussion
This is a subject that I'm fairly well-read on. Look up Eray Ozkural's work on this.
You have correctly identified the problem with Solomonoff induction (at least as it's usually formulated): representing programs as binary strings (the Turing machine formalism) isn't ideal for approximation. Circuits and/or neural networks are somewhat better, but still very limited, because as the computations you want to perform get more complicated, the networks' structure grows exponentially more complex.
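To make the binary-string formulation concrete, here's a toy sketch of my own (not a practical approximation): a restricted "machine" in which each bitstring program simply outputs itself repeated, with prediction given by mixing the programs consistent with the observations, each weighted 2^-length as in the Solomonoff prior. The machine and the cutoff length are made up for illustration.

```python
from itertools import product

def programs(max_len):
    """Enumerate all bitstrings up to max_len -- the raw program space."""
    for n in range(1, max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def run(p, length):
    # toy "machine": program p outputs itself repeated to the given length
    return (p * (length // len(p) + 1))[:length]

def predict_next(observed, max_len=10):
    """P(next bit = '1') under a 2^-length mixture over consistent programs."""
    w0 = w1 = 0.0
    for p in programs(max_len):
        if run(p, len(observed)) == observed:
            w = 2.0 ** -len(p)
            if run(p, len(observed) + 1)[-1] == "0":
                w0 += w
            else:
                w1 += w
    return w1 / (w0 + w1)

print(predict_next("0101"))  # well below 0.5: the next bit is probably "0"
```

Note the cost: the loop visits 2^(max_len+1) - 2 programs, which is exactly the exponential blowup being complained about above.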
Ideally, you want a representation system that is adaptive: it can reuse solutions found in previous problem-solving attempts, apply abstraction, and work in a 'higher' representation space. That is, it develops its own 'language' for representing programs at a high level, where the search space is correspondingly smaller. This is the basis of Schmidhuber's OOPS (Optimal Ordered Problem Solver).
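A toy illustration of that reuse idea (my own sketch; the primitive names and tasks are invented, and the real OOPS does time-optimal incremental search, not this plain enumeration): after a task is solved, the solution is frozen into a new primitive, so related tasks are found at a shallower search depth.

```python
from itertools import product

# made-up primitives: functions from int to int
PRIMS = {"inc": lambda x: x + 1, "dbl": lambda x: x * 2}

def compose(tokens):
    def f(x):
        for t in tokens:
            x = PRIMS[t](x)
        return x
    return f

def solve(target_pairs, max_len=6):
    """Shortest token sequence matching all (input, output) pairs."""
    for n in range(1, max_len + 1):
        for seq in product(PRIMS, repeat=n):
            f = compose(seq)
            if all(f(a) == b for a, b in target_pairs):
                return list(seq)
    return None

# Task 1: x -> 2x + 2, i.e. inc then dbl.
sol = solve([(1, 4), (3, 8)])

# OOPS-style reuse: freeze the solution as a new primitive.
PRIMS["task1"] = compose(sol)

# Task 2: x -> 4x + 6 is task1 applied twice; found at depth 2 instead
# of the depth 4 (inc, dbl, inc, dbl) needed with the original primitives.
sol2 = solve([(0, 6), (1, 10)])
```

The learned primitive is exactly the 'higher-level language' move: the search space over token sequences shrinks because one token now stands for a whole earlier solution.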
But we can still do better. Replacing the Turing representation with stochastic context-sensitive grammars gives a far more compact and easily navigable search space. This is the basis of Ozkural's work, such as http://link.springer.com/chapter/10.1007%2F978-3-319-09274-4_12#page-1
Appreciate your linking to this guy's work...so, err, is any of it not pay-walled?
Most of it should be publicly available in some form with a little bit of searching; here's the free version of that article: http://agi-conf.org/2014/wp-content/uploads/2014/08/ozkural-application-agi14.pdf
Sincere thanks.