
ThisSpaceAvailable comments on Approximating Solomonoff Induction - Less Wrong Discussion

6 Post author: Houshalter 29 May 2015 12:23PM




Comment author: ThisSpaceAvailable 02 June 2015 04:08:31AM 1 point [-]

An example of a sense would be to define some quantification of how good an algorithm is, and then show that a particular algorithm has a large value for that quantity, compared to SI. In order to rigorously state that X approaches Y "in the limit", you have to have some index n, and some metric M, such that |M(X_n) - M(Y_n)| -> 0. Otherwise, you're simply making a subjective statement that you find X to be "good". So, for instance, if you can show that the loss in utility in using your algorithm rather than SI goes to zero as the size of the dataset goes to infinity, that would be an objective sense in which your algorithm approximates SI.
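This criterion can be illustrated with a small sketch. The loss curves below are entirely hypothetical (not from any real algorithm): the candidate X is assumed to carry an excess loss shrinking like 1/n over a reference Y, so the metric gap |M(X_n) - M(Y_n)| vanishes as the dataset size n grows, which is exactly the kind of objective limit statement described above.

```python
# Hypothetical illustration of "X approximates Y in the limit":
# pick an index n (dataset size) and a metric M (here, loss),
# then show |M(X_n) - M(Y_n)| -> 0 as n -> infinity.
# The loss curves are made up purely for illustration.

def loss_X(n):
    # candidate algorithm: assumed excess loss shrinking like 1/n
    return 1.0 + 1.0 / n

def loss_Y(n):
    # reference ideal (e.g. SI): assumed constant optimal loss
    return 1.0

def gap(n):
    return abs(loss_X(n) - loss_Y(n))

for n in (10, 100, 1000, 10000):
    print(n, gap(n))
```

Since the gap is 1/n, it goes to zero, so under this (assumed) metric the candidate approximates the reference in the objective sense above; a purely subjective "X is good" makes no such checkable claim.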

Comment author: jacob_cannell 15 June 2015 07:57:04AM 1 point [-]

It should be obvious that SGD over an appropriately general model - with appropriate random inits and continuous restarts, keeping the max-score solution found - will eventually converge on the global optimum, and will do so in expected time similar to or better than any naive brute-force search such as SI.

In particular, SGD is good at exploiting any local smoothness in the solution space.