Shane_Legg comments on The Design Space of Minds-In-General - Less Wrong

Post author: Eliezer_Yudkowsky 25 June 2008 06:37AM


Comment author: Shane_Legg 25 June 2008 04:00:35PM 1 point [-]

@ Silas:

I assume you mean "doesn't run" (Python isn't normally a compiled language).

Regarding approximations of Solomonoff induction: it depends on how broadly you want to interpret this statement. If we use a computable prior rather than the Solomonoff mixture, we recover normal Bayesian inference. If we define our prior to be uniform, for example by assuming that all models have the same complexity, then the result is maximum a posteriori (MAP) estimation, which in turn is related to maximum likelihood (ML) estimation. Relations can also be established to Minimum Message Length (MML), Minimum Description Length (MDL), and maximum entropy (ME) based prediction (see Chapter 5 of An Introduction to Kolmogorov Complexity and Its Applications by Li and Vitányi, 1997).
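To make the MAP/ML relation concrete, here is a small illustrative sketch (my own, not from the comment): over a finite family of biased-coin models, a uniform prior adds the same constant to every log-posterior, so the MAP estimate coincides with the ML estimate, while a non-uniform prior (a hypothetical one favouring theta near 0.5) pulls the estimate away.

```python
import numpy as np

# Candidate models: biased-coin hypotheses with head probability theta.
thetas = np.linspace(0.05, 0.95, 19)

# Observed data: 7 heads out of 10 flips.
heads, flips = 7, 10

# Log-likelihood of the data under each model (binomial coefficient
# omitted -- it is the same for every model, so argmax is unaffected).
log_lik = heads * np.log(thetas) + (flips - heads) * np.log(1 - thetas)

# Uniform prior: every model equally probable (equal "complexity").
log_prior_uniform = np.full_like(thetas, -np.log(len(thetas)))

# A hypothetical non-uniform prior favouring theta near 0.5.
log_prior_biased = -((thetas - 0.5) ** 2) / 0.01

ml = thetas[np.argmax(log_lik)]
map_uniform = thetas[np.argmax(log_lik + log_prior_uniform)]
map_biased = thetas[np.argmax(log_lik + log_prior_biased)]

print(map_uniform == ml)  # True: a uniform prior makes MAP equal ML
print(map_biased < ml)    # True: the biased prior shrinks the estimate toward 0.5
```

The uniform prior only shifts every log-posterior by the same constant, which is why the argmax (and hence the estimate) is unchanged; any informative prior breaks this equivalence.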

In short, much of statistics and machine learning can be viewed as computable approximations of Solomonoff induction.