AlephNeil comments on Does Solomonoff always win? - Less Wrong

Post author: cousin_it 23 February 2011 08:42PM




Comment author: AlephNeil 24 February 2011 09:59:56PM 2 points

Let's regard Omega's prior as being given by M(x) as shown here. Now let's divide our monotone UTM's programs into the following classes.

  1. Ones that just say "Print the following: ... "
  2. Every other program.

You can imagine Omega as a Bayesian reasoner trying to decide between the two hypotheses "the data was generated by a program in class 1" and "the data was generated by a program in class 2". Omega's prior will give each of these two hypotheses a non-zero probability.

To cut to the chase: the extra damage to the score caused by class 2 falls off quickly enough, relative to the current posterior probability of class 2, that the total extra loss of score has to be finite.
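The finiteness here is an instance of the standard dominance bound for Bayesian mixtures: the mixture's cumulative log-loss can exceed that of any single component by at most the negative log of that component's prior weight. A minimal sketch of this (with two made-up next-bit predictors standing in for the two program classes, not Omega's actual M):

```python
import math

def mixture_excess_loss(data, p1, p2, w1=0.5):
    """Cumulative log-loss (in bits) of the Bayesian mixture of two
    predictors, minus the cumulative log-loss of predictor 1 alone.

    p1, p2: functions mapping an observed bit to the probability the
    predictor assigned to that bit. w1: prior weight on predictor 1.
    """
    w2 = 1.0 - w1
    loss_mix = 0.0
    loss_1 = 0.0
    for bit in data:
        q1 = p1(bit)            # prob predictor 1 gave the observed bit
        q2 = p2(bit)            # prob predictor 2 gave the observed bit
        q_mix = w1 * q1 + w2 * q2
        loss_mix += -math.log2(q_mix)
        loss_1 += -math.log2(q1)
        # Bayesian update: reweight the two hypotheses by their likelihoods.
        w1, w2 = w1 * q1 / q_mix, w2 * q2 / q_mix
    return loss_mix - loss_1
```

However long the sequence, the excess never exceeds -log2(w1) bits (here, 1 bit for a 50/50 prior): the posterior weight on the mismatched class shrinks fast enough that its per-step damage to the mixture's score sums to a finite total, which is exactly the comment's point.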

Comment author: Vladimir_M 25 February 2011 01:10:19AM 0 points

I see! That's a very good intuitive explanation, thanks for writing it down.