AlephNeil comments on Does Solomonoff always win? - Less Wrong
Let's regard Omega's prior as being given by M(x) as shown here. Now let's divide our monotone UTM's programs into two classes, 1 and 2.
You can imagine Omega as a Bayesian reasoner trying to decide between the two hypotheses "the data was generated by a program in class 1" and "the data was generated by a program in class 2". Omega's prior will give each of these two hypotheses a non-zero probability.
To cut to the chase: the "extra damage" to the score caused by class 2 falls off quickly enough, relative to the current posterior probability of class 2, that the total extra loss of score must be finite.
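A small sketch may make this concrete (my own toy setup, not from the original post, with hypothetical parameters `p_true` and `p_wrong` standing in for the two program classes): a Bayesian mixture of two hypotheses incurs extra cumulative log-loss over the better hypothesis of at most -log(prior weight), so the extra loss is finite no matter how long the sequence runs.

```python
import math
import random

random.seed(0)

# Two stand-in "classes": class 1 predicts bits with the true probability,
# class 2 predicts with a wrong probability. Both get non-zero prior weight,
# as in Omega's prior over the two hypotheses.
p_true, p_wrong = 0.7, 0.3
prior1, prior2 = 0.5, 0.5

post1, post2 = prior1, prior2
loss_mix = 0.0   # cumulative log-loss of the Bayesian mixture
loss_best = 0.0  # cumulative log-loss of class 1 (the true model)

for _ in range(10_000):
    x = 1 if random.random() < p_true else 0
    # Each class's predicted probability of the observed bit:
    q1 = p_true if x else 1 - p_true
    q2 = p_wrong if x else 1 - p_wrong
    q_mix = post1 * q1 + post2 * q2
    loss_mix += -math.log(q_mix)
    loss_best += -math.log(q1)
    # Bayes update: class 2's posterior falls off as its predictions fail.
    post1, post2 = post1 * q1 / q_mix, post2 * q2 / q_mix

extra_loss = loss_mix - loss_best
# The extra loss is bounded by -log(prior on class 1) = log 2, hence finite.
assert extra_loss <= -math.log(prior1) + 1e-9
print(f"extra log-loss: {extra_loss:.4f} <= {-math.log(prior1):.4f}")
```

The bound follows because the mixture assigns any sequence at least prior1 times the probability class 1 assigns it, so the mixture's log-loss can exceed class 1's by at most -log(prior1), independent of sequence length.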
I see! That's a very good intuitive explanation, thanks for writing it down.