Vladimir_M comments on Does Solomonoff always win? - Less Wrong

11 points | Post author: cousin_it | 23 February 2011 08:42PM




Comment author: Vladimir_M 24 February 2011 06:40:06AM * 2 points

That seems to be equivalent to saying that Omega's probability estimates will approach fifty-fifty fast enough. So if I understand correctly, it's a provable property of Solomonoff induction that faced with a "maximally hostile" bit-string, its probability estimates will converge to fifty-fifty fast enough that it can't blunder worse than a random guesser by more than a constant term in the log(P) game? If true, I find that quite fascinating.
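The constant-term bound being discussed follows from the dominance property of the universal prior. A sketch, writing $M$ for the universal mixture, $\lambda$ for the uniform measure (the "random guesser"), and $w_\lambda > 0$ for its weight in the mixture:

```latex
% M dominates every measure in the mixture, in particular the uniform
% measure \lambda(x_{1:n}) = 2^{-n}:
M(x_{1:n}) \;\ge\; w_\lambda \,\lambda(x_{1:n}) \;=\; w_\lambda\, 2^{-n}
\quad\Longrightarrow\quad
-\log_2 M(x_{1:n}) \;\le\; n + \log_2 \tfrac{1}{w_\lambda}.
```

A random guesser pays exactly $n$ bits of log-loss on any length-$n$ string, so even on a maximally hostile string the Solomonoff predictor's deficit is at most the constant $\log_2(1/w_\lambda)$.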

Comment author: AlephNeil 24 February 2011 09:59:56PM 2 points

Let's regard Omega's prior as being given by M(x) as shown here. Now let's divide our monotone UTM's programs into the following classes.

  1. Ones that just say "Print the following: ... "
  2. Every other program.

You can imagine Omega as a Bayesian reasoner trying to decide between the two hypotheses "the data was generated by a program in class 1" and "the data was generated by a program in class 2". Omega's prior will give each of these two hypotheses a non-zero probability.

To "cut to the chase", what happens is that the "extra damage" to the score caused by "class 2" falls off quickly enough, relative to the current posterior probability of "class 2", that the extra loss of score has to be finite.
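A toy numerical version of this argument, assuming just two hypotheses in the mixture (a fair coin standing in for class 1 and a biased coin standing in for class 2) rather than the full mixture over programs — an illustration only, not Solomonoff induction itself:

```python
import math

def losses(bits, bias=0.9):
    """Cumulative log2-loss of a two-hypothesis Bayesian mixture vs. a
    fair-coin predictor, on an arbitrary bit sequence."""
    # Posterior weights: class 1 = fair coin, class 2 = coin biased toward 1.
    w1, w2 = 0.5, 0.5
    mix_loss = coin_loss = 0.0
    for b in bits:
        p2 = bias if b == 1 else 1.0 - bias        # class-2 probability of b
        p_mix = w1 * 0.5 + w2 * p2                 # mixture probability of b
        mix_loss += -math.log2(p_mix)
        coin_loss += 1.0                           # fair coin: 1 bit per symbol
        w1, w2 = w1 * 0.5 / p_mix, w2 * p2 / p_mix # Bayes update, normalized
    return mix_loss, coin_loss

# A "maximally hostile" input for class 2: all zeros.
mix, coin = losses([0] * 1000)
extra = mix - coin
```

The extra loss stays below -log2(prior weight of class 1) = 1 bit no matter how long the hostile sequence runs: class 2's posterior falls off fast enough that the total damage it can do is finite.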

Comment author: Vladimir_M 25 February 2011 01:10:19AM 0 points

I see! That's a very good intuitive explanation, thanks for writing it down.

Comment author: cousin_it 24 February 2011 09:23:04AM * 2 points

Yes, it's a provable property of Solomonoff induction. It was a surprise to me too when I worked it out a few days ago (using Shane Legg's paper linked from the post), but users AlephNeil and paulfchristiano didn't seem surprised, and I guess Eliezer always considered it obvious.

Comment author: Vladimir_M 24 February 2011 08:58:30PM * 0 points

But then it seems to me that Game 3 is unfair, in that it artificially amplifies infinitesimal errors. Faced with a hostile input, a Solomonoff predictor will correctly converge to saying "I don't know," i.e. producing p(0) ~ p(1) ~ 1/2. However, a game that forces it to nevertheless make binary predictions in this situation will force an answer based on the infinitesimal differences between the calculated probabilities and 1/2, and moreover, the correct answers are contrived so that these infinitesimal differences always point in the wrong direction.

If you instead let the Solomonoff predictor just honestly say "I don't know," as in the first two games, the problem disappears.

Comment author: JGWeissman 24 February 2011 09:07:47PM 1 point

> If you instead let the Solomonoff predictor just honestly say "I don't know," as in the first two games, the problem disappears.

True, though sometimes in real life you can't just say "I don't know"; you have to choose an action. Game 3 is unfair, but real life can be unfair too.