Eliezer_Yudkowsky comments on Complexity and Intelligence - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
I don't quite see this. With Solomonoff induction, as with a computable human mathematician, the probability of the next symbol being 0 will approach 0. But I don't see why a Solomonoff inductor using mixtures (that is, evaluating computable probability distributions rather than computable sequences) will ever assign a probability arbitrarily close to 0 of eventually seeing another 0.
Ask the human mathematician, over and over, what their probability of the next symbol being 0 is. They're computable, so this distribution is in the mixture. What other distribution is it necessarily dominated by, in the Solomonoff mixture?
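The dominance argument can be sketched with a toy Bayes mixture. This is a minimal illustration, not Solomonoff induction proper: the two predictors (a "mathematician" whose P(next = 0) shrinks with each 1 seen, and a fair-coin predictor) and their prior weights are invented for the example. The point it shows is that once the mathematician's distribution is a component of the mixture, the mixture's next-symbol prediction cannot fall below the mathematician's own prediction times the mathematician's posterior weight.

```python
# Toy Bayes mixture over computable predictors (hypothetical setup).
# Each predictor maps a history of symbols to P(next symbol = 0 | history).

def mixture_prediction(predictors, priors, history):
    # Posterior weight of each predictor = prior * likelihood of the history.
    likelihoods = []
    for pred in predictors:
        p = 1.0
        for t, sym in enumerate(history):
            p0 = pred(history[:t])
            p *= p0 if sym == 0 else (1 - p0)
        likelihoods.append(p)
    evidence = sum(w * l for w, l in zip(priors, likelihoods))
    posteriors = [w * l / evidence for w, l in zip(priors, likelihoods)]
    # Mixture probability that the next symbol is 0: the posterior-weighted
    # average of the components' predictions, so it dominates each component.
    return sum(post * pred(history)
               for post, pred in zip(posteriors, predictors))

# Two invented predictors for illustration:
mathematician = lambda h: 1.0 / (2 + sum(1 for s in h if s == 1))
coin = lambda h: 0.5

history = [1] * 20
p_mix = mixture_prediction([mathematician, coin], [0.5, 0.5], history)
# After 20 ones, the coin's likelihood (0.5**20) is negligible next to the
# mathematician's, so p_mix hugs the mathematician's own 1/22 from above.
```

Because the mixture assigns the mathematician nonzero prior weight, its joint probability on any sequence is at least that weight times the mathematician's probability, so it can never be driven arbitrarily far below the mathematician's predictions.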
Or are we allowing the human mathematician to have an inconsistent probability distribution, where they say "You'll see another 0 eventually, I'm sure of it, but I'm also pretty sure that, no matter how large a number of 1s I pick, it won't be high enough"? If so, to be fair, we should factor out the symbol for "see another 0 eventually" and just ask the Solomonoff inductor about that separately via some input encoding, the same way we ask the human about it separately.