cousin_it comments on The prior of a hypothesis does not depend on its complexity - Less Wrong

Post author: cousin_it 26 August 2010 01:20PM




Comment author: cousin_it 27 August 2010 09:04:53AM * 1 point

Thanks, this angle looks interesting, and new to me. Could you explain some more how it's supposed to work? For example, if the input is an infinite stream of numbers and my hypothesis is something like "next number will be 42" or "prime-numbered members will be prime", should I pad it with infinite additional entropy?

Comment author: taw 27 August 2010 11:28:57PM 0 points

Normally streams are assumed to be finitely long, with symbols drawn from a finite set, to keep the math neat.

Extending this to infinitely long streams with finite complexity is conceptually fairly straightforward, using a trick very similar to a non-normalizable Bayesian prior (just as a uniform prior over the real numbers is well defined even though it sums to infinity, so it isn't really a probability; likewise the entropy of a molecule when atom locations are unbounded continuous variables), but the math can get considerably messier.
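To illustrate the improper-prior trick mentioned above, here is a minimal sketch (my own example, not from the comment): take a uniform "prior" p(n) ∝ 1 over all nonnegative integers, which is not normalizable on its own, yet the posterior is still well defined whenever the likelihood decays fast enough for the normalizing sum to converge.

```python
# Improper uniform prior over nonnegative integers: p(n) proportional to 1.
# The prior itself cannot be normalized, but posterior(n) = L(n) / sum_m L(m)
# is well defined when the likelihood L is summable.

def posterior(likelihood, n, terms=1000):
    # Approximate the normalizing constant by truncating the infinite sum.
    Z = sum(likelihood(m) for m in range(terms))
    return likelihood(n) / Z

# Hypothetical example likelihood: L(n) = 2^-(n+1), which sums to 1 over n >= 0.
L = lambda n: 0.5 ** (n + 1)
```

With this likelihood, `posterior(L, 0)` comes out to about 0.5: the improper prior drops out, and only the (summable) likelihood shapes the answer.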

Finite or not, streams must be countable.

So just pick some ordering, and for a stream-tester T and a number N the world generator is: `i = 0; for (w in streams) { if (T(w)) i += 1; if (i == N) { print(w); break; } }`. There are multiple ways to choose the iteration order and the encoding for N (the simplest would be a fixed-bit-size number, but obviously you need a more complicated encoding to cover an infinite number of possibilities), yet they're all just a constant away from each other, so you can ignore this detail.
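The generator loop above can be sketched in runnable form. This is a minimal illustration under assumptions of my own choosing: streams are finite binary strings enumerated in length-then-lexicographic order, and the tester `T` is a hypothetical predicate (here, "starts with 101"); nothing about these choices is canonical, as the comment notes.

```python
from itertools import count, product

def streams():
    # Enumerate all finite binary strings in length-then-lex order:
    # "", "0", "1", "00", "01", "10", "11", "000", ...
    for n in count(0):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def world_generator(T, N):
    # Return the N-th stream (1-indexed) accepted by the tester T,
    # mirroring the loop: i = 0; for w in streams: if T(w): i += 1; ...
    i = 0
    for w in streams():
        if T(w):
            i += 1
            if i == N:
                return w

# Hypothetical example tester: accept streams beginning with "101".
T = lambda w: w.startswith("101")
```

For example, `world_generator(T, 1)` returns `"101"` itself, and `world_generator(T, 2)` returns `"1010"`, the next accepted string in the enumeration order. Swapping in a different ordering of `streams()` only shifts which N names which world, which is the constant-factor freedom the comment describes.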