I've been feeling burned out on Overcoming Bias lately: I take too long to write my posts, which cuts into my recovery time, which leaves me feeling more burned out, and so on.
So I'm taking at most a one-week break. I'll post a small batch of rationality quotes each day, so as not to abandon you entirely. I may even post some actual writing if I feel spontaneous, but definitely not for the next two days; I have to enforce this break upon myself.
When I get back, my schedule calls for me to finish up the Anthropomorphism sequence, and then talk about Marcus Hutter's AIXI, which I think is the last brain-malfunction-causing subject I need to discuss. My posts should then hopefully go back to being shorter and easier.
Hey, at least I got through over a solid year of posts without taking a vacation.
Tim:
What is the rationale for considering some machines and not others?
Because we want to measure the information content of the string, not of some crazily complex reference machine; that's why a tiny reference machine is used. In terms of inductive inference, when you say that the bound is infinitely large, what you're saying is that you don't believe in Occam's razor, in which case the whole Bayesian system can get weird. For example, if you have an arbitrarily strong prior belief that most of the world is full of purple chickens from the Andromeda galaxy, Bayes' rule is not going to help you much. What you want is an uninformative prior distribution or, equivalently over computable distributions, a very simple reference machine.
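The Occam-style weighting described above can be sketched numerically: a universal prior weights each hypothesis by roughly 2^-(program length on the reference machine), so an extravagant hypothesis only enters with an astronomically small weight. The hypothesis names and description lengths below are made up purely for illustration; this is a toy sketch, not the actual construction.

```python
# Toy sketch of an Occam-style prior: weight each hypothesis by
# 2^-(description length in bits). Lengths here are illustrative only.
hypotheses = {
    "simple rule": 2,
    "medium rule": 10,
    "purple chickens from Andromeda": 100,
}

# Unnormalized Occam weights, then normalize to a probability distribution.
weights = {h: 2.0 ** -length for h, length in hypotheses.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

# Shorter descriptions get exponentially more prior mass; the 100-bit
# hypothesis starts out with a weight on the order of 2^-100.
```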
Thanks to the rapid convergence of the posterior from a universal prior, that 2^100 factor is small for any moderate amount of data. Just look at the bound equation.
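To see why a 2^100 prior penalty washes out quickly, here is a toy Bayesian update (two made-up hypotheses, not the actual Solomonoff/Hutter bound): the true hypothesis starts with prior odds of 2^-100 against it, but each observed bit it predicts with certainty, versus a fair-coin rival, multiplies the odds in its favor by 2, so it dominates after about 100 bits of data.

```python
# Toy illustration: a hypothesis carrying a 2^-100 prior penalty
# overtakes a fair-coin rival after ~100 bits of data.
log2_odds = -100.0  # log2 prior odds for H_true vs H_rival

# H_true predicts each observed bit with probability 1; H_rival assigns
# probability 1/2. Each bit therefore adds 1 to the log2 posterior odds.
bits_needed = 0
while log2_odds < 0:
    log2_odds += 1.0
    bits_needed += 1

print(bits_needed)  # bits of data until H_true reaches even posterior odds
```

So the constant that looks enormous in isolation is overcome by a correspondingly small amount of data, which is the point of the convergence bound.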
These things are not glossed over. Read the mathematical literature on the subject, it's all there.