Today's post, Inductive Bias, was originally published on April 8, 2007. A summary (from the LW wiki):
Inductive bias is a systematic direction in belief revisions. The same observations could be evidence for or against a belief, depending on your prior. Inductive biases are more or less correct depending on how well they correspond with reality, so "bias" might not be the best description.
Discuss the post here (rather than in the comments of the original post).
This post is part of a series rerunning Eliezer Yudkowsky's old posts so those interested can (re-)read and discuss them. The previous post was Debiasing as Non-Self-Destruction, and you can use the sequence_reruns tag or RSS feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it, posting the next day's sequence reruns post, summarizing forthcoming articles on the wiki, or creating exercises. Go here for more details, or to discuss the Sequence Reruns.
I saw this too; I think he means 'maximum entropy over the outcomes (ball draws)' rather than 'maximum entropy over the parameters in your model'. The intuition is that if you posit no structure to your observations, making an observation tells you nothing about your future observations. That interpretation doesn't quite fit, though, since he specified that the balls were drawn with a known probability.
One of the things EY made me realize is that any modeling is part of the prior. Specifically, a model is a prior about how observations are related. For example, part of your model might be 'balls are drawn independently, each red with a constant but unknown probability'. If instead you had a maximum entropy prior over the draws themselves, you would be saying something more like 'ball draws are completely unrelated and determined by completely separate processes'.
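To make that concrete, here is a minimal Python sketch contrasting the two priors. The setup and function names are my own illustration, not from the post: under a maximum-entropy prior over the draws themselves the predictive probability of red never moves, while under the usual Beta-Bernoulli parameter model with a uniform prior over p it follows Laplace's rule of succession.

```python
# A minimal sketch of the two priors discussed above; names are illustrative.
from fractions import Fraction

def pred_max_entropy(num_red, num_draws):
    """Max-entropy prior over the draws themselves: every red/white
    sequence is equally likely, so the history is irrelevant and the
    predictive probability of red never moves off 1/2."""
    return Fraction(1, 2)

def pred_beta_bernoulli(num_red, num_draws):
    """Parameter model: draws are i.i.d. with unknown probability p of
    red, uniform (Beta(1,1)) prior over p. The posterior predictive is
    Laplace's rule of succession, (r + 1) / (n + 2)."""
    return Fraction(num_red + 1, num_draws + 2)

# After seeing 10 red balls in 10 draws:
print(pred_max_entropy(10, 10))     # 1/2   -- the observations taught us nothing
print(pred_beta_bernoulli(10, 10))  # 11/12 -- the observations raised our estimate
```

The second function is doing exactly what the comment describes: the modeling assumption 'draws share a common unknown p' is what lets one observation bear on the next.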
This still confuses me. 'Ball draws are completely unrelated and determined by completely separate processes' still contains information about how the balls were generated. It seems that if you observed a string of 10 red balls, that hypothesis would lose probability mass to the hypothesis 'ball draws are red with p > 0.99'.
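A rough numerical check of this, using point hypotheses as stand-ins for the two models (the specific numbers are my own illustration): ten reds in a row produce a likelihood ratio of roughly 900:1 in favor of the mostly-red hypothesis, so even a heavily skeptical prior on it gets pulled up fast.

```python
# Point hypotheses standing in for the two models; numbers are illustrative.
h_unrelated = 0.5    # "draws carry no information": each draw red w.p. 1/2
h_mostly_red = 0.99  # stand-in for "red with p > 0.99"

n_reds = 10
lik_unrelated = h_unrelated ** n_reds    # ~0.000977
lik_mostly_red = h_mostly_red ** n_reds  # ~0.904

# Bayes factor in favor of the mostly-red hypothesis:
print(lik_mostly_red / lik_unrelated)    # ~926

# Even a skeptical 1000:1 prior against 'mostly red' is nearly evened out:
prior_odds = 1 / 1000
posterior_odds = prior_odds * (lik_mostly_red / lik_unrelated)
print(posterior_odds)  # ~0.93, i.e. roughly even odds after just 10 reds
```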
It seems like the problem only happens if you include an unjustified assumption in your 'prior' and then refuse to consider the possibility that you were wrong.
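One way to see the 'refuse to consider you were wrong' failure mode numerically (again just a sketch, with assumed numbers): a hypothesis assigned exactly zero prior mass can never recover it, no matter how strong the evidence, whereas any nonzero mass can grow.

```python
# A dogmatic prior never updates; a tiny nonzero prior can. Illustrative numbers.
def posterior(prior_alt, lik_alt, lik_null):
    """Posterior probability of the alternative after one batch of data."""
    num = prior_alt * lik_alt
    return num / (num + (1 - prior_alt) * lik_null)

lik_alt, lik_null = 0.99 ** 10, 0.5 ** 10  # ten red balls, as above

print(posterior(0.0, lik_alt, lik_null))   # 0.0   -- zero prior mass: stuck forever
print(posterior(1e-6, lik_alt, lik_null))  # ~9e-4 -- nonzero mass grows with evidence
```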
My prior information is that every time I have found something Eliezer said confusing, it has eventually turned out that I was mistaken. I expect this to remain true, but there's a slight possibility that I am wrong.