Timwi comments on Bayesianism in the face of unknowns - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Since the only thing that matters (to your expected value) is the overall probability of heads vs. tails, the problem is a simple one of parameter estimation. You start with a distribution over p (determined by the principle of maximum entropy, probably just uniform in this case) and then update it with new evidence (for example, P(p=0.01) drops almost to 0 as soon as you see a head). After 100 heads you get a distribution sharply spiked at p=1. And yes, this is optimal.
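The updating described above can be sketched numerically. This is a minimal illustration, assuming a discretized grid of candidate values for p (the comment doesn't specify a representation); the grid size and helper name are my own choices:

```python
def update(posterior, heads):
    """Apply Bayes' rule to a grid posterior over p after one flip."""
    # Likelihood of the observed flip at each candidate value of p.
    unnorm = [w * (p if heads else 1 - p) for p, w in posterior]
    total = sum(unnorm)
    return [(p, w / total) for (p, _), w in zip(posterior, unnorm)]

grid = [i / 100 for i in range(101)]            # candidate biases p
posterior = [(p, 1 / len(grid)) for p in grid]  # uniform (max-entropy) prior

# One head already pushes P(p = 0.01) down near 0...
posterior = update(posterior, heads=True)

# ...and after 100 heads the distribution is sharply spiked at p = 1.
for _ in range(99):
    posterior = update(posterior, heads=True)

mode = max(posterior, key=lambda pw: pw[1])[0]
print(mode)  # grid point carrying the most posterior mass
```

After 100 heads the mode sits at the top of the grid, matching the "very spiky function at 1" in the comment.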
For any finite amount of data you won't perfectly break even using a Bayesian method, but it's better than all the alternatives, as long as you don't leave out any of the data.
That sounds pretty much the same as what I said above.
Yup. Except maybe with a little more confidence that Bayes' rule applies here in this specific way: updating the probability distribution over p at each point.