Timwi comments on Bayesianism in the face of unknowns - Less Wrong

1 Post author: rstarkov 12 March 2011 08:54PM




Comment author: Manfred 12 March 2011 11:30:45PM *  1 point [-]

Since the only thing that matters (to your expected value) is the overall probability of heads vs. tails, the problem is a simple one of parameter estimation. You start with a prior distribution over p (determined by this thing called the principle of maximum entropy, probably just uniform in this case) and then update it with new evidence (for example, the density at p = 0.01 drops almost to 0 as soon as you see a head). After 100 heads you get a distribution very sharply spiked near 1. And yes, this is optimal.
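A minimal sketch of that update, assuming the uniform prior is written as a Beta(1, 1) distribution and using the standard Beta-Bernoulli conjugate update (the function names here are mine, not from the comment):

```python
import math

def beta_posterior(a, b, heads, tails):
    """Conjugate update: a Beta(a, b) prior plus Bernoulli coin-flip
    observations gives a Beta(a + heads, b + tails) posterior."""
    return a + heads, b + tails

def beta_pdf(p, a, b):
    """Density of Beta(a, b) at p (0 < p < 1), via the log-gamma function."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(p) + (b - 1) * math.log(1 - p))

# Uniform (maximum-entropy) prior over p is Beta(1, 1): density 1 everywhere.
a, b = 1, 1

# One head: the density at p = 0.01 collapses from 1 to 2 * 0.01 = 0.02.
a1, b1 = beta_posterior(a, b, heads=1, tails=0)
print(beta_pdf(0.01, a1, b1))

# After 100 straight heads the posterior is Beta(101, 1), which is
# sharply spiked near p = 1 and nearly zero in the middle.
a100, b100 = beta_posterior(a, b, heads=100, tails=0)
print(beta_pdf(0.99, a100, b100))  # large
print(beta_pdf(0.5, a100, b100))   # vanishingly small
```

The Beta(101, 1) posterior has density 101·p^100, which is exactly the "very spiky function at 1" described above.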

For any finite amount of data you won't perfectly break even using a Bayesian method, but it's better than all the alternatives, as long as you don't leave out any of the data.

Comment author: Timwi 12 March 2011 11:34:11PM 0 points [-]

That sounds pretty much the same as what I said above.

Comment author: Manfred 13 March 2011 03:37:38AM 0 points [-]

Yup. Except maybe with a little more confidence that Bayes' rule applies here, specifically by updating the probability distribution over p at each observation.