That beta distribution will have more built-in uncertainty if it is based on a sample size of 100 rather than a sample size of 1.000.000, but that's the only difference (right?). In the Bayesian update they still carry the same weight. Isn't this unfair to the large-sample likelihood distribution? Shouldn't it have more weight in the Bayesian update?
Would a solution be to make a Bayesian update for each individual observation of a faulty/not-faulty product from machine x? Curiously, this would seem to move the problem from a mathematical analysis to a brute-force computational task (unless all that Bayesian updating can be modelled neatly).
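For what it's worth, with a beta prior the per-observation updating can in fact be modelled neatly: conjugacy means updating once per observation gives exactly the same posterior as a single batch update that just adds the total counts to the prior parameters. A minimal sketch (the observation sequence is made up for illustration):

```python
# Sequential vs. batch beta-binomial updating.
# With a beta(a, b) prior over the defect rate, each observation updates:
#   faulty     -> (a + 1, b)
#   not faulty -> (a, b + 1)

def update_once(a, b, faulty):
    """One Bayesian update for a single faulty/not-faulty observation."""
    return (a + 1, b) if faulty else (a, b + 1)

a, b = 1, 1  # uniform beta(1, 1) prior
observations = [True, False, False, False, True, False]  # hypothetical data
for obs in observations:
    a, b = update_once(a, b, obs)

# Batch update: add the total counts to the prior parameters in one step.
k = sum(observations)        # number of faulty items
n = len(observations)
batch = (1 + k, 1 + (n - k))

print((a, b))   # (3, 5)
print(batch)    # (3, 5) -- identical to the sequential result
```

So the brute-force computation collapses back into simple arithmetic: only the counts matter, not the order in which the updates are applied.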
(Note: I use the American radix point, except in quotes, where I preserve loldrup's.)
That beta distribution will have more built in uncertainty if based on a sample size of 100 rather than a sample size of 1.000.000, but that's the only difference (right?).
Remember that the posterior is the combination of the prior and the likelihood, weighted by the precision of each. The beta(1,1) prior (the famous 'uniform' prior) gives us the estimate that 50% of the material a machine outputs is going to be defective. If the true rate is 5%, and we somehow get the...
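The precision-weighting can be made concrete: two beta posteriors with the same 5% mean but different sample sizes differ enormously in spread, and that spread is exactly the weight the posterior carries into further updates. A quick sketch (the sample sizes and defect rate are illustrative, not from the Wikipedia article):

```python
from math import sqrt

def beta_posterior(prior_a, prior_b, faulty, total):
    """Update a beta(prior_a, prior_b) prior with `faulty` defects in `total` draws."""
    return prior_a + faulty, prior_b + (total - faulty)

def beta_sd(a, b):
    """Standard deviation of a beta(a, b) distribution."""
    return sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

small = beta_posterior(1, 1, 5, 100)             # 5% defective in 100 samples
large = beta_posterior(1, 1, 50_000, 1_000_000)  # 5% defective in 1,000,000 samples

print(beta_sd(*small))  # roughly 0.023
print(beta_sd(*large))  # roughly 0.0002
```

Both distributions peak near 5%, but the large-sample one is about a hundred times narrower, which is what "weighted by the precision of each" cashes out to.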
In the introductory example in the Wikipedia article on Bayes' theorem, they start out with a prior distribution for P(machine_ID | faulty_product)* and then update it using a likelihood distribution P(faulty_product | machine_ID) to obtain a posterior distribution for P(machine_ID | faulty_product).
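For concreteness, that kind of machine example can be sketched as follows (the output shares and defect rates below are the usual textbook numbers, not necessarily the article's exact figures):

```python
# "Which machine produced this faulty item?" -- a Bayes' theorem sketch.
prior = {'A': 0.2, 'B': 0.3, 'C': 0.5}          # P(machine_ID): share of output
likelihood = {'A': 0.05, 'B': 0.03, 'C': 0.01}  # P(faulty_product | machine_ID)

# Bayes' theorem: P(machine | faulty) = P(faulty | machine) * P(machine) / P(faulty)
joint = {m: likelihood[m] * prior[m] for m in prior}
evidence = sum(joint.values())                  # P(faulty_product)
posterior = {m: joint[m] / evidence for m in prior}

print(posterior)  # machine A is now the most likely culprit
```

Note that each likelihood enters the update as a single point value (0.05, 0.03, 0.01); nothing in the formula records how many samples that value was estimated from, which is exactly the gap the question is about.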
How did they come up with the likelihood distribution? Maybe they sampled 100 products from each machine and counted the number of faulty products in each sample. Maybe they sampled 1.000.000 products from each machine...
We don't know which sample size was used: the likelihood distribution doesn't reveal it. Thus it doesn't influence the weight of the Bayesian update. But shouldn't it? Uncertain likelihood distributions should have a small influence, and precise ones a large influence. How do I make the Bayesian update reflect this?
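One standard way to get this behaviour is to carry the sample counts through the update instead of only the estimated rate: parameterise each machine's defect rate as a beta distribution whose parameters record the counts, and then a distribution backed by 1.000.000 samples resists new evidence far more than one backed by 100. A sketch with made-up numbers:

```python
def combine(prior, data_counts):
    """Beta-binomial update: add observed (faulty, total) counts to a beta prior."""
    a, b = prior
    k, m = data_counts  # k faulty out of m new observations
    return a + k, b + (m - k)

def mean(a, b):
    """Mean of a beta(a, b) distribution."""
    return a / (a + b)

# Two beta distributions with the same 5% mean but different effective
# sample sizes (hypothetical numbers):
weak   = (1 + 5, 1 + 95)            # estimated from 100 samples
strong = (1 + 50_000, 1 + 950_000)  # estimated from 1,000,000 samples

new_data = (30, 100)  # 30 faulty in 100 fresh observations (a 30% rate)

print(mean(*combine(weak, new_data)))    # pulled noticeably toward 0.30
print(mean(*combine(strong, new_data)))  # barely moves from 0.05
```

The effective sample size a + b acts as the weight loldrup is asking for: the point estimate alone throws that information away, while the count parameterisation keeps it.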
I read the links provided by somervta in the 'Error margins' discussion from yesterday, but I'm not skillful enough to adapt them to this example.
* Technically they just make the prior distribution a clone of the distribution P(machine_ID), but I like to keep the identity across the Bayesian update, so I gave the prior and the posterior distributions the same form: P(machine_ID | faulty_product).