In the introductory example in the Wikipedia article on Bayes' theorem, they start out with a prior distribution for P(machine_ID | faulty_product)* and then update it using a likelihood distribution P(faulty_product | machine_ID) to obtain a posterior distribution for P(machine_ID | faulty_product).
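For concreteness, here is a minimal sketch of that update in Python. The machine shares and fault rates below are made-up placeholders, not the article's actual numbers:

```python
prior = {"A": 0.2, "B": 0.3, "C": 0.5}          # P(machine_ID)
likelihood = {"A": 0.05, "B": 0.03, "C": 0.01}  # P(faulty_product | machine_ID)

# Bayes' theorem: P(machine_ID | faulty_product) ∝ P(machine_ID) * P(faulty_product | machine_ID)
unnormalized = {m: prior[m] * likelihood[m] for m in prior}
evidence = sum(unnormalized.values())            # P(faulty_product)
posterior = {m: p / evidence for m, p in unnormalized.items()}
print(posterior)                                 # P(machine_ID | faulty_product)
```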
How did they come up with the likelihood distribution? Maybe they sampled 100 products from each machine and counted the number of faulty ones in each sample. Maybe they sampled 1,000,000 products from each machine...
We don't know which sample size was used: the likelihood distribution doesn't reveal it. So the sample size doesn't influence the weight of the Bayesian update. But shouldn't it? An uncertain likelihood distribution should have a small influence, and a well-supported one a large influence. How do I make the Bayesian update reflect this?
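One way to do this (a sketch of a standard Beta-Binomial approach, not necessarily what the article or the linked discussion had in mind) is to stop treating each P(faulty_product | machine_ID) as a bare number and instead model each machine's unknown fault rate with a Beta distribution fitted to the hypothetical inspection counts. The counts below are invented for illustration:

```python
# Prior over which machine produced the item: P(machine_ID)
prior = {"A": 0.2, "B": 0.3, "C": 0.5}

# Hypothetical inspection data behind each likelihood:
# (faulty items found, items inspected) -- the sample sizes are exactly
# the information that the bare likelihood numbers throw away.
counts = {"A": (5, 100), "B": (3, 100), "C": (1, 100)}

# Each machine's unknown fault rate gets a Beta(k + 1, n - k + 1) posterior
# (uniform Beta(1, 1) prior on the rate, updated with the binomial counts).
# More inspected items -> a narrower Beta -> a more certain likelihood.
beta_params = {m: (k + 1, n - k + 1) for m, (k, n) in counts.items()}

# For a single observed faulty product, the likelihood entering Bayes'
# theorem is the expected fault rate under each machine's Beta posterior.
expected_rate = {m: a / (a + b) for m, (a, b) in beta_params.items()}

# P(machine_ID | faulty_product) ∝ P(machine_ID) * E[fault rate]
unnormalized = {m: prior[m] * expected_rate[m] for m in prior}
evidence = sum(unnormalized.values())
posterior = {m: p / evidence for m, p in unnormalized.items()}
print(posterior)
```

Note that for a single faulty item only the means of these Betas matter (which is the point the first reply below makes); the widths of the Betas, i.e. the sample sizes, start to matter once you condition on more data or ask how stable the answer is.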
I read the links provided by somervta in the 'Error margins' discussion from yesterday, but I'm not skillful enough to adapt them to this example.
* Technically they just make the prior distribution a clone of the distribution P(machine_ID), but I like to keep the identity across the Bayesian update, so I gave the prior and the posterior distribution the same form: P(machine_ID | faulty_product).
Unless this is trickier than it seems, just the first moment of each distribution should do. (For this reason I disagree that the Wiki article implicitly assumes infinite sample size. The conditional probabilities used in the calculation are the first moments (= constants) of the respective parameter distributions, not the parameters themselves (= random variables).)
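A short way to see this reply's point, writing $\theta_m$ for machine $m$'s unknown fault rate (notation introduced here, not in the article): for a single faulty product, the fault-rate uncertainty integrates out and only the first moments survive,

$$
P(m \mid \text{faulty}) \;=\; \frac{P(m)\,\int_0^1 \theta_m\, p(\theta_m)\, d\theta_m}{\sum_{m'} P(m')\,\int_0^1 \theta_{m'}\, p(\theta_{m'})\, d\theta_{m'}} \;=\; \frac{P(m)\,\mathrm{E}[\theta_m]}{\sum_{m'} P(m')\,\mathrm{E}[\theta_{m'}]}.
$$

The sample size only enters through the shape of $p(\theta_m)$, and for this particular question everything about that shape beyond the mean drops out.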
Yep.
I mostly agree with this. I agree that you only need the first moment of your posterior to calculate what they ask for, but I think that them providing the hyperparameter as a single datapoint is implicitly claiming an inf...