Persons A and B each hold a belief about proposition X.
Person A has purposively sought out, and updated on, evidence related to X since childhood.
Person B has sat on her couch and played video games.
Yet both A and B have arrived at the same degree-of-belief in proposition X.
Does the Bayesian framework equip its adherents with an adequate account of why Person A should be more confident in her conclusion than Person B?
The only viable answer I can think of is that every reasoner should weight every conclusion by some measure of epistemic confidence and re-normalize. But I have not yet encountered such a pervasive account of confidence measurement from leading Bayesian theorists.
If X is just a binary proposition that can be true or false once and for all, and A and B have arrived at the same degree-of-belief, they are equally confident. A has updated on evidence related to X since childhood and found that it is evenly balanced in both directions. The only sense in which A is "more confident" than B is that A has already seen a lot of evidence, so she won't update her conclusion upon seeing the same evidence again; to B, on the other hand, all of that evidence is new.
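To make the "won't update again" point precise: conditioning twice on the same evidence E is idempotent, since the conjunction E-and-E is just E:

$$P(X \mid E, E) = \frac{P(X \wedge E \wedge E)}{P(E \wedge E)} = \frac{P(X \wedge E)}{P(E)} = P(X \mid E).$$

So once A has conditioned on a body of evidence, re-encountering it leaves her credence where it is, while the same evidence would move B.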
Things get more interesting if X is some sort of random variable. Say we have a bag of black and white marbles. A has seen people draw from the bag 100 times, and 50 of those draws came up white. B only knows the general idea. Both of them now expect a white marble with 50% probability. But each of them actually has a probability distribution over the fraction of white marbles in the bag. The mean is 1/2 for both, but the distribution is flat for B and sharply peaked at 1/2 for A. That is what determines how confident they are. If C comes along and says "well, I drew a white marble", then B will update to a new distribution with mean 2/3 (Laplace's rule of succession applied to a single draw), but A's distribution will barely shift at all.
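A minimal sketch of the marble example in Python, assuming the standard Beta-Binomial model: B's flat distribution is a uniform Beta(1, 1) prior, and A's sharply peaked one is the Beta(51, 51) posterior after 50 white marbles in 100 draws. The names and the choice of conjugate prior are mine, for illustration:

```python
from scipy.stats import beta

# Beta(w, b) over the fraction of white marbles in the bag.
# B's flat prior: Beta(1, 1).  A's posterior after 50/100 white: Beta(51, 51).
credences = {"A": (51, 51), "B": (1, 1)}

for name, (w, b) in credences.items():
    print(name, "mean:", beta.mean(w, b), "std:", beta.std(w, b))
# Both means are 0.5, but A's distribution is far more sharply peaked.

# C reports drawing a white marble: conditioning adds 1 to the white count.
for name, (w, b) in credences.items():
    print(name, "posterior mean:", beta.mean(w + 1, b))
# A: ~0.505 (barely shifts); B: ~0.667 (mean 2/3, as above).
```

Same point estimate, very different resilience: the spread of the distribution, not its mean, is doing the work.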
The example of stochastic evidence is indeed interesting. But I find myself stuck on the first example.
If a new reasoner C were to update P_C(X) based on the testimony of A, and had an extremely high degree of confidence in A's ability to form correct opinions, he would presumably gravitate strongly towards P_A(X).
Alternatively, suppose C is going to update P_C(X) based on the testimony of B. Further, C has evidence of B's apathetic proclivities. Therefore, he would presumably gravitate only weakly towards P_B(X).
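One toy way to formalize "gravitating towards" a testifier's credence is linear opinion pooling, where a trust weight controls how far C moves from his own prior towards the reported value. The function name, weights, and credences below are illustrative assumptions, not anything from the thread:

```python
def pool(prior: float, reported: float, trust: float) -> float:
    """Linear opinion pool: move from C's own credence towards the
    testifier's reported credence by a fraction `trust` in [0, 1]."""
    return (1 - trust) * prior + trust * reported

p_c = 0.3  # C's own prior in X (illustrative)
p_a = 0.8  # A's reported credence
p_b = 0.8  # B's reported credence (same number, different pedigree)

print(pool(p_c, p_a, trust=0.9))  # diligent A: C lands at 0.75
print(pool(p_c, p_b, trust=0.1))  # couch-bound B: C stays at 0.35
```

On this picture, the extra information about the testifier's epistemic history shows up as the trust weight, which is exactly the kind of "measure of epistemic confidence" the original question asks after.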
The above account may be shown to be confused. But if it is not, why can C update on evidence of informed belief, while A and B are precluded from reflecting on their own testimony in the same way? And if such introspective activity is normatively legitimate, should they not strive to perform it consistently?