Suppose you have a property Q which certain objects may or may not have. You've seen many of these objects; you know the prior probability P(Q) that an object has this property.
You have two independent measurements of object O, each of which assigns a probability that Q(O) (that O has property Q). Call these two independent probabilities A and B.
What is P(Q(O) | A, B, P(Q))?
To put it another way, expert A has opinion O(A) = A, which asserts P(Q(O)) = A = .7, and expert B says P(Q(O)) = B = .8, and the prior P(Q) = .4, so what is P(Q(O))? The correlation between the opinions of the experts is unknown, but probably small. (They aren't human experts.) I face this problem all the time at work.
You can see that the problem isn't solvable without the prior P(Q), because if the prior P(Q) = .9, then two experts assigning P(Q(O)) < .9 should result in a probability lower than the lowest opinion of those experts. But if P(Q) = .1, then the same estimates by the two experts should result in a probability higher than either of their estimates. But is it solvable or at least well-defined even with the prior?
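To make that concrete, suppose the two opinions are combined by adding their evidence in log-odds, which assumes the experts are conditionally independent given Q (a sketch of that combination appears below): with A = .7, B = .8, and a prior of .9, the combined estimate comes out around .51, below both experts; with the same A and B and a prior of .1, it comes out around .99, above both.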
The experts both know the prior, so if you had only expert A saying P(Q(O)) = .7, the answer must be .7. Expert B's opinion must then revise the probability upwards if B > P(Q), and downwards if B < P(Q).
When expert A says O(A) = A, she probably means, "If I consider all the n objects I've seen that looked like this one, nA of them had property Q."
One approach is to add up the bits of information each expert gives, with positive bits for evidence that Q(O) and negative bits for evidence that not(Q(O)).
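Here is a minimal sketch of that bits-adding approach, assuming the experts are conditionally independent given Q (the function names are just for illustration). Each expert's evidence is its log-odds minus the prior's log-odds; summing the evidence and adding it back onto the prior's log-odds gives the combined estimate. Measured in base-2 logs these sums are literally bits; natural logs are used here only for convenience.

    import math

    def logit(p):
        # log-odds of a probability
        return math.log(p / (1.0 - p))

    def combine(prior, *expert_probs):
        # Sum each expert's evidence (log-odds relative to the prior),
        # assuming the experts are conditionally independent given Q,
        # then convert back to a probability.
        evidence = sum(logit(p) - logit(prior) for p in expert_probs)
        x = logit(prior) + evidence
        return 1.0 / (1.0 + math.exp(-x))

    print(combine(0.4, 0.7, 0.8))   # ~0.93 for the numbers above
    print(combine(0.9, 0.7, 0.8))   # ~0.51: a high prior pulls the estimate below both experts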
There are unsupervised methods, if you have unlabeled data, which I suspect you do. I don't know about standard methods, but here are a few simple ideas off the top of my head:
First, you can check whether A is consistent with the prior by checking that the average probability it predicts over your data matches your prior for Q. If not, there are a lot of possible failure modes, such as your new data being different from the data used to set your prior, or A being wrong or miscalibrated. If I trusted the prior a lot and wanted to fix the problem, I would scale the evidence (the odds ratio of A from the prior) by a constant.
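A rough sketch of that check and of the constant-scaling fix (the constant k is left as a knob here, since how to choose it depends on how much you trust the prior; the names are illustrative):

    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def calibration_gap(expert_probs, prior):
        # Average predicted probability over the (unlabeled) data minus the prior.
        # Near zero means the expert is consistent with the prior.
        return sum(expert_probs) / len(expert_probs) - prior

    def rescale(p, prior, k):
        # Scale the expert's evidence (its log-odds relative to the prior)
        # by a constant k, then convert back to a probability.
        # k = 1 leaves the prediction unchanged; k < 1 shrinks it toward the prior.
        return sigmoid(logit(prior) + k * (logit(p) - logit(prior)))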
You can apply the same test to the joint prediction. If A and B each produce the right frequency, but their joint prediction does not, then they are correlated. It is probably worth doing this, as a check on your assumption of independence. You might try to correct for this correlation by scaling the joint evidence, the same way I suggested scaling a single test. (Note that if A=B, scaling is the correct answer.)
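The same pattern works for the joint prediction; here is a sketch, again treating the shrinkage constant k as a knob (when A = B, k = .5 is exactly right, because the second expert merely repeats the first one's evidence):

    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def joint_prediction(a, b, prior, k=1.0):
        # Add both experts' evidence (log-odds relative to the prior),
        # shrunk by k to compensate for correlation between them.
        evidence = (logit(a) - logit(prior)) + (logit(b) - logit(prior))
        return sigmoid(logit(prior) + k * evidence)

    def joint_calibration_gap(a_probs, b_probs, prior, k=1.0):
        # If A and B each pass the single-expert check but this is far
        # from zero, the experts are probably correlated.
        preds = [joint_prediction(a, b, prior, k) for a, b in zip(a_probs, b_probs)]
        return sum(preds) / len(preds) - prior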
But if you have many tests and you correct each pair, it is no longer clear how to combine all of them. One simple answer is to drop tests in highly correlated pairs and assume everything else is independent. To salvage some information rather than dropping tests, you might cluster tests into correlated groups, use scaling to correct within clusters, and assume the clusters are independent.
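A rough sketch of the cluster idea, under the simplifying assumption that experts within a cluster are fully redundant (so the within-cluster scale is just one over the cluster size; a fitted per-cluster constant would be a refinement):

    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    def sigmoid(x):
        return 1.0 / (1.0 + math.exp(-x))

    def combine_clusters(clusters, prior):
        # clusters: a list of lists of expert probabilities for one object,
        # grouped so that correlated experts share a cluster.
        # Within a cluster, average the evidence (the fully-correlated limit);
        # across clusters, assume independence and add the evidence.
        total = 0.0
        for cluster in clusters:
            evidence = sum(logit(p) - logit(prior) for p in cluster)
            total += evidence / len(cluster)
        return sigmoid(logit(prior) + total)

    # e.g. experts A and B correlated with each other, expert C independent of them:
    print(combine_clusters([[0.7, 0.8], [0.6]], prior=0.4))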