Suppose you have a property Q which certain objects may or may not have. You've seen many of these objects; you know the prior probability P(Q) that an object has this property.
You have 2 independent measurements of object O, which each assign a probability that Q(O) (O has property Q). Call these two independent probabilities A and B.
What is P(Q(O) | A, B, P(Q))?
To put it another way: expert A has opinion O(A) = A, asserting P(Q(O)) = A = .7; expert B says P(Q(O)) = B = .8; and the prior is P(Q) = .4. What is P(Q(O))? The correlation between the experts' opinions is unknown, but probably small. (They aren't human experts.) I face this problem all the time at work.
You can see that the problem isn't solvable without the prior P(Q), because if the prior P(Q) = .9, then two experts assigning P(Q(O)) < .9 should result in a probability lower than the lowest opinion of those experts. But if P(Q) = .1, then the same estimates by the two experts should result in a probability higher than either of their estimates. But is it solvable or at least well-defined even with the prior?
The experts both know the prior, so if you had only expert A saying P(Q(O)) = .7, the answer would have to be .7. Expert B's opinion must then revise the probability upwards if B > P(Q), and downwards if B < P(Q).
When expert A says O(A) = A, she probably means, "If I consider all the n objects I've seen that looked like this one, nA of them had property Q."
One approach is to add up the bits of information each expert gives, with positive bits for indications that Q(O) and negative bits for indications that not(Q(O)).
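A minimal sketch of this adding-up-bits idea (assuming the experts' evidence is independent conditional on Q; the function name and structure are mine, not a quote of anyone's procedure):

```python
import math

def combine_bits(prior, estimates):
    """Combine expert probability estimates by summing evidence in bits."""
    # Log-odds (in bits) of a probability.
    log_odds = lambda p: math.log2(p / (1 - p))
    # Each expert contributes (their log-odds minus the prior's log-odds)
    # bits: positive if they push toward Q(O), negative if toward not(Q(O)).
    total = log_odds(prior) + sum(log_odds(p) - log_odds(prior) for p in estimates)
    # Convert total log-odds back to a probability.
    return 1 / (1 + 2 ** -total)

# The numbers from the question: prior .4, experts at .7 and .8
print(combine_bits(0.4, [0.7, 0.8]))  # ≈ 0.933 (posterior odds 14:1)
```

Note that a single expert contributes exactly her own estimate: combine_bits(.4, [.7]) returns .7, matching the observation above that one expert's opinion should stand on its own.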
The way I interpreted the claim of independence is that the verdicts of the experts are not correlated once you conditionalize on Q. If that is the case, then DanielLC's procedure gives the correct answer.
To see this more explicitly, suppose that expert A's verdict is based on evidence Ea and expert B's verdict is based on evidence Eb. The independence assumption is that P(Ea & Eb|Q) = P(Ea|Q) * P(Eb|Q), and likewise conditional on not-Q (both are needed for the likelihood ratios to multiply).
Since we know the posteriors P(Q|Ea) and P(Q|Eb), and we know the prior of Q, we can calculate the likelihood ratios for Ea and Eb. The independence assumption allows us to multiply these likelihood ratios together to obtain a likelihood ratio for the combined evidence Ea & Eb. We then multiply this likelihood ratio with the prior odds to obtain the correct posterior odds.
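Concretely, with the numbers from the question (prior .4, experts at .7 and .8), the procedure looks like this (a sketch under the conditional-independence assumption; the helper names are mine):

```python
def combine(prior, posteriors):
    """Multiply each expert's likelihood ratio into the prior odds."""
    odds = lambda p: p / (1 - p)
    post_odds = odds(prior)
    for p in posteriors:
        # The likelihood ratio for that expert's evidence is her posterior
        # odds divided by the prior odds.
        post_odds *= odds(p) / odds(prior)
    # Convert the combined posterior odds back to a probability.
    return post_odds / (1 + post_odds)

print(combine(0.4, [0.7, 0.8]))  # ≈ 0.933: posterior odds (2/3) * 3.5 * 6 = 14

# Matches the qualitative claim in the question: with a high prior, two
# experts estimating below the prior pull the answer below both estimates.
print(combine(0.9, [0.8, 0.85]))  # ≈ 0.716, below both .8 and .85
```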
You can write that, and it may hold in some cases, but step back and ask: does this really make sense in the general case?
I just don't think so. The whole problem with mixtures of experts, or with combining multiple data sources, is that the marginals are not in general independent.