(nods) Examples do indeed help.
Suppose agent A has access to observations X1..X10, on the basis of which A concludes a 1-degree temperature rise.
Suppose agent B has access to observations X1..X9, on the basis of which B concludes a 2-degree temperature rise, and suppose A and B are both perfect rationalists whose relevant priors are otherwise completely shared and whose posterior probabilities are perfectly calibrated to the evidence they have access to.
It follows that if B had access to X10, B would update and conclude a 1-degree rise, just as A did. But neither A nor B knows that.
In this example, A and B aren't justified in having the conversation you describe, because A's estimate already takes into account all of B's evidence, so any updating A does based on B's reported estimate would double-count that evidence.
But until A can identify what it is that B knows and doesn't know, A has no way of confirming that. If they just share their estimates, they haven't done a fraction of the work necessary to get the best conclusion from the available data... in fact, if that's all they're going to do, A was better off not talking to B at all.
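To make the double-counting concrete, here is a minimal sketch in Python. The Gaussian signal model, the noise level, and the particular readings are all invented for illustration; the point is only the shape of the failure:

```python
import math

# Toy model (all numbers invented): the true temperature rise is
# either 1 or 2 degrees, with a shared 50/50 prior.  Each
# observation X_i is a noisy reading ~ Normal(true_rise, SIGMA).
SIGMA = 0.8

def log_likelihood_ratio(x):
    """log P(x | rise=2) - log P(x | rise=1) for a Gaussian reading."""
    return ((x - 1) ** 2 - (x - 2) ** 2) / (2 * SIGMA ** 2)

def posterior_two_degrees(readings):
    """Posterior probability of a 2-degree rise (prior odds = 1)."""
    log_odds = sum(log_likelihood_ratio(x) for x in readings)
    return 1 / (1 + math.exp(-log_odds))

x_1_to_9 = [1.8, 1.6, 1.7, 1.5, 1.9, 1.4, 1.6, 1.7, 1.5]
x_10 = 0.0  # an anomalous low reading that only A has seen

p_a = posterior_two_degrees(x_1_to_9 + [x_10])  # A: all ten readings
p_b = posterior_two_degrees(x_1_to_9)           # B: readings 1..9 only

# Naive update: A treats B's announced posterior as if it were
# independent evidence and multiplies B's odds into A's own.
# But B's odds were built from X1..X9, which A already used.
naive_odds = (p_a / (1 - p_a)) * (p_b / (1 - p_b))
p_naive = naive_odds / (1 + naive_odds)

print(f"A's posterior for 2 degrees (all ten readings): {p_a:.3f}")
print(f"B's posterior for 2 degrees (nine readings):    {p_b:.3f}")
print(f"A after naively updating on B's estimate:       {p_naive:.3f}")
# The naive combination double-counts X1..X9 and drags A back
# toward 2 degrees, even though A's own posterior already
# reflects every piece of evidence either agent holds.
```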
Of course, one might say "But we're supposed to assume common priors, so we can't say that A has high confidence in X10 while B doesn't." But in that case, I'm not sure what caused A and B to arrive at different estimates in the first place.
I don't think Aumann's agreement theorem is about getting "the best conclusion from the available data". It is about agreement. The idea is not that an exchange produces the most accurate outcome from all the evidence held by both parties - but rather that their disagreements do not persist for very long.
This post questions the costs of reaching such an agreement. Conventional wisdom is as follows:
...But two key questions went unaddressed: first, can the agents reach agreement after a conversation of reasonable length? Second, can the computations needed for that conversation be performed efficiently?
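To see what such a conversation looks like mechanically, here is a toy run of the alternating-announcement process from Geanakoplos and Polemarchakis's "We can't disagree forever", the usual constructive rendering of Aumann's result. The state space, the event, the partitions, and the true state below are all invented for illustration:

```python
from fractions import Fraction

OMEGA = range(9)                    # nine equally likely states
E = {0, 2, 4, 6, 8}                 # the event under discussion
PART_A = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]  # A's information partition
PART_B = [{0, 3, 6}, {1, 4, 7}, {2, 5, 8}]  # B's information partition
TRUE_STATE = 4

def cell(partition, state):
    """The partition cell containing the given state."""
    return next(c for c in partition if state in c)

def prob_e(possible):
    """P(E | possible), with a uniform common prior."""
    return Fraction(len(E & possible), len(possible))

public = set(OMEGA)  # states consistent with all announcements so far
for round_no in range(1, 10):
    # A announces a posterior; everyone then rules out the states
    # in which A would have announced something else.
    a = prob_e(cell(PART_A, TRUE_STATE) & public)
    public = {s for s in public
              if prob_e(cell(PART_A, s) & public) == a}
    # B does the same.
    b = prob_e(cell(PART_B, TRUE_STATE) & public)
    public = {s for s in public
              if prob_e(cell(PART_B, s) & public) == b}
    print(f"round {round_no}: A announces {a}, B announces {b}")
    if a == b:
        break
```

In this particular setup the announcements start at 1/3 versus 1 and coincide by the second round - illustrating the "disagreements do not persist" point, though not, of course, the complexity questions the quoted passage raises.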