I've written up a 2-page explanation and proof of Aumann's agreement theorem. Here is a direct link to the pdf via Dropbox.
The proof in Aumann's original paper is already very short and accessible. (Wei Dai gave an exposition closely following Aumann's in this post.) My intention here was to make the proof even more accessible by putting it in elementary Bayesian terms, stripping out the talk of meets and joins in partition posets. (Just to be clear, the proof is just a reformulation of Aumann's and not in any way original.)
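For readers who do want to see the partition picture Aumann uses, here is a small numeric sketch in Python. All numbers, partitions, and the event are invented purely for illustration: a common prior over four states, two information partitions, and a check that on any cell of the meet where both posteriors are constant (i.e. common knowledge holds), the two posteriors coincide.

```python
# A toy finite version of Aumann's setup, with invented numbers.
states = {0, 1, 2, 3}
prior = {0: 0.1, 1: 0.3, 2: 0.2, 3: 0.4}  # common prior
partition_1 = [{0, 1}, {2, 3}]            # what agent 1 can distinguish
partition_2 = [{0, 1}, {2}, {3}]          # agent 2 is better informed
A = {0, 2}                                # the event they disagree about

def cell(partition, state):
    """The partition cell containing `state`."""
    return next(c for c in partition if state in c)

def posterior(partition, state):
    """P(A | agent's information) when the true state is `state`."""
    c = cell(partition, state)
    return sum(prior[s] for s in c & A) / sum(prior[s] for s in c)

def meet(p1, p2):
    """Finest common coarsening of two partitions: merge states that
    share a cell in either partition, return connected components."""
    parent = {s: s for s in states}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s
    for c in list(p1) + list(p2):
        c = sorted(c)
        for s in c[1:]:
            parent[find(s)] = find(c[0])
    components = {}
    for s in states:
        components.setdefault(find(s), set()).add(s)
    return list(components.values())

# Common knowledge of the posteriors at a state means each posterior is
# constant on the meet cell containing it; the theorem says they then agree.
for m in meet(partition_1, partition_2):
    vals_1 = {posterior(partition_1, s) for s in m}
    vals_2 = {posterior(partition_2, s) for s in m}
    if len(vals_1) == 1 and len(vals_2) == 1:  # common knowledge on m
        assert vals_1 == vals_2                # ...so the posteriors agree
```

On the meet cell {0, 1} both posteriors are constant, so they must (and do) agree; on {2, 3} agent 2's posterior varies, common knowledge fails, and the theorem imposes nothing.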
I'd appreciate any suggestions for improvement.
Update: I've added an abstract and made one of the conditions in the formal description of "common knowledge" explicit in the informal description.
Update: Here is a direct link to the pdf via Dropbox (ht to Vladimir Nesov).
Update: In this comment, I explain why the definition of "common knowledge" in the write-up is the same as Aumann's.
Update 2020-05-23: I fixed the Dropbox link and removed the Scribd link.
Maybe this is a good place to ask something I wonder about: does Aumann's agreement theorem really have practical significance for disputes between people?
It assumes that the agents involved are Bayesian reasoners, have the same priors, and have common knowledge of each other's posteriors. The last condition might hold for people who disagree about something (although arguers routinely misinterpret each other, so maybe even that's too optimistic), but I'd expect people in a serious argument to have different priors most of the time, and nobody is a perfect Bayesian reasoner. As far as I can tell, that means two of the theorem's prerequisites are routinely violated when people disagree, and the one that's left over is often arguable too.
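To make the common-prior point concrete, here is a toy calculation with invented numbers: two Bayesians who observe the same evidence, and therefore share the same likelihoods, still arrive at different posteriors when their priors differ.

```python
def bayes(prior_h, lik_e_given_h, lik_e_given_not_h):
    """P(H | E) from P(H) and the likelihoods of E under H and not-H."""
    joint_h = prior_h * lik_e_given_h
    joint_not_h = (1 - prior_h) * lik_e_given_not_h
    return joint_h / (joint_h + joint_not_h)

# Same evidence (same likelihoods: 0.8 under H, 0.2 under not-H),
# but different priors on H:
post_alice = bayes(0.5, 0.8, 0.2)  # ≈ 0.80
post_bob = bayes(0.1, 0.8, 0.2)    # ≈ 0.31
```

So even perfect Bayesian updating on shared evidence leaves a persistent disagreement unless the common-prior assumption (or something like Hanson's argument for it) closes the gap.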
This makes me sceptical when I see people refer to "Aumanning" or the irrationality of agreeing to disagree. Still, there are two obvious ways I could be going wrong here:
The theorem's Wikipedia page references papers by Scott Aaronson & Robin Hanson. Aaronson's doesn't sound relevant (it seems to be about the rate of agreement, not whether eventual agreement is assured), but Hanson's looks like it might drain the force out of the common priors assumption by arguing that rational Bayesians should always have the same priors.
I haven't read Hanson's paper, but even if I assume that I don't have to worry about the equal priors assumption, I still have to contend with the assumption that the arguers are Bayesian. I can only think of one way for someone in an argument to be sure that the others calculated their posteriors Bayesianly: by sitting down and explicitly re-deriving them from everybody's likelihoods. But that defeats the point of the theorem! I feel like I'm missing something here but can't see what.
There's a discussion of the practical implications of AAT in my post.