So "at no point in a conversation can Bayesians have common knowledge that they will disagree," means "'Common knowledge' is a far stronger condition than it sounds," and nothing more and nothing less?
See, "knowledge" is of something that is true, or at least of actually interpreted input. So if someone can't have knowledge of it, that implies it's true and one merely can't know it. If there can't be common knowledge, that implies that at least one party can't know the true thing. But the thing in question, "that they will disagree", is false, right?
I do not understand what the words in the sentence mean. It seems to read:
"At no point can two ideal reasoners both know true fact X, where true fact X is that they will disagree on posteriors, and each knows that the other knows X, and so on."
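The "and so on" hides an infinite regress. In standard two-agent epistemic logic (a sketch using the usual knowledge operators K_1 and K_2, not notation from the quoted post), the condition can be written as:

```latex
% "Everyone knows X":
E X \;\equiv\; K_1 X \wedge K_2 X

% Common knowledge of X: every finite iterate of E holds
C X \;\equiv\; E X \,\wedge\, E(E X) \,\wedge\, E(E(E X)) \,\wedge\, \cdots
      \;=\; \bigwedge_{n \ge 1} E^{n} X
```

Each additional level is a genuinely stronger requirement than the last, which is one sense in which common knowledge is "a far stronger condition than it sounds."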
But the theorem is that they will not disagree on posteriors...
So "at no point in a conversation can Bayesians have common knowledge that they will disagree," means "'Common knowledge' is a far stronger condition than it sounds," and nothing more and nothing less?
No, for a couple of reasons.
First, I misunderstood the context of that quote. I thought that it was from Wei Dai's post (because he was the last-named source that you'd quoted). Under this misapprehension, I took him to be pointing out that common knowledge of anything is a fantastically strong condition, and so, in particular, common...
There are many pleasant benefits of improved rationality.
I'd like to mention two other benefits of rationality that arise when working with other rationalists, which I've noticed since moving to Berkeley to work with Singularity Institute (first as an intern, then as a staff member).
The first is the comfort of knowing that people you work with agree on literally hundreds of norms and values relevant to decision-making: the laws of logic and probability theory, the recommendations of cognitive science for judgment and decision-making, the values of broad consequentialism and x-risk reduction, etc. When I walk into a decision-making meeting with Eliezer Yudkowsky or Anna Salamon or Louie Helm, I notice I'm more relaxed than when I walk into a meeting with most people. I know that we're operating on Crocker's rules, that we all want to make the decisions that will most reduce existential risk, and that we agree on how we should go about making such a decision.
The second pleasure, related to the first, is the extremely common result of reaching Aumann agreement after initially disagreeing. Having worked closely with Anna on both the rationality minicamp and a forthcoming article on intelligence explosion, we've had many opportunities to Aumann on things. We start by disagreeing on X. Then we reduce knowledge asymmetry about X. Then we share additional arguments for multiple potential conclusions about X. Then we both update from our initial impressions, also taking into account the other's updated opinion. In the end, we almost always agree on a final judgment or decision about X. And it's not that we agree to disagree and just move forward with one of our judgments. We actually both agree on what the most probably correct judgment is. I've had this experience literally hundreds of times with Anna alone.
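The back-and-forth described above resembles the textbook dialogue model of Aumann agreement (Geanakoplos and Polemarchakis's "We can't disagree forever" process): two agents with a common prior alternately announce their posteriors for an event, each announcement shrinks the set of states consistent with what has been said publicly, and the announced posteriors provably converge. A minimal sketch, with a made-up four-state example; the uniform prior, partitions, and event below are illustrative assumptions, not anything from the post:

```python
from fractions import Fraction

def cell(partition, state):
    """The block of the agent's partition containing `state`."""
    return next(c for c in partition if state in c)

def posterior(partition, E, state, A):
    """P(A | agent's private cell at `state`, intersected with public info E),
    under a uniform common prior."""
    info = cell(partition, state) & E
    return Fraction(len(info & A), len(info))

def refine(E, partition, A, q):
    """Keep only the states in which this agent would have announced q."""
    return {s for s in E if posterior(partition, E, s, A) == q}

def aumann_dialogue(A, p1, p2, state, omega):
    """Agents alternately announce posteriors of A; each announcement shrinks
    the public event E, until the two posteriors coincide."""
    E = set(omega)
    history = []
    while True:
        q1 = posterior(p1, E, state, A)
        E = refine(E, p1, A, q1)
        q2 = posterior(p2, E, state, A)
        E = refine(E, p2, A, q2)
        history.append((q1, q2))
        if posterior(p1, E, state, A) == q2:  # agent 1 now agrees with q2
            return history

# Uniform prior on four states; event A = {1}; true state is 1.
# Agent 1 can distinguish {1,2} from {3,4}; agent 2, {1,3} from {2,4}.
history = aumann_dialogue(A={1}, p1=[{1, 2}, {3, 4}], p2=[{1, 3}, {2, 4}],
                          state=1, omega={1, 2, 3, 4})
print(history)  # → [(Fraction(1, 2), Fraction(1, 1))]
```

Here agent 1 opens with posterior 1/2; that announcement tells agent 2 the state is in {1, 2}, so agent 2 announces 1, which in turn pins the state down for agent 1, and they agree. The real conversations described above are far messier, but the shape — announce, refine, re-announce, converge — is the same.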
Being more rational is a pleasure. Being rational in the company of other rationalists is even better. Forget not the good news of situationist psychology.