Been pondering: will conflict always exist? A major subquestion: Suppose we all merge utility functions and form an interstellar community devoted to optimizing the merger. It'll probably make sense for us to specialize in different parts of the work, which means accumulating specialist domain knowledge and becoming mutually illegible.
When people have very different domain knowledge, they also fall out of agreement about where the borders of their domains lie. (EG: A decision theorist insists that they know things about the trajectory of AI that ML researchers don't. The ML researchers don't believe them and don't heed their advice.) In these situations, even when all parties are acting in good faith, they know that they won't be able to reconcile certain disagreements, and it may seem to make sense, from some perspectives, to simply impose their own way in those disputed regions.
Would there be any difference between the dispute resolution methods used here and those used between agents with different core values (war, peace deals, and, most saliently, war proxies)?
Would the parties in the conflict use war proxies that account for each side's physical advantages in different domains? (EG: Would the decision theorist block ML research in disputed domains where their knowledge of decision theory would give them a force advantage?)
Good point! Notably, some of your examples are 'one-way': one party updated while the other did not. In the Google/Twitter and museum cases, you updated but they didn't, so this sounds like standard Bayesian updating rather than anything specifically Aumann-like (though maybe the distinction doesn't matter, as the latter is a special case of the former).
When I wrote the answer, I guess I was thinking about Aumann updating where both parties end up changing their probabilities (i.e., Alice starts with a high probability for some proposition P, Bob starts with a low probability for P, and, after discussing their disagreement, they converge to a middling probability). That didn't seem to me to be as common among humans.
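To make that concrete, here's a minimal sketch of the two-way case, under assumptions I'm inventing for illustration: a binary proposition P with a shared 50% prior, and conditionally independent private signals whose reliabilities (both 80%) are made up. In this toy model, announcing a posterior fully reveals the underlying signal, so a single exchange is enough for full agreement:

```python
# Toy model of two-way Aumann-style convergence (a sketch; the model and
# numbers are my own assumptions, not from the original examples).
# Binary proposition P, shared 50% prior, conditionally independent
# private signals.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

prior = 0.5                 # shared prior on P
lr_alice = 0.8 / 0.2        # Alice's signal: evidence for P (80% reliable)
lr_bob = 0.2 / 0.8          # Bob's signal: evidence against P (80% reliable)

print(posterior(prior, lr_alice))            # Alice alone: 0.8
print(posterior(prior, lr_bob))              # Bob alone:   0.2
print(posterior(prior, lr_alice * lr_bob))   # after exchanging posteriors: 0.5
```

They meet at exactly 0.5 only because the signals are equally reliable; skew the reliabilities and the meeting point slides toward the better-informed party.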
The example with your Dad also seems one-way: he updated and you didn't. However, maybe the fact that he didn't know there was a flood should have caused you to update slightly, with the update just being so small as to be negligible. So I guess you are right, and that would count as an Aumann agreement!
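Re-running the same toy model with lopsided evidence shows why a nearly-one-way exchange still technically counts: the weakly informed party does almost all of the moving. (Again, the reliability numbers are invented, and "you"/"Dad" are just stand-ins for the roles in your story.)

```python
# Same toy model as above, with asymmetric evidence: one strong signal
# (99% reliable, for P) and one weak signal (55% reliable, against P).
# Reliabilities are invented for illustration.

def posterior(prior: float, likelihood_ratio: float) -> float:
    odds = (prior / (1 - prior)) * likelihood_ratio
    return odds / (1 + odds)

lr_you = 0.99 / 0.01        # you: strong evidence for P (saw the flood)
lr_dad = 0.45 / 0.55        # Dad: weak evidence against P (didn't know of it)

print(posterior(0.5, lr_you))            # you alone:  0.99
print(posterior(0.5, lr_dad))            # Dad alone:  ~0.45
print(posterior(0.5, lr_you * lr_dad))   # both: ~0.988 -- Dad moves a lot,
                                         # your update is negligible
```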
Your last paragraph is really good. I will ponder it...