Been pondering; will conflict always exist? A major subquestion: Suppose we all merge utility functions and form an interstellar community devoted to optimizing the merger. It'll probably make sense for us to specialize in different parts of the work, which means accumulating specialist domain knowledge and becoming mutually illegible.
When people have very different domain knowledge, they also fall out of agreement about what the borders of their domains are. (EG: A decision theorist insists that they know things about the trajectory of AI that ML researchers don't. The ML researchers don't believe them and don't heed their advice.) In these situations, even when all parties are acting in good faith, they know that they won't be able to reconcile certain disagreements, and it may seem to make sense, from some perspectives, to just try to impose their own way in those disputed regions.
Would the dispute resolution methods used here differ at all from those used between agents with different core values? (War, peace deals, and most saliently, war proxies.)
Would the parties in the conflict use war proxies that take physical advantages in different domains into account? (EG: Would the decision theorist block ML research in disputed domains where their knowledge of decision theory would give them a force advantage?)
Aumann's agreement theorem, which is discussed in the paper 'Are Disagreements Honest?' by Hanson and Cowen, suggests that perfectly rational agents (updating via Bayes' theorem) should not disagree in this fashion, even if their life experiences differ, provided that their opinions on all topics are common knowledge and they have common priors. This is often summarized as saying that such agents cannot 'agree to disagree'.
I'm a bit hazy on the details, but broadly: two agents with common priors but different evidence (i.e. different life experiences or expertise) can share their knowledge and mutually update on it, eventually converging on an agreed probability distribution.
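To make the "share and converge" step concrete, here's a minimal sketch (my own toy example, not from the paper, and it has the agents share raw data rather than just posteriors, which is the simpler case): two agents with a common uniform prior over a coin's bias each see different flips, and once they pool their data they compute identical posteriors.

```python
import numpy as np

# Two agents share a common (uniform) prior over a coin's bias, each privately
# observes a different batch of flips, then honestly pools their evidence.
# Because the observations are independent given the bias, both end up
# computing the exact same posterior.

theta = np.linspace(0.01, 0.99, 99)       # hypotheses about the coin's bias
prior = np.ones_like(theta) / len(theta)  # common prior

def posterior(prior, heads, tails):
    """Bayes' rule with a binomial likelihood on the discretised bias grid."""
    post = prior * theta**heads * (1 - theta)**tails
    return post / post.sum()

post_a = posterior(prior, heads=7, tails=3)  # A saw 7 heads in 10 flips
post_b = posterior(prior, heads=4, tails=6)  # B saw 4 heads in 10 flips
print("A's mean belief:", (theta * post_a).sum())  # ~0.67
print("B's mean belief:", (theta * post_b).sum())  # ~0.42

# After sharing their data, each conditions on all 20 flips...
pooled_from_a = posterior(post_a, heads=4, tails=6)
pooled_from_b = posterior(post_b, heads=7, tails=3)
print(np.allclose(pooled_from_a, pooled_from_b))             # True: they agree
print("shared mean belief:", (theta * pooled_from_a).sum())  # ~0.55
```

(As I understand it, the theorem itself is stronger: iterated exchange of posterior probabilities alone, without the underlying data, is already enough for idealised agents to converge.)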
Of course, humans are not perfectly rational, so this rarely happens (this is discussed in the Hanson/Cowen paper). There are some results which seem to suggest that you can relax some of Aumann's assumptions to more realistic ones and still get similar conclusions. Scott Aaronson showed that Aumann's theorem holds (to a high degree of approximation) even when the agents' priors don't agree perfectly and the agents can exchange only limited amounts of information.
Maybe the agents who are alive in the future will not be perfectly rational, but I guess we can hope that they might be rational enough to converge close enough to agreement that they don't fight on important issues.
I think this is a wrong picture to have in mind for Aumannian updating. It's about pooling evidence, and sometimes you can end up with more extreme views than you started with. While the exact ...
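A toy numeric illustration of that "more extreme than you started" effect, continuing the coin-bias sketch above (my own construction, not anything from the original comment): if both agents privately saw heads-leaning samples, pooling leaves each of them more confident of a heads bias than either was alone.

```python
import numpy as np

# Same toy setup as the earlier sketch: a discretised prior over a coin's bias.
theta = np.linspace(0.01, 0.99, 99)
prior = np.ones_like(theta) / len(theta)

def posterior(prior, heads, tails):
    post = prior * theta**heads * (1 - theta)**tails
    return post / post.sum()

post_a = posterior(prior, heads=7, tails=3)   # A privately saw 7/10 heads
post_b = posterior(prior, heads=8, tails=2)   # B privately saw 8/10 heads
pooled = posterior(post_a, heads=8, tails=2)  # both condition on all 20 flips

for name, p in [("A alone", post_a), ("B alone", post_b), ("pooled ", pooled)]:
    print(name, "P(bias > 0.5) =", p[theta > 0.5].sum())
# The pooled probability is the largest of the three (~0.99 here), so both
# agents end up with a more extreme view of the coin than either started with.
```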