"Rational people can't agree to disagree" is an oversimplification. Rational people can perfectly well reach a conclusion of the form: "Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest and well informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong. We choose, instead, to leave the matter unresolved until either it matters more or we see better prospects of resolving it."
Imperfectly rational people who are aware of their imperfect rationality (note: this is in fact the nearest any of us actually come to being rational people) might also reasonably reach a conclusion of this form: "Perhaps clear enough thinking on both sides would suffice to let us resolve this. However, it's apparent that at least one of us is currently sufficiently irrational about it that trying to reach agreement poses a real danger of spoiling the good relations we currently enjoy. While that irrationality is clearly a bad thing, it doesn't seem likely that trying to resolve our current disagreement now is the best way to address it, so let's leave it for now."
I suspect (with no actual evidence) that when two reasonably rational people say they're agreeing to disagree, what they mean is often approximately one of the above or a combination thereof, and that they're often wise to "agree to disagree". The fact that there are theorems saying that two perfect rationalists who care about nothing more than getting the right answer to the question they're currently disputing won't "agree to disagree" seems to me to have little bearing on this.
Eliezer, if you're reading this: You may remember that a while back on OB you and Robin Hanson discussed the prospects of rapidly improving artificial intelligence in the nearish future. By no means did you resolve your differences in that discussion. Would it be fair to characterize the way it ended as "agreeing to disagree"? From the outside, it sure looks like that's what it amounted to, whatever you may or may not have said to one another about it. Perhaps you and/or Robin might say "Yeah, but the other guy isn't really rational about this". Could be, but if the level of joint rationality required for "can't agree to disagree" is higher than that of {Eliezer,Robin} then it's not clear how widely applicable the principle "rational people can't agree to disagree" really is. (Note for the avoidance of doubt: The foregoing is not intended to imply that Eliezer and Robin are equally rational; I do not intend to make any further comment on my opinions, if any, on that matter.)
Our disagreement on this matter is a consequence of our disagreement on other issues that would be very difficult to resolve, and for which there are many apparently intelligent, honest, and well-informed people on both sides. Therefore, it seems likely that reaching agreement on this issue would take an awful lot of work and wouldn't be much more likely to leave us both right than to leave us both wrong.
You say that as if resolving a disagreement means agreeing to both choose one side or the other. The most common result of cheaply resolving a disagreement is not "both right" or "both wrong", but "both -3 decibels."
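For readers unfamiliar with the decibel framing: evidence for a hypothesis can be expressed as 10·log10 of its odds ratio, so "both −3 decibels" means each disputant ends up about 3 dB less confident in their own position, roughly halving their odds. A minimal sketch of that arithmetic (the 80%/20% starting probabilities are just an illustrative assumption, not anything from the exchange above):

```python
import math

def decibels(p):
    """Evidence in decibels for a position held with probability p:
    10 * log10 of the odds ratio p : (1 - p)."""
    return 10 * math.log10(p / (1 - p))

def probability(db):
    """Inverse: the probability corresponding to db decibels of evidence."""
    odds = 10 ** (db / 10)
    return odds / (1 + odds)

# Suppose one party is at 80% for the claim and the other at 80% against:
a = decibels(0.80)   # about +6.0 dB
b = decibels(0.20)   # about -6.0 dB

# "Both -3 decibels": each shifts 3 dB away from their own position,
# i.e. toward the other, without either adopting the other's side.
print(round(probability(a - 3), 3))  # about 0.667
print(round(probability(b + 3), 3))  # about 0.333
```

On this reading, a cheap resolution leaves neither party at "right" or "wrong" but both somewhat less certain, which is why the outcome isn't naturally described as picking one side.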
Recent brainstorming sessions at SIAI (with participants including Anna, Carl, Jasen, Divia, Will, Amy Willey, and Andrew Critch) have started to produce lists of rationality skills that we could potentially try to teach (at Rationality Boot Camp, at Less Wrong meetups, or similar venues). We've also been trying to break those skills down to the 5-second level (step 2) and come up with ideas for exercises that might teach them (step 3) although we haven't actually composed those exercises yet (step 4, where the actual work takes place).
The bulk of this post will go into the comments, which I'll try to keep to the following format: A top-level comment is a major or minor skill to teach; upvote this comment if you think this skill should get priority in teaching. Sub-level comments describe 5-second subskills that go into this skill, and then third-level comments are ideas for exercises which could potentially train that 5-second skill. If anyone actually went to the work of composing a specific exercise people could run through, that would go at the fourth level of commenting, I guess. For some major practicable arts with a known standard learning format like "Improv" or "Acting", I'll put the exercise at the top and guesses at which skills it might teach below. (And any plain old replies can go at any level.)
I probably won't be able to get to all of what we brainstormed today, so here's a PNG of the Freemind map that I generated during our session.