No; in what I wrote "resolving a disagreement" means "agreeing to hold the same position, or something very close to it".
Deciding "cheaply" that you'll both set p=1/2 (note: I assume that's what you mean by -3dB here, because the other interpretations I can think of don't amount to "agreeing to disagree") is no more rational than (even the least rational version of) "agreeing to disagree".
If the evidence is very evenly balanced then of course you might end up doing that not-so-cheaply, but in such cases what more often happens is that you look at lots of evidence and see -- or think you see -- a gradual accumulation favouring one side.
Of course you could base your position purely on the number of people on each side of the issue, and then you might be able to reach p=1/2 (or something near it) cheaply and in a not entirely unprincipled way. Unfortunately, that procedure also tells you that Pr(Christianity) is somewhere around 1/4, a conclusion that I think most people here agree with me in regarding as silly. You can try to fix that by weighting people's opinions according to how well they're informed, how clever they are, how rational they are, etc. -- but then you once again have a lengthy, difficult and subjective task that you might reasonably worry will end up giving you a confident wrong answer.
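The head-counting procedure and its weighted variant can be sketched as follows. This is a minimal illustration, not anything proposed in the discussion; the helper name and all the weights are made up:

```python
def weighted_probability(opinions):
    """Estimate Pr(proposition) as the fraction of total weight held by
    people who endorse it.

    `opinions` is a list of (endorses, weight) pairs, where `endorses`
    is True if that person holds the proposition and `weight` encodes
    our (subjective) judgement of how informed/clever/rational they are.
    """
    total = sum(w for _, w in opinions)
    if total == 0:
        raise ValueError("weights must not all be zero")
    return sum(w for endorses, w in opinions if endorses) / total

# Plain head-counting: one person on each side gives p = 1/2.
print(weighted_probability([(True, 1.0), (False, 1.0)]))  # 0.5

# Weighting shifts the estimate -- and inherits all the subjectivity
# of choosing the weights in the first place.
print(weighted_probability([(True, 1.0), (False, 3.0)]))  # 0.25
```

The point of the sketch is only that the arithmetic is trivial; the "lengthy, difficult and subjective task" is entirely in choosing the weights.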
I should perhaps clarify that what I mean by "wouldn't be much more likely to leave us both right than to leave us both wrong" is: for each of the two people involved, who (at the outset) have quite different opinions, Pr(reach agreement on wrong answer | reach agreement) is quite high.
And, once again for the avoidance of doubt, I am not taking "reach agreement" to mean "reach agreement that one definite position or another is almost certainly right". I just think that, empirically, when people reach agreement with one another they more often converge on a definite position than agree that Pr(each) ~= 1/2: I disagree with you about "the most common result", unless "cheaply" is taken in a sense that makes it irrelevant to what rational people should do.
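The conditional probability in question, Pr(reach agreement on wrong answer | reach agreement), can be made concrete with a toy tally of discussion outcomes. The numbers below are purely illustrative assumptions, not data from anywhere:

```python
# Hypothetical outcomes of many two-person disagreements (made-up counts).
agree_right = 10   # pairs that reached agreement on the right answer
agree_wrong = 8    # pairs that reached agreement on the wrong answer
no_agreement = 82  # pairs that never agreed

# Pr(agree on wrong answer | reach agreement): condition on agreement
# by restricting to the pairs that agreed at all.
p_wrong_given_agree = agree_wrong / (agree_right + agree_wrong)
print(p_wrong_given_agree)  # 0.444..., i.e. "quite high"
```

With counts like these, reaching agreement is only slightly more likely to leave both people right than to leave both wrong, which is the worry being expressed.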
Recent brainstorming sessions at SIAI (with participants including Anna, Carl, Jasen, Divia, Will, Amy Willey, and Andrew Critch) have started to produce lists of rationality skills that we could potentially try to teach (at Rationality Boot Camp, at Less Wrong meetups, or similar venues). We've also been trying to break those skills down to the 5-second level (step 2) and come up with ideas for exercises that might teach them (step 3), although we haven't actually composed those exercises yet (step 4, where the actual work takes place).
The bulk of this post will go into the comments, which I'll try to keep to the following format: A top-level comment is a major or minor skill to teach; upvote this comment if you think this skill should get priority in teaching. Sub-level comments describe 5-second subskills that go into this skill, and third-level comments are ideas for exercises which could potentially train that 5-second skill. If anyone actually goes to the trouble of composing a specific exercise people could run through, that would go at the fourth level of commenting, I guess. For some major practicable arts with a known standard learning format, like "Improv" or "Acting", I'll put the exercise at the top and guesses at which skills it might teach below. (And any plain old replies can go at any level.)
I probably won't be able to get to all of what we brainstormed today, so here's a PNG of the Freemind map that I generated during our session.