The following happened to me in an IRC chatroom, long enough ago that I was still hanging around in IRC chatrooms. Time has fuzzed the memory and my report may be imprecise.
So there I was, in an IRC chatroom, when someone reports that a friend of his needs medical advice. His friend says that he’s been having sudden chest pains, so he called an ambulance, and the ambulance showed up, but the paramedics told him it was nothing, and left, and now the chest pains are getting worse. What should his friend do?
I was confused by this story. I remembered reading about homeless people in New York who would call ambulances just to be taken someplace warm, and how the paramedics always had to take them to the emergency room, even on the 27th iteration. Because if they didn’t, the ambulance company could be sued for lots and lots of money. Likewise, emergency rooms are legally obligated to treat anyone, regardless of ability to pay.[1] So I didn’t quite understand how the described events could have happened. Anyone reporting sudden chest pains should have been hauled off by an ambulance instantly.
And this is where I fell down as a rationalist. I remembered several occasions where my doctor would completely fail to panic at the report of symptoms that seemed, to me, very alarming. And the Medical Establishment was always right. Every single time. I had chest pains myself, at one point, and the doctor patiently explained to me that I was describing chest muscle pain, not a heart attack. So I said into the IRC channel, “Well, if the paramedics told your friend it was nothing, it must really be nothing—they’d have hauled him off if there was the tiniest chance of serious trouble.”
Thus I managed to explain the story within my existing model, though the fit still felt a little forced . . .
Later on, the fellow comes back into the IRC chatroom and says his friend made the whole thing up. Evidently this was not one of his more reliable friends.
I should have realized, perhaps, that an unknown acquaintance of an acquaintance in an IRC channel might be less reliable than a published journal article. Alas, belief is easier than disbelief; we believe instinctively, but disbelief requires a conscious effort.[2]
So instead, by dint of mighty straining, I forced my model of reality to explain an anomaly that never actually happened. And I knew how embarrassing this was. I knew that the usefulness of a model is not what it can explain, but what it can’t. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.
Your strength as a rationalist is your ability to be more confused by fiction than by reality. If you are equally good at explaining any outcome, you have zero knowledge.
We are all weak, from time to time; the sad part is that I could have been stronger. I had all the information I needed to arrive at the correct answer, I even noticed the problem, and then I ignored it. My feeling of confusion was a Clue, and I threw my Clue away.
I should have paid more attention to that sensation of “still feels a little forced.” It’s one of the most important feelings a truthseeker can have, a part of your strength as a rationalist. It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading:
Either Your Model Is False Or This Story Is Wrong.
[1] And the hospital absorbs the costs, which are enormous, so hospitals are closing their emergency rooms . . . It makes you wonder what’s the point of having economists if we’re just going to ignore them.
[2] From McCluskey (2007), “Truth Bias”: “[P]eople are more likely to correctly judge that a truthful statement is true than that a lie is false. This appears to be a fairly robust result that is not just a function of truth being the correct guess where the evidence is weak—it shows up in controlled experiments where subjects have good reason not to assume truth[.]” http://www.overcomingbias.com/2007/08/truth-bias.html.
And from Gilbert et al. (1993), “You Can’t Not Believe Everything You Read”: “Can people comprehend assertions without believing them? [...] Three experiments support the hypothesis that comprehension includes an initial belief in the information comprehended.”
I see two senses (or perhaps not-actually-qualitatively-different-but-still-useful-to-distinguish cases?) of 'I notice I'm confused':
(1) Noticing factual confusion, as in the example in this post. (2) Noticing confusion when trying to understand a concept or phenomenon, or to apply a concept.
Example of (2): (A) "Hrm, I thought I understood what 'Colorless green ideas sleep furiously' means when I first heard it; the words seemed to form a meaningful whole based on the way they fell together. But when I actually try to concretise what that could possibly mean, I find myself unable to, and notice that characteristic pang of confusion."
Example of (2): (B) "Hrm, I thought I understood how flight works because I could form words into intelligent-sounding sentences about things like 'lift' and 'Newton's third law'. But then when I tried to explain why a plane goes up instead of down, my word soup explained both equally well, and I noticed I was confused." (Compare, from the post: "I knew that the usefulness of a model is not what it can explain, but what it can't. A hypothesis that forbids nothing, permits everything, and thereby fails to constrain anticipation.")
It might be useful to identify a third type:
(3) Noticing argumentative confusion. Example of (3): "Hrm, those fringe ideas seem convincing after reading the arguments for them on this LessWrong website. But I still feel a lingering hesitation to adopt the ideas as strongly as lots of these people seem to have, though I'm not sure why." (Confusion as pointer to epistemic learned helplessness)
As in the parent to this comment, (3) is not necessarily qualitatively distinct (e.g. argumentative confusion could be recast as factual confusion: "Hrm,...