Eliezer described "positive bias" (which I'll rename "positive test bias" for reasons explained in Unnamed's comment below) in an LW post and an HPMOR chapter. According to him, dealing with that bias requires a kind of mental gymnastics that doesn't come naturally to most people: "twisty negative thinking" or "flinching toward zero". You're supposed to devise tests that come out false if your hypothesis is true. It's a bit confusing.
I think there's a simpler way to think about it. Positive test bias is just our bias toward strong hypotheses. You can deal with it by asking yourself, how can I test if my hypothesis is too strong?
- The LW post about the bias mentions the Wason 2-4-6 task. If you're told that the number sequence 2-4-6 has property X, it's tempting to guess that property X means "ascending arithmetic progression". To test if that hypothesis is too strong, you need to try some other sequences that are arithmetic progressions but not ascending, and some that are ascending but not arithmetic progressions (see the sketch after this list). Easy!
- The HPMOR chapter describes Hermione spilling some soda on her robe, and a moment later it mysteriously becomes clean again. Hermione comes up with a hypothesis that her robe is magically self-cleaning. To test if that hypothesis is too strong, she can try spilling something else on her robe. There's no need for any counterintuitive thinking.
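To make the "is my hypothesis too strong?" probe concrete, here's a rough Python sketch of the 2-4-6 case. It's my own illustration, not from either post: the hidden rule (any ascending sequence) is the one from Wason's original experiment, and the candidate triples are just arbitrary examples of the two kinds of probing tests.

```python
def is_ascending(seq):
    return all(a < b for a, b in zip(seq, seq[1:]))

def is_arithmetic(seq):
    diffs = [b - a for a, b in zip(seq, seq[1:])]
    return all(d == diffs[0] for d in diffs)

def hypothesis(seq):
    # The tempting (and too strong) guess: ascending arithmetic progression.
    return is_ascending(seq) and is_arithmetic(seq)

def hidden_rule(seq):
    # The actual rule in Wason's experiment: any ascending sequence.
    return is_ascending(seq)

# Sequences chosen to check whether the hypothesis is too strong:
candidates = [
    (2, 4, 6),   # fits both; a purely confirming test tells you little
    (6, 4, 2),   # arithmetic but not ascending: probes the "ascending" part
    (1, 2, 5),   # ascending but not arithmetic: probes the "arithmetic" part,
                 # and here the hidden rule says yes while the hypothesis says no
]

for seq in candidates:
    print(seq, "hypothesis:", hypothesis(seq), "actual:", hidden_rule(seq))
```

The third triple is the one that does the work: the hidden rule accepts it while the hypothesis rejects it, which is exactly the signal that the hypothesis was too strong.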
That technique is useful in other areas as well. For example, I often get carried away when writing posts, and end up with a draft full of wide-reaching conclusions. But then I ask myself, doesn't that sound a bit too strong? So I look for counterexamples, and the point I'm trying to make either dissolves or becomes much more robust.
Our bias toward strong hypotheses is especially harmful in politics. We will defend a hypothesis like "group X is to blame for everything" for its explanatory power, never noticing the real problem—that it's too strong. We'd all benefit from recognizing when it happens, and weakening our hypotheses until they match reality.
What is that quote of Scott's... Something about how the sequences obsolete themselves. And that he remembers the sequences being full of all these great insights about difficult topics - but when he goes back and rereads them, it's all just so obvious.
You probably see where I'm going with this. It seems entirely possible that when you say "oh, it's easy, you just notice when you're making a hypothesis that might be too strong and then come up with a way to test it," you are in fact doing the complete content of that sequence post that seemed insightful way back when; it's just that it's easy for you now.
That's part of it, but also Eliezer sometimes makes things sound more complicated than they are. This exchange is a nice example.
Eliezer: And if you think you can explain the concept of "systematically underestimated inferential distances" briefly, in just a few words, I've got some sad news for you...
enye-word: "This is going to take a while to explain." Did I do it? Did I win rationalism?!