If we want to apply our brains more effectively to the pursuit of our chosen objectives, we must commit to the hard work of understanding how brains implement cognition. Is it enough to strive to "overcome bias"? I've come across an interesting tidbit of research (which I'll introduce in a moment) on "perceptual pop-out", which hints that it is not.
"Cognition" is a broad notion; we can dissect it into awareness, perception, reasoning, judgment, feeling... Broad enough to encompass what I'm coming to call "pure reason": our shared toolkit of normative frameworks for assessing probability, evaluating utility, guiding decision, and so on. Pure reason is one of the components of rationality, as this term is used here, but it does not encompass all of rationality, and we should beware the many Myths of Pure Reason. The Spock caricature is one; by itself enough cause to use the word "rational" sparingly, if at all.
Or the idea that all bias is bad.
It turns out, for instance, that a familiar bugaboo, confirmation bias, might play an important role in perception. Matt Davis at Cambridge Medical School has crafted a really neat three-part audio sample showcasing one of his research topics. The first and last parts of the sample are exactly the same. If you are at all like me, however, you will perceive them quite differently.
Here is the audio sample (mp3). Please listen to it now.
Notice the difference? Matt Davis, who has researched these effects extensively, refers to them as "perceptual pop-out". The link with confirmation bias is suggested by Jim Carnicelli: "Once you have an expectation of what to look for in the data, you quickly find it."
In Probability Theory, E.T. Jaynes notes that perception is "inference from incomplete information"; and elsewhere adds:
Kahneman & Tversky claimed that we are not Bayesians, because in psychological tests people often commit violations of Bayesian principles. [...] People are reasoning to a more sophisticated version of Bayesian inference than [Kahneman and Tversky] had in mind. [...] We would expect Natural Selection to produce such a result: after all, any reasoning format whose results conflict with Bayesian inference will place a creature at a decided survival disadvantage.
There is an apparent paradox between our susceptibility to various biases, and the fact that these biases are prevalent precisely because they are part of a cognitive toolkit honed over a long evolutionary period, suggesting that each component of that toolkit must have worked - conferred some advantage. Bayesian inference, claims Jaynes, isn't just a good move - it is the best move.
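Jaynes's claim can be made concrete with a toy calculation. The sketch below is purely illustrative (the hypothesis, likelihoods, and numbers are all invented for this post); it applies Bayes' rule to a perception-like question and shows how a shifted prior, such as the expectation you acquire after hearing the clean sentence, changes what the same evidence yields:

```python
# A minimal, illustrative sketch of Bayesian updating, the kind of
# inference Jaynes has in mind. All numbers here are made up.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Perceiving a degraded audio clip: is it speech or mere noise?
prior = 0.5                       # no expectation either way
after_one_cue = bayes_update(prior, 0.8, 0.3)
print(round(after_one_cue, 3))    # → 0.727, leaning toward "speech"

# Hearing the clean sentence first sharply shifts the prior, so the
# same ambiguous cue now produces near-certainty: the "pop-out".
primed_prior = 0.95
after_priming = bayes_update(primed_prior, 0.8, 0.3)
print(round(after_priming, 3))    # → 0.981
```

On this picture, the "pop-out" is not a malfunction: it is exactly what an optimal inference engine does when handed a strong prior.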
However, these components evolved in specific situations; the hardware kit that they are part of was never intended to run the software we now know as "pure reason". Our high-level reasoning processes are "hijacking" these components for other purposes. The same goes for our consciousness, which is also a patched-together hack on top of the same hardware.
There, by the way, is why Dennett's work on consciousness is important, and should be given a sympathetic exposition here rather than a hatchet job. (This post is intended in part as a tentative prelude to tackling that exposition.)
We are not AIs, who, when finally implemented, will (putatively) be able to modify their own source code. The closest we can come to that is to be aware of what our reasoning is put together from, which includes various biases that exist for a reason, and to make conscious choices as to how we use these components.
Bottom line: understanding where your biases come from, and putting that knowledge to good use, is of more value than rejecting all bias as evil.
I think Kutta is right to suggest that we use the term "bias" for "a pattern of errors". What is confusing is that we also tend to use the same term for the underlying process which produces the pattern, and that process can be beneficial or detrimental depending on what we use it for.
If it is indeed the case that the confirmation bias shown in cognitive studies is produced by the same processes that our perception uses, then confirmation bias could be a good thing for a stock trader, if it lets them identify patterns in market data which are actually there.
The audio sample above would be a good analogy. First you look at the market data and see just a jumble of numbers, up and down, up and down. Then you go, "Hey, doesn't this look like what I've already seen in insider trading cases?" (Or whatever would make a plausible example - I don't know much about stock trading.) And now the market data seems to make a lot of sense.
In this hypothesis (and keep in mind that it is only a hypothesis) confirmation bias helps you make sense of the data. Being aware of confirmation bias as a source of error reminds you to double-check your initial idea, using more reliable tools (say, mathematical) if you have them.
The "Heuristics and Biases" party line is to use "heuristic" to refer to the underlying mechanism. For example, "the representativeness heuristic can lead to neglect of base-rate."