If we want to apply our brains more effectively to the pursuit of our chosen objectives, we must commit to the hard work of understanding how brains implement cognition. Is it enough to strive to "overcome bias"? I've come across an interesting tidbit of research (which I'll introduce in a moment) on "perceptual pop-out" that hints it is not.

"Cognition" is a broad notion; we can dissect it into awareness, perception, reasoning, judgment, feeling... Broad enough to encompass what I'm coming to call "pure reason": our shared toolkit of normative frameworks for assessing probability, evaluating utility, guiding decision, and so on. Pure reason is one of the components of rationality, as this term is used here, but it does not encompass all of rationality, and we should beware the many Myths of Pure Reason. The Spock caricature is one; by itself enough cause to use the word "rational" sparingly, if at all.

Or the idea that all bias is bad.

It turns out, for instance, that a familiar bugaboo, confirmation bias, might play an important role in perception. Matt Davis at Cambridge Medical School has crafted a really neat three-part audio sample showcasing one of his research topics. The first and last parts of the sample are exactly the same. If you are at all like me, however, you will perceive them quite differently.

Here is the audio sample (mp3). Please listen to it now.

Notice the difference? Matt Davis, who has researched these effects extensively, refers to them as "perceptual pop-out". The link with confirmation bias is suggested by Jim Carnicelli: "Once you have an expectation of what to look for in the data, you quickly find it."

In Probability Theory, E.T. Jaynes notes that perception is "inference from incomplete information", and elsewhere adds:

Kahneman & Tversky claimed that we are not Bayesians, because in psychological tests people often commit violations of Bayesian principles. [...] People are reasoning to a more sophisticated version of Bayesian inference than [Kahneman and Tversky] had in mind. [...] We would expect Natural Selection to produce such a result: after all, any reasoning format whose results conflict with Bayesian inference will place a creature at a decided survival disadvantage.

There is an apparent paradox: we are susceptible to various biases, yet these biases are prevalent precisely because they are part of a cognitive toolkit honed over a long evolutionary period - each component of that toolkit must have worked, must have conferred some advantage. Bayesian inference, claims Jaynes, isn't just a good move - it is the best move.
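To make the link concrete, here is a toy Bayesian update - my own illustration, with invented likelihoods, not anything from Jaynes or Davis - showing how a prior expectation can flip the interpretation of the very same ambiguous evidence:

```python
# Toy two-hypothesis update: is the garbled clip speech, or mere noise?
def posterior_speech(prior, lik_speech, lik_noise):
    """P(speech | sound), given a prior and how well each hypothesis fits."""
    num = prior * lik_speech
    return num / (num + (1 - prior) * lik_noise)

# Same evidence in both cases: the clip fits "speech" only slightly
# better than it fits "meaningless noise" (made-up likelihoods).
lik_speech, lik_noise = 0.3, 0.2

print(posterior_speech(0.50, lik_speech, lik_noise))  # naive listener: ~0.60
print(posterior_speech(0.95, lik_speech, lik_noise))  # primed listener: ~0.97
```

The evidence never changes between the two calls; only the prior does. That, on this reading, is what hearing the clear sentence does to you.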

However, these components evolved in specific situations; the hardware kit they are part of was never intended to run the software we now know as "pure reason". Our high-level reasoning processes are "hijacking" these components for other purposes. The same goes for our consciousness, which is also a patched-together hack on top of the same hardware.

That, by the way, is why Dennett's work on consciousness is important, and should be given a sympathetic exposition here rather than a hatchet job. (This post is intended in part as a tentative prelude to tackling that exposition.)

We are not AIs who, when finally implemented, will (putatively) be able to modify their own source code. The closest we can come to that is to be aware of what our reasoning is put together from - including various biases that exist for a reason - and to make conscious choices as to how we use these components.

Bottom line: understanding where your biases come from, and putting that knowledge to good use, is of more value than rejecting all bias as evil.

 


understanding where your biases come from, and putting that knowledge to good use, is of more value than rejecting all bias as evil.

Putting that knowledge to use is great, but apart from that, shouldn't we still reject biases as "evil", on the grounds that they are systematic errors in reasoning?

Sure, the brain circuitry that produces a bias in a certain context can be adaptive or promote correctness in another. I don't think anyone here doubts that every bias has an evolutionary reason and history behind it. Evolutionary trade-offs and adaptive advantages can both lead to the rise of things we now call biases. What worked in an ancestral environment might not work now, etc.

"Rejecting" a bias as evil ideally requires that the bias is a compactly definable, reliable pattern over many outcomes. Maybe we'll acquire some extra knowledge about mechanisms and brain circuitry behind some biases, and we'll be able to collapse several biases into one, or split them, or just generally refine our understanding, thus aiding debiasing. But I think that a bias itself, as an observed pattern of errors, is bad and stays bad.

Please take note of the wording: "reject all bias as evil".

That is, lumping all demonstrated instances of bias into a general category of "ugh, I should avoid doing this" is likely to keep us from looking into the interesting adaptive properties of specific biases.

When confronted with a specific bias, the useful thing to do is recognize that it introduces error in particular contexts but may remain adaptive in other contexts. We will then strive to adopt prescriptive approaches, selected according to context, which help correct for observed bias and bring our cognition into line with the desired normative frameworks - which themselves differ from context to context.

I meant that it's the specific underlying mechanisms that can produce a bias or promote correctness; a bias is just a surface level fact about what errors people tend to make. Also, lots of biases are specific to a certain mental task and cannot be interpreted in foreign contexts. It's not guaranteed either that a current "bias" concept will not be superseded by additional knowledge. Therefore, the ideal basis of debiasing is most likely a detailed understanding of psychology/neurology; which is a point you expressed, and I agree.

I think this disagreement comes down to the definition of "bias", which Wikipedia defines as "a tendency or preference towards a particular perspective, ideology or result, when the tendency interferes with the ability to be impartial, unprejudiced, or objective." If a bias helps you make fewer errors, I would argue it's not a bias.

Maybe it is clearer if we speak of behaviors rather than biases. A given behavior (e.g. tendency to perceive what you were expecting to perceive) may make you more biased in certain contexts, and more rational in others. It might be advantageous to keep this behavior if it helps you more than it hurts you, but to the extent that you can identify the situations where the behavior causes errors, you should try to correct it.

Great audio clip, BTW.

I understand and accept the premise that biases can be adaptive, and therefore beneficial to success and not evil.

You bring up the idea of normative frameworks, which I like, but don't expound upon the idea. Which biases, in which frameworks, lead to success? Is this something we can speculate about?

For example, what biases and what framework would be successful for a stock market trader?

I think Kutta is right to suggest that we use the term bias for "a pattern of errors". What is confusing is that we also tend to refer by that term to the underlying process which produces the pattern, and that such a process is beneficial or detrimental depending on what we use it for.

If it is indeed the case that the confirmation bias shown in cognitive studies is produced by the same processes that our perception uses, then confirmation bias could be a good thing for a stock trader, if it lets them identify patterns in market data which are actually there.

The audio sample above would be a good analogy. First you look at the market data and see just a jumble of numbers, up and down, up and down. Then you go, "Hey, doesn't this look like what I've already seen in insider trading cases?" (Or whatever would make a plausible example - I don't know much about stock trading.) And now the market data seems to make a lot of sense.

In this hypothesis (and keep in mind it is only a hypothesis) confirmation bias helps you make sense of the data. Being aware of confirmation bias as a source of error reminds you to double-check your initial idea, using more reliable tools (say, mathematical ones) if you have them.

I think Kutta is right to suggest that we use the term bias for "a pattern of errors". What is confusing is that we also tend to refer by that term to the underlying process which produces the pattern, and that such a process is beneficial or detrimental depending on what we use it for.

The "Heuristics and Biases" party line is to use "heuristic" to refer to the underlying mechanism. For example, "the representativeness heuristic can lead to neglect of base-rate."

I get the "normative" jargon from Baron's Thinking and Deciding. In studies of decision making you can roughly break the work down into three categories; descriptive, normative and prescriptive.

The first consists of studying how people actually think; that's typically the work of the Kahneman and Tverskys of the field.

The second consists of identifying what rules must hold if decisions are to satisfy some desirable properties; for instance, Expected Utility is normative in evaluating known alternatives, and Probability Theory is normative in assessing probabilities (and even turns out to be normative in comparing scientific hypotheses).

The third is about what we can do to bring our thinking in line with some normative framework. "You should compute the Expected Utility of all your options and pick the highest valued one" may not be an appropriate prescription, for instance if you are in a situation where you have many options but too little time to consciously work out utilities; or in a situation where you have too few options and should instead work on figuring out additional options.
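To make the normative rule concrete, here is a minimal sketch of the expected-utility computation - a toy example of my own, with options, states, probabilities and utilities all invented for illustration:

```python
# Pick the option with the highest expected utility.
probabilities = {"market_up": 0.6, "market_down": 0.4}  # P(state)

utilities = {                                           # U(option, state)
    "buy":  {"market_up": 100, "market_down": -80},
    "hold": {"market_up": 20,  "market_down": 0},
}

def expected_utility(option):
    """EU(option) = sum over states of P(state) * U(option, state)."""
    return sum(p * utilities[option][state]
               for state, p in probabilities.items())

print({option: expected_utility(option) for option in utilities})
# {'buy': 28.0, 'hold': 12.0}
print(max(utilities, key=expected_utility))  # buy
```

As the caveat above suggests, this is only the right prescription when you have the time to enumerate your options and estimate the numbers.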

This might be worth writing up as a top-level post for future reference and further discussion.

There is an apparent paradox: we are susceptible to various biases, yet these biases are prevalent precisely because they are part of a cognitive toolkit honed over a long evolutionary period - each component of that toolkit must have worked, must have conferred some advantage. Bayesian inference, claims Jaynes, isn't just a good move - it is the best move.

What did work is different from what will work, or what will work better. I am not really disagreeing with you; I am just pointing out that the tools we used to get to this point are not necessarily the tools we need to get past this point.

The audio clip was very cool, by the way. I was surprised that you gave away the secret and it still worked as intended. I don't know enough about confirmation bias to continue the discussion past this point, however, so I will leave that to someone else. :)

You have a point: adaptations evolved in past environments may fail in new environments.

However, we're talking about the apparatus of perception, and the specific example is speech perception: the "environment" - the fact that we must make sense of the spoken word, plus the properties of speech as sound - hasn't changed much since that faculty evolved.

What has changed, if Carnicelli's hunch is correct, is the use to which we put this particular module of our perceptual apparatus. That is, we now use it to "reason" with, and the hunch helps explain why our reasoning is often flawed.

But much like the story of the waltzing bear, the wonder isn't that we reason well or badly; the wonder is that we reason at all.

Thus, if we can understand how apparently low-level things like perceptual modules can be repurposed to cobble together something apparently high-level like consciousness and abstract reasoning, we have some hope of learning enough about the general shape of mind-space to dissolve the question of intelligence; or to use the prevalent metaphor, of learning enough about how birds fly that we're able to build a plane.

Has anyone else noticed that the priming persists? I didn't understand the weird sound the first time I heard it, then got it after hearing the rest of the audio clip. I've listened to the clip again several days later, and I still understand the weird sound effects.

Yep, that's part of what's interesting about it. I think that as long as you remember the sentence, your expectations are still doing their top-down work on the samples; and since the effect itself is so memorable, you're likely to remember them for a long time.

"As long as you remember the sentence" is a bit fuzzy, though: I remember encountering this particular sentence when I encountered this effect before (more than 14 but less than 24 months ago), but it still sounded garbled to me the first time I heard it.

Hmm. I got the meaning of the first section of the clip the first time I heard it. OTOH, that was probably because I looked at the URL first, and so I was primed to look at the content that way.

The first and last parts sounded exactly the same to me.

However, what "meaning" are you talking about? I got no meaning from the sound effects.

The recording is:

  1. Squiggly noises
  2. An English sentence
  3. The same squiggly noises again

Before hearing the sentence, the squiggly noises just sound like squiggly noises. After hearing the sentence, the squiggly noises sound (to me and presumably most people) like a distorted version of the sentence. The only reason the squiggly noises are there twice is so you don't have to replay the recording to hear the effect.

This blew me away the first time I heard it, and I already knew what pareidolia was.

This isn't actually a case of pareidolia, as the squiggly noises (they call it "sine wave speech") are in fact derived from the middle recording, using an effect that sounds, to me, most like an extremely low bitrate mp3 encoding. Reading up on how they produce the effect, it is in fact a very similar process to mp3 encoding. (Perhaps inspired by it? I believe most general audio codecs work on very similar basic principles.)

So it's the opposite of pareidolia. It's actually meaningful sound, but it looks random at first. Maybe we should call it ailodierap.
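For the curious, here is a rough sketch of that kind of resynthesis - not Matt Davis's actual procedure, which tracks formants, but a cruder version that keeps only the few strongest spectral peaks in each short frame and replaces each with a sinusoid. File paths, frame size and peak count are placeholder choices, and it assumes a mono WAV file:

```python
import numpy as np
from scipy.io import wavfile

def sine_wave_speech(path_in, path_out, n_peaks=3, frame_ms=30):
    rate, x = wavfile.read(path_in)           # assumes mono input
    x = x.astype(float)
    x /= np.max(np.abs(x)) + 1e-12
    frame = int(rate * frame_ms / 1000)
    t = np.arange(frame) / rate
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame, frame):
        spectrum = np.abs(np.fft.rfft(x[start:start + frame]))
        freqs = np.fft.rfftfreq(frame, 1.0 / rate)
        peaks = np.argsort(spectrum)[-n_peaks:]   # strongest frequency bins
        chunk = np.zeros(frame)
        for b in peaks:
            amp = spectrum[b] / frame
            # absolute time keeps recurring frequencies phase-coherent
            chunk += amp * np.sin(2 * np.pi * freqs[b] * (start / rate + t))
        out[start:start + frame] = chunk
    out /= np.max(np.abs(out)) + 1e-12
    wavfile.write(path_out, rate, (out * 32767).astype(np.int16))

sine_wave_speech("sentence.wav", "sws.wav")   # hypothetical file names
```

No windowing or smoothing, so expect some clicking at frame boundaries; but the output should be recognizably "squiggly" in the same way, and unintelligible until you've heard the original.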

This isn't actually a case of pareidolia

True; I suppose it's a demonstration of the thing that makes pareidolia possible -- the should-be-obvious-but-isn't fact that pattern recognition takes place in the mind.

I wish it were two recordings, so you could listen to the squiggly noises more than once before hearing the sentence.

I ran into a set of these once before, and while it didn't let me listen to any one noise more than once before hearing the related speech, after about 4 or 5 noise+speech+noise sets I started being able to recognize the words in the noise the first time through. So it does seem to be learnable, if that's what you were curious about.

I'm curious how much of the change is because you've heard the sentence in "plaintext", and how much because you're hearing the squiggly version a second time.

You didn't hear the second part as a repeat of the speech? Are you not a native English speaker?

No, I didn't. I am a native English speaker from the Midwest part of America. I listened to it multiple times without hearing any speech in either of the sound effects.

After reading your comment, I listened to the audio again and now both audio samples do sound like a repeat of the speech. At no point did the audio samples sound different from one another, though.

The woman does have an English rather than American accent. I'm from England originally and the effect was quite dramatic the first time I listened to it: meaningless noise, then speech, then completely intelligible speech (the repeat of the original meaningless noise). The second time I listened to it some time later (after reading your comment) I could understand the speech in the first sound but it was clearer in the second. Listening to it again shortly afterwards the first and last sound both sounded like speech and sounded much the same as each other. I wonder whether the accent is a factor?

That's very interesting. Can you try some of the other samples from Matt Davis' page and report on your experiences?

When I listened to some of those the first time I was, as luck would have it, in a slightly noisy environment, so that I couldn't quite catch some bits of the English text the first time around; the corresponding parts of the "sine wave speech" remained obscure for me until I had listened again to the clear text.

So for me the effect seems to be stronger rather than weaker as a result of the speaker's accent plus English being a second language. I'm really puzzled as to why the effect might be weaker for you. Any ideas? Are you cognitively atypical in any way?

One reason I wished it had been two samples rather than one is that I thought I heard speech in the noise the first time, and wanted to listen again to see if I could figure it out without being primed.

This is the question I tried to answer elsewhere: after training on 4 or 5 samples I was able to hear the words in the remainder of the coded sentences the first time I heard them, without being primed by the decoded version of those sentences.

*reads in more detail* Indeed - thanks!

How about the other vocoded samples?

Thanks for the report anyway, that's interesting to know.

For people wanting different recordings of the garbled/non-garbled: it's right on the page right above the one Morendil linked to.

On the next sample, I only caught the last few words on the first play (of the garbled version only), and after five plays still got a word wrong. On the third, I only got two words the first time, and additional replays made no difference. On the fourth, I got half after one play, and most after two. On the fifth, I got the entire thing on the first play. (I'm not feeling as clear-headed today as I was the other day, but it didn't feel like a learning effect.) On some of them, I don't believe that even with a lot of practice I could ever get it all right, since some garbled words sound more like other plausible words than they do the originals.

Thinking about it more, it's a bit surprising that I did well. I generally have trouble making out speech in situations where other people don't have quite as much trouble. I'll often turn on subtitles in movies, even in my first language/dialect (American English). (In fact, I hate movies where the speech is occasionally muffled and there are no subtitles--two things that tend to go hand in hand with smaller production budgets.) OTOH, I have a good ear in general. I've had a lot of musical training, and I've worked with sound editing quite a bit.