
Priming and Contamination

Post author: Eliezer_Yudkowsky, 10 October 2007 02:23AM

Suppose you ask subjects to press one button if a string of letters forms a word, and another button if the string does not form a word.  (E.g., "banack" vs. "banner".)  Then you show them the string "water".  Later, they will more quickly identify the string "drink" as a word.  This is known as "cognitive priming"; this particular form would be "semantic priming" or "conceptual priming".

The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word's meaning.

Priming also reveals the massive parallelism of spreading activation: if seeing "water" activates the word "drink", it probably also activates "river", or "cup", or "splash"... and this activation spreads, from the semantic linkage of concepts, all the way back to recognizing strings of letters.
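For readers who think in code, spreading activation can be caricatured as flood-fill over a graph, with activation attenuating at each hop. This is a minimal sketch; the network, the links, and the decay factor below are invented for illustration, not taken from the priming literature:

```python
# Toy model of spreading activation in a semantic network.
# Nodes are words; activation starts at a primed word and
# attenuates by a fixed decay factor at each link.

def spread_activation(network, source, decay=0.5, threshold=0.1):
    """Propagate activation outward from `source`, keeping the
    strongest activation level reached at each node."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop()
        for neighbor in network.get(node, []):
            level = activation[node] * decay
            if level >= threshold and level > activation.get(neighbor, 0.0):
                activation[neighbor] = level
                frontier.append(neighbor)
    return activation

# Illustrative links only -- not real association norms.
semantic_net = {
    "water": ["drink", "river", "cup", "splash"],
    "drink": ["thirst", "cup"],
    "river": ["bank"],
}

primed = spread_activation(semantic_net, "water")
```

On this toy model, priming "water" raises "drink" to half strength, and second-order associates like "thirst" and "bank" to a quarter, mirroring the way activation falls off with semantic distance while still touching many nodes in parallel.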

Priming is subconscious and unstoppable, an artifact of the human neural architecture.  Trying to stop yourself from priming is like trying to stop the spreading activation of your own neural circuits.  Try to say aloud the color—not the meaning, but the color—of the following letter-string:  "GREEN"

In Mussweiler and Strack (2000), subjects were asked the anchoring question:  "Is the annual mean temperature in Germany higher or lower than 5 Celsius / 20 Celsius?"  Afterward, on a word-identification task, subjects presented with the 5 Celsius anchor were faster to identify words like "cold" and "snow", while subjects with the high anchor were faster to identify "hot" and "sun".  This shows a non-adjustment mechanism for anchoring: priming of compatible thoughts and memories.

The more general result is that completely uninformative, known false, or totally irrelevant "information" can influence estimates and decisions.  In the field of heuristics and biases, this more general phenomenon is known as contamination.  (Chapman and Johnson 2002.)

Early research in heuristics and biases discovered anchoring effects, such as subjects giving lower (higher) estimates of the percentage of UN countries found within Africa, depending on whether they were first asked if the percentage was more or less than 10 (65).  This effect was originally attributed to subjects adjusting from the anchor as a starting point, stopping as soon as they reached a plausible value, and under-adjusting because they were stopping at one end of a confidence interval.  (Tversky and Kahneman 1974.)
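As a caricature in code, the original adjust-from-an-anchor story looks like this: slide from the anchor toward the plausible range and stop at its nearest edge, so the final answer under-adjusts in the direction of the anchor. The numbers (a subject's private 20-40% confidence interval for the UN question, the step size) are invented for illustration:

```python
# Toy model of Tversky and Kahneman's adjustment account of anchoring.
# The confidence interval and step size are illustrative assumptions.

def adjust_from_anchor(anchor, interval_low, interval_high, step=1.0):
    """Slide from the anchor toward the confidence interval, stopping
    at the first value that seems plausible: the interval's near edge."""
    estimate = anchor
    while estimate < interval_low:
        estimate += step
    while estimate > interval_high:
        estimate -= step
    return estimate

# A subject who privately finds 20-40% plausible for the UN question:
low_anchor_estimate = adjust_from_anchor(10, 20, 40)   # stops at 20
high_anchor_estimate = adjust_from_anchor(65, 20, 40)  # stops at 40
```

Both answers are individually "plausible", yet the anchor has pushed the two estimates to opposite ends of the subject's own confidence interval.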

Tversky and Kahneman's early hypothesis still appears to be the correct explanation in some circumstances, notably when subjects generate the initial estimate themselves (Epley and Gilovich 2001).  But modern research seems to show that most anchoring is actually due to contamination, not sliding adjustment.  (Hat tip to Unnamed for reminding me of this—I'd read the Epley/Gilovich paper years ago, as a chapter in Heuristics and Biases, but forgotten it.)

Your grocery store probably has annoying signs saying "Limit 12 per customer" or "5 for $10".  Are these signs effective at getting customers to buy in larger quantities?  You probably think you're not influenced.  But someone must be, because these signs have been shown to work, which is why stores keep putting them up.  (Wansink et al. 1998.)

Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias.  Once an idea gets into your head, it primes information compatible with it—and thereby ensures its continued existence.  Never mind the selection pressures for winning political arguments; confirmation bias is built directly into our hardware, associational networks priming compatible thoughts and memories.  An unfortunate side effect of our existence as neural creatures. 

A single fleeting image can be enough to prime associated words for recognition.  Don't think it takes anything more to set confirmation bias in motion.  All it takes is that one quick flash, and the bottom line is already decided, for we change our minds less often than we think...

 

Part of the Seeing With Fresh Eyes subsequence of How To Actually Change Your Mind

Next post: "Do We Believe Everything We're Told?"

Previous post: "Anchoring and Adjustment"


Chapman, G.B. and Johnson, E.J. 2002. Incorporating the irrelevant: Anchors in judgments of belief and value. In Gilovich et al. (2003).

Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12, 391–396.

Mussweiler, T. and Strack, F. 2000. Comparing is believing: a selective accessibility model of judgmental anchoring. European Review of Social Psychology, 10, 135-167.

Tversky, A. and Kahneman, D. 1974. Judgment under uncertainty: Heuristics and biases. Science, 185: 1124-1131.

Wansink, B., Kent, R.J. and Hoch, S.J. 1998. An Anchoring and Adjustment Model of Purchase Quantity Decisions. Journal of Marketing Research, 35(February): 71-81.

Comments (25)

Comment author: Venu 10 October 2007 03:28:08AM 3 points

"Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias. Once an idea gets into your head, it primes information compatible with it - and thereby ensures its continued existence."

I am not sure I understand this. Once an idea gets into my head, my brain should prime all information *related* to the idea, not just information that is compatible with the idea. I am of course not denying the existence of confirmation bias, just trying to understand how priming in particular can promote it.

Comment author: Eliezer_Yudkowsky 10 October 2007 04:09:05AM 5 points

Once an idea gets into my head, my brain should prime all information *related* to the idea, not just information that is compatible with the idea.

Because the terrifying truth is that compatible information is primed much more strongly than contrary information. Both are logically related, yes; but the brain is not, in that aspect, logical. It should be, but it isn't. If someone asks you whether the average temperature in Germany is more or less than 5 degrees Celsius, "cold" is primed more than "hot". That is just how our brain sorta-works.

Comment author: Aaron3 10 October 2007 04:35:18AM 0 points

What can we do about this? Can we reduce the effects of contamination by consciously avoiding contaminating input before making an important decision? Or does consciously avoiding it contaminate us?

Comment author: savagehenry 10 October 2007 05:19:05AM 7 points

I had to look at the html source where you said "Try to say aloud the color - not the meaning, but the color - of the following letter-string: "GREEN"" because I'm colorblind and I couldn't tell what color it was. Small amounts of red or green appear to be BOTH red and green simultaneously haha (show me a giant field of green and I can tell it's green most of the time, but show me a dot of green on a field of white and I have no clue, same with red). I guess that really isn't relevant to anything said here, I just thought it was funny considering the point of the exercise.

Comment author: danlowlite 27 October 2010 02:30:38PM 3 points

Same here. I had to look at the HTML source for the color code: #ff3300. But I figured that it wasn't green before I looked, because I guess I had been primed to expect it not to be the case. At least I think I did.

Comment author: taryneast 20 February 2011 11:15:01AM 5 points

Yeah. Somebody should change it to Blue. Blue-Yellow colour-blindness is far more rare than red-green, so more people would "get" the example ;)

Comment author: FiftyTwo 21 April 2011 09:47:34PM 3 points

Same here. Though the fact that I initially thought it was green, then managed to resolve it as red is probably a good example of priming in itself.

Comment author: Anders_Sandberg 10 October 2007 01:59:24PM 1 point

It appears that priming can be reduced by placing words into a context: priming for words previously seen in a text (or even a nonsense jumble) is weaker than when seen individually.

Comment author: Adirian 10 October 2007 08:03:20PM 5 points

Is it a statistical artifact, however, or a genuine intellectual one? That is, those who genuinely have no clue whatsoever in regard to the number of UN nations in Africa might take information about it as a weak sort of evidence - I don't know, so I'll go with a figure I've encountered that is associated with this question. Similarly, someone who is not familiar with pricing may see a "Limit 12" and believe, because of the presence of the sign, that the pricing - regardless of what it is, because they don't have comparative information - is extremely good.

Which is to say, your examples may come from subject-matter ignorance rather than priming, and conceptual priming may not be quite as contaminative as these studies suggest.

Comment author: TGGP4 10 October 2007 08:28:05PM 3 points

Adirian, priming still works even if subjects see that the number came out of a Wheel-of-Fortune-type random spin.

Comment author: Adirian 11 October 2007 08:47:46PM -1 points

Which still doesn't say anything about the impact of priming on an individual's decision-making process regarding a matter they are well-informed on - because weak correlation is still better than no correlation.

Comment author: MrHen 09 February 2010 06:00:20PM 8 points

Another practical example of this: When asking for ideas don't give examples of the ideas. Today I asked someone for a list of various non-mammal animal prints. For clarification I used the examples of bird feathers and monarch butterflies. But I had already thought of those and was looking for more. It took a little while to get feathers, butterflies, and mammals out of her head. Once we had moved on, I got some great answers, but the beginning was tricky.

The annoying part for me was that I wouldn't have spent any more time by just asking for animal prints and after she thought of mammals telling her, "We got those already, what else do you have?" Of course, I realized this one sentence too late. Ah well.

Comment author: ciphergoth 09 February 2010 06:04:08PM 9 points

"What's their house number? Is it number 73?" <- never do this!

Comment author: MrHen 09 February 2010 06:13:45PM 11 points

Yep. This is even more obvious with kids. Asking "What happened?" is much more likely to result in the truth than asking, "What happened? Did you hit him?"

Or, "How old are you?" versus, "How old are you? Are you five?"

On the other hand, if you want to use this to your advantage, you can ask, "Do you want fries with that?" Relatedly, a server friend of mine has noticed that the easiest way to get higher tabs is to start nodding when asking if they want extras.

If you look for this behavior in interviews you will do much better. It is surprising how much the people interviewing you want you to succeed, and how often they will prime the answers to the questions they are asking. (Or not, I guess, considering that if you succeed they take that as a sign that you would be a valuable asset to their company...)

Comment author: Aryn 17 January 2011 03:47:25AM 7 points

While I respect priming and contamination as a bias, I think you've overdramatized it in this article. Similar exaggerations of scientific findings for shock purposes have, up until recently, made me paranoid about attacks on my decision-making process, and not just cognitive bias either. In fact, this being before I read LW, I don't think I even considered cognitive biases other than what you call contamination here, and it still seriously screwed me up emotionally and socially.

So yes, concepts will cause someone to think of related, maybe compatible concepts. No, this is not mind control, and no, a flashed image on the screen will not rewrite all your utility functions, make you a paperclip maximizer, and kill your dog.

Comment author: trlkly 24 April 2012 10:43:15AM 0 points

Thank you. I started to feel like I was reading the patter of a Derren Brown act.

Comment author: Kenny 03 January 2013 02:48:00AM 4 points

Re-reading this post just now, I find it funny that I thought your comment over-dramatized, and much more than the post itself.

It's almost like you've been primed to think of rewritten utility functions and paperclip-maximizers by something in this post other than its explicit contents.

Comment author: MKani 13 August 2011 10:33:08PM 2 points

I first heard of cognitive priming on a TED talk where a guy from Skeptic magazine was explaining 'pseudoscience and weird beliefs'. They played a popular song backward, most of the audience couldn't hear anything that sounded like words. But when the supposed 'lyrics' of the backward song were put on the screen, everyone could clearly hear the words 'satan' and '666' and entire sentences that were supposedly there. It was easy to hear once we were 'primed' for it, even though normally no one would have heard anything but gibberish.

Comment author: satt 14 August 2011 06:03:15PM 1 point

Sounds a lot like Simon Singh's demonstration with "Stairway to Heaven".

Comment author: gjm 22 September 2011 07:42:11AM 3 points

Much the same trick can work, of course, with a song played forwards that has (entirely different) words. Here's one particularly nice example.

Comment author: AspiringRationalist 09 March 2013 03:03:28AM 0 points

I wonder whether that would have worked with better sound quality. I listened to it once without looking at the subtitles, and I couldn't understand a word.

Comment author: gwern 13 August 2011 11:12:59PM 2 points
Comment author: Vladimir_Nesov 13 August 2011 11:31:04PM 0 points

Fixed the link.

Comment author: mat33 08 October 2011 02:39:54AM 0 points

"Yet the most fearsome aspect of contamination is that it serves as yet another of the thousand faces of confirmation bias."

A horrible thing, if you look at it from the perspective of the cognition process of an individual ant (not that there is a lot of cognition expected to go on in the head of a single ant). And some useful insights into the cognitive process of the anthill as a whole, if you but try to look at it from another angle.

Our subcultures actually do some cognition. They get things done. They do come up with some workable models of the real world. Then we tend to attach some label (say, "Newton") to the results... without going into all that complexity contained in that particular subculture.

http://mat33.livejournal.com/716213.html?thread=683189#t683189

Comment author: hannahelisabeth 09 November 2012 08:58:22PM 2 points

The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word's meaning.

I would not expect this to take place before deliberating on a word's meaning. Think about it. How would you know if a string of letters is a word? If it corresponds to a meaning. Thus you have to search for a meaning in order to determine if the string of letters is a word. If it were a string of letters like alskjdfljasdfl, it would be obvious sooner, since it's unpronounceable and visually jarring, but something like "banack" could be a word, if only it had a meaning attached to it. So you have to check to see if there is a meaning there. So it doesn't seem all that strange to me that if you prime the neural pathways of a word's meaning, you'd recognize it as a word sooner.