Contrary to what many seem to believe, I consider advertising to be one of the least harmful sources of unreliable information. For one thing, the cacophony of advertisements sends us contradictory messages. "Buy my product." "No, buy my product." One might argue that even such contradictory messages have a common element: "buy something." However, I have not noticed that I spend less money now that I hardly ever put myself at the mercy of television advertising, so I have serious doubts about whether advertising genuinely increases a person's overall spending. I notice, also, that I do not smoke, even though I have seen plenty of advertisements for particular brands of cigarettes. The impact of all those cigarette advertisements on my overall spending on cigarettes has evidently been minimal.
For another, the message itself seems not all that harmful in most cases. For example, suppose that advertising is ultimately the reason that I buy Tide detergent rather than another brand of detergent. How much am I harmed by this? The detergents all do pretty much the same thing.
And in many specific cases, where people's behavior has been blamed on the nefarious influence of advertising, what I generally see is that the accuser has curiously neglected some alternative, very likely explanations. Smoking is attractive because it delivers a drug. Smoking was popular long before it was advertised. I suspect that no more than a very small fraction of smokers started smoking because of advertising.
Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively “busy” by asking them to keep a lookout for “5” in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors. Most of the experiments seemed to bear out the idea that being cognitively busy increased anchoring and, more generally, contamination.
Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging: Do we believe everything we’re told?
One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes’s rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.
Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y’know, logical and intuitive.1 But Gilbert saw a way of testing Descartes’s and Spinoza’s hypotheses experimentally.
If Descartes is right, then distracting subjects should interfere with both accepting true statements and rejecting false statements. If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false.
Gilbert, Krull, and Malone tested this, and their results bore out Spinoza’s prediction: among subjects presented with novel statements labeled true or false, distraction had no effect on identifying true propositions (55% success for uninterrupted presentations vs. 58% when interrupted), but did impair identifying false propositions (55% success when uninterrupted vs. 35% when interrupted).2
A much more dramatic illustration was produced in followup experiments by Gilbert, Tafarodi, and Malone.3 Subjects read aloud crime reports crawling across a video monitor, in which the color of the text indicated whether a particular statement was true or false. Some reports contained false statements that exacerbated the severity of the crime; other reports contained false statements that extenuated (excused) the crime. Some subjects also had to pay attention to strings of digits, looking for a “5,” while reading the crime reports—this being the distraction task to create cognitive busyness. Finally, subjects had to recommend the length of prison terms for each criminal, from 0 to 20 years.
Subjects in the cognitively busy condition recommended an average of 11.15 years in prison for criminals in the “exacerbating” condition, that is, criminals whose reports contained labeled false statements exacerbating the severity of the crime. Busy subjects recommended an average of 5.83 years in prison for criminals whose reports contained labeled false statements excusing the crime. This nearly twofold difference was, as you might suspect, statistically significant.
Non-busy participants read exactly the same reports, with the same labels, and the same strings of numbers occasionally crawling past, except that they did not have to search for the number “5.” Thus, they could devote more attention to “unbelieving” statements labeled false. These non-busy participants recommended 7.03 years versus 6.03 years for criminals whose reports falsely exacerbated or falsely excused.
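The contrast between the two conditions can be made concrete with a quick calculation over the numbers reported above (the dictionary keys are just descriptive labels, not terms from the original paper):

```python
# Recommended prison terms (in years) from Gilbert, Tafarodi, and Malone's
# crime-report study, as reported above.
busy = {"false_exacerbating": 11.15, "false_excusing": 5.83}
non_busy = {"false_exacerbating": 7.03, "false_excusing": 6.03}

# How much did the labeled-false statements shift each group's sentences?
busy_gap = busy["false_exacerbating"] - busy["false_excusing"]
non_busy_gap = non_busy["false_exacerbating"] - non_busy["false_excusing"]

print(f"Busy subjects' gap:     {busy_gap:.2f} years")
print(f"Non-busy subjects' gap: {non_busy_gap:.2f} years")
```

Statements explicitly labeled false shifted the busy subjects' sentences by over five years, versus about one year for the non-busy subjects—exactly the asymmetry Spinoza's model predicts when there is no spare attention left over for unbelieving.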
Gilbert, Tafarodi, and Malone’s paper was entitled “You Can’t Not Believe Everything You Read.”
This suggests—to say the very least—that we should be more careful when we expose ourselves to unreliable information, especially if we’re doing something else at the time. Be careful when you glance at that newspaper in the supermarket.
PS: According to an unverified rumor I just made up, people will be less skeptical of this essay because of the distracting color changes.
1See Robin Hanson, “Policy Tug-O-War,” Overcoming Bias (blog), 2007, http://www.overcomingbias.com/2007/05/policy_tugowar.html.
2Daniel T. Gilbert, Douglas S. Krull, and Patrick S. Malone, “Unbelieving the Unbelievable: Some Problems in the Rejection of False Information,” Journal of Personality and Social Psychology 59, no. 4 (1990): 601–613.
3Daniel T. Gilbert, Romin W. Tafarodi, and Patrick S. Malone, “You Can’t Not Believe Everything You Read,” Journal of Personality and Social Psychology 65, no. 2 (1993): 221–233.