
Do We Believe Everything We're Told?

Post author: Eliezer_Yudkowsky 10 October 2007 11:52PM

Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively "busy" by asking them to keep a lookout for "5" in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors.  Most of the experiments seemed to bear out the idea that cognitive busyness increased anchoring, and more generally contamination.

Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging:  Do we believe everything we're told?

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it.  This obvious-seeming model of cognitive process flow dates back to Descartes.  But Descartes's rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y'know, logical and intuitive.  But Gilbert saw a way of testing Descartes's and Spinoza's hypotheses experimentally.

If Descartes is right, then distracting subjects should interfere with both accepting true statements and rejecting false statements.  If Spinoza is right, then distracting subjects should cause them to remember false statements as being true, but should not cause them to remember true statements as being false.

Gilbert, Krull, and Malone (1990) bears out Spinoza's prediction: among subjects presented with novel statements labeled TRUE or FALSE, distraction had no effect on identifying true propositions (55% success for uninterrupted presentations vs. 58% when interrupted), but it did impair identifying false propositions (55% success when uninterrupted vs. 35% when interrupted).
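To see the asymmetry at a glance, here is a minimal sketch in Python that tabulates the figures just quoted; the numbers come from the paragraph above, while the table layout and names are illustrative only:

```python
# Accuracy at identifying novel statements, per Gilbert, Krull, and Malone
# (1990) as quoted above: (statement label, condition) -> proportion correct.
accuracy = {
    ("true",  "uninterrupted"): 0.55,
    ("true",  "interrupted"):   0.58,
    ("false", "uninterrupted"): 0.55,
    ("false", "interrupted"):   0.35,
}

for label in ("true", "false"):
    change = accuracy[(label, "interrupted")] - accuracy[(label, "uninterrupted")]
    print(f"{label} statements: interruption changes accuracy by {change:+.0%}")

# Prints:
# true statements: interruption changes accuracy by +3%
# false statements: interruption changes accuracy by -20%
```

Only the false statements suffer, which is Spinoza's signature: rejection, not acceptance, is the step that needs attention.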

A much more dramatic illustration was produced in followup experiments by Gilbert, Tafarodi and Malone (1993).  Subjects read aloud crime reports crawling across a video monitor, in which the color of the text indicated whether a particular statement was true or false.  Some reports contained false statements that exacerbated the severity of the crime; other reports contained false statements that extenuated (excused) the crime.  Some subjects also had to pay attention to strings of digits, looking for a "5", while reading the crime reports—this being the distraction task to create cognitive busyness.  Finally, subjects had to recommend the length of prison terms for each criminal, from 0 to 20 years.

Subjects in the cognitively busy condition recommended an average of 11.15 years in prison for criminals in the "exacerbating" condition, that is, criminals whose reports contained labeled false statements exacerbating the severity of the crime.  Busy subjects recommended an average of 5.83 years in prison for criminals whose reports contained labeled false statements excusing the crime.  This nearly twofold difference was, as you might suspect, statistically significant.

Non-busy participants read exactly the same reports, with the same labels, and the same strings of numbers occasionally crawling past, except that they did not have to search for the number "5".  Thus, they could devote more attention to "unbelieving" statements labeled false.  These non-busy participants recommended 7.03 years versus 6.03 years for criminals whose reports falsely exacerbated or falsely excused.
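Worked out in the same spirit, here is a small Python sketch of the sentencing numbers (the figures are from the two paragraphs above; the structure and names are mine):

```python
# Mean recommended prison terms in years, per Gilbert, Tafarodi, and Malone
# (1993) as quoted above: (condition, type of labeled-false statement) -> mean.
sentences = {
    ("busy",     "exacerbating"): 11.15,
    ("busy",     "excusing"):      5.83,
    ("non-busy", "exacerbating"):  7.03,
    ("non-busy", "excusing"):      6.03,
}

for condition in ("busy", "non-busy"):
    gap = sentences[(condition, "exacerbating")] - sentences[(condition, "excusing")]
    print(f"{condition}: labeled-false statements swing sentences by {gap:.2f} years")

# Prints:
# busy: labeled-false statements swing sentences by 5.32 years
# non-busy: labeled-false statements swing sentences by 1.00 years
```

The swing for busy subjects is more than five times the swing for non-busy subjects: attention is what pays for the "unbelieving".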

Gilbert, Tafarodi and Malone's paper was entitled "You Can't Not Believe Everything You Read".

This suggests—to say the very least—that we should be more careful when we expose ourselves to unreliable information, especially if we're doing something else at the time.  Be careful when you glance at that newspaper in the supermarket.

PS:  According to an unverified rumor I just made up, people will be less skeptical of this blog post because of the distracting color changes.

 

Part of the Seeing With Fresh Eyes subsequence of How To Actually Change Your Mind

Next post: "Cached Thoughts"

Previous post: "Priming and Contamination"


Gilbert, D. 2002. Inferential correction. In Gilovich, T., Griffin, D. and Kahneman, D. (eds.), Heuristics and Biases: The Psychology of Intuitive Judgment. Cambridge University Press.  You recognize this citation by now, right?

Gilbert, D., Krull, D. and Malone, P. 1990. Unbelieving the unbelievable: Some problems in the rejection of false information. Journal of Personality and Social Psychology, 59(4), 601-613.

Gilbert, D., Tafarodi, R. and Malone, P. 1993. You can't not believe everything you read. Journal of Personality and Social Psychology, 65(2), 221-233.

Comments (32)

Comment author: Nick_Tarleton 11 October 2007 12:19:40AM 3 points

"Some reports contained false statements that exacerbated the severity of the crime"

Should "false" be highlighted here?

This is scary.

Comment author: Eliezer_Yudkowsky 11 October 2007 12:35:08AM 0 points

Nick, fixed.

Comment author: Constant2 11 October 2007 12:40:48AM 14 points

Spinoza's view seems on the face of it much more likely than Descartes's, because it is much easier to implement. Anyone who has programmed knows that the easiest way to write a program to deal with an input is just to accept it, and that a check can be computationally expensive. Furthermore, how is one to understand a sentence without at least modeling the belief that the sentence is intended to elicit, so that one might at least understand what it means? (The sentence itself is merely a character/phoneme string and so does not yield meaning intrinsically.) The obvious and readily available way to model such a belief is to actually enter it: it is much easier simply to enter the actual brain state associated with the belief, perhaps adding a flag to mark it as nonserious, than to enter a wholly different state. We may infer from child studies that the higher-order skill of contemplating a belief without holding it is not immediately acquired, for it is only at age 4 or so (I think) that a child becomes able to understand that others have beliefs that differ from reality.
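A toy sketch of the two architectures this comment contrasts, in Python (the function names, the memory dict, and the "distracted" flag are all illustrative inventions, not from any paper):

```python
def cartesian_hear(memory, proposition, is_true, distracted):
    # Descartes: comprehend, evaluate, then accept or reject. Evaluation
    # takes attention, so distraction blocks storage of everything.
    if not distracted:
        memory[proposition] = is_true

def spinozan_hear(memory, proposition, is_true, distracted):
    # Spinoza: comprehension is acceptance. Rejection is a separate,
    # attention-hungry second pass that distraction can knock out.
    memory[proposition] = True
    if not distracted and not is_true:
        memory[proposition] = False

memory = {}
spinozan_hear(memory, "the robber carried a gun", is_true=False, distracted=True)
print(memory)  # {'the robber carried a gun': True} -- the false claim sticks
```

The Spinozan version is also the cheaper one to write, which is the commenter's point: acceptance-by-default falls out of the implementation for free.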

Comment author: Michael_Rooney 11 October 2007 01:53:20AM 2 points

Did you just believe that Descartes was modeling "cognitive-process flow" because some psychologist told you so? Or is it possible that Descartes was, y'know, prescribing how rationalists should approach belief, rather than describing how we generally do?

Comment author: gwern 14 May 2013 01:46:09AM 4 points

No, it's not possible, as one would know if one had 'just', 'y'know', looked up the citations in the papers and read what Descartes himself said in his Fourth Meditation:

Whereupon, regarding myself more closely, and considering what my errors are (which alone testify to the existence of imperfection in me), I observe that these depend on the concurrence of two causes, viz, the faculty of cognition, which I possess, and that of election or the power of free choice,—in other words, the understanding and the will. For by the understanding alone, I [neither affirm nor deny anything but] merely apprehend (percipio) the ideas regarding which I may form a judgment; nor is any error, properly so called, found in it thus accurately taken.

...the power of will consists only in this, that we are able to do or not to do the same thing (that is, to affirm or deny, to pursue or shun it), or rather in this alone, that in affirming or denying, pursuing or shunning, what is proposed to us by the understanding, we so act that we are not conscious of being determined to a particular action by any external force.

Seems pretty clearly descriptive and not normative... no 'should' about it.

Comment author: AnneC 11 October 2007 03:52:40AM 1 point

Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

That sounds like what Sam Adams was saying at the Singularity Summit -- the idea of "superstition" being essential to learning in some respects.

Comment author: Jeremy_McKibben-Sanders 11 October 2007 04:45:03AM 2 points

This reminds me of a proof I was working on the other day. I was trying to show that a proposition (c) is true, so I used the following argument.

If (1) is true, then either (a) is true or (c) is true. If (2) is true, then either (b) is true or (c) is true. (a) and (b) cannot both be true. (1) and (2) are true, so therefore (c) must be true.

This seems to follow Descartes' model of consideration and then acceptance of the proposition (c). However, I could have saved myself about half a page of space if I had simply started out by rejecting (c) and then waiting for a contradiction to "appear."

Of course this is quite the opposite of the Spinoza model, but like Constant said, it makes sense that you can save time and brain power by actively modeling a belief and then seeing what follows. As for why acceptance is the default, I'm not exactly sure. Perhaps it is simply quicker to accept a proposition rather than to waste time looking for its opposite.
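For anyone who wants the entailment in the comment above made mechanical, here is a brute-force check in Python (the five Booleans stand in for the commenter's (1), (2), (a), (b), (c); the framing is mine):

```python
from itertools import product

# Enumerate all truth assignments and confirm that every model of the
# premises makes (c) true.
for p1, p2, a, b, c in product([False, True], repeat=5):
    premises = (
        (not p1 or a or c)      # if (1), then (a) or (c)
        and (not p2 or b or c)  # if (2), then (b) or (c)
        and not (a and b)       # (a) and (b) cannot both be true
        and p1 and p2           # (1) and (2) are true
    )
    if premises:
        assert c, "countermodel found"
print("entailment holds: (c) is true in every model of the premises")
```

Assuming (c) false forces both (a) and (b), contradicting the third premise — which is exactly the half-page shortcut the commenter describes.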

Comment author: Anna3 11 October 2007 05:13:26AM 1 point

So doesn't this tie in well with your previous article about the denier's dilemma? It seems, if Gilbert/Spinoza are right, that the CDC mythbusters problem of people misremembering as "true" the myths presented by the CDC is an example of this mechanism (strengthened by reinforcement effects of re-encountering the myth).

Comment author: Hugo_Mercier 11 October 2007 01:23:21PM 3 points

I would just like to point out that the paper titled "Believe It or Not" (http://www.blackwell-synergy.com/doi/abs/10.1111/j.0956-7976.2005.01576.x) claims to refute the strongest of Gilbert's ideas (and rightly so, in my view).

Comment author: Sebastian_Hagen2 11 October 2007 04:59:25PM 5 points

One of the most obvious examples of commonly encountered unreliable information is advertising. Gilbert's results suggest that knowing that the information in advertisements is highly unreliable doesn't make you immune to their effects. This suggests that it's a good idea to avoid perceiving advertisements entirely, especially in situations where you're trying to concentrate on something else. The obvious way to do this is to aggressively use ad-blockers wherever possible; unfortunately, there are still media where this isn't practical.

Comment author: Kaj_Sotala 11 October 2007 06:35:52PM 4 points

What about statements that are so loaded to their listeners that they're rejected outright, with seemingly no consideration? Are they subject to the same process (and have such outrageous implications that they're rejected at once), or do they work differently?

Comment author: Constant2 11 October 2007 07:57:58PM 1 point

Contrary to what many seem to believe, I consider advertising to be one of the least harmful sources of unreliable information. For one thing, the cacophony of advertisements sends us contradictory messages. "Buy my product." "No, buy my product." One might argue that even such contradictory messages have a common element: "buy something". However, I have not noticed that I spend less money now that I hardly ever put myself at the mercy of television advertising, so I have serious doubts about whether advertising genuinely increases a person's overall spending. I notice, also, that I do not smoke, even though I have seen plenty of advertisements for particular brands of cigarettes. The impact of all those cigarette advertisements on my overall spending on cigarettes has evidently been minimal.

For another, the message itself seems not all that harmful in most cases. For example, suppose that advertising is ultimately the reason that I buy Tide detergent rather than another brand of detergent. How much am I harmed by this? The detergents all do pretty much the same thing.

And in many specific cases, where people's behavior has been blamed on the nefarious influence of advertising, what I generally see is that the accuser has curiously neglected some alternative, very likely explanations. Smoking is attractive because it delivers a drug. Smoking was popular long before it was advertised. I suspect that no more than a very small fraction of smokers started smoking because of advertising.

Comment author: TGGP4 11 October 2007 09:00:29PM 5 points

I have heard that advertising mainly shifts consumers from one brand to another. In that sense it is wasteful and an economist could give an argument for taxing it. I happen to like the subsidy of media by advertisements, so I wouldn't advocate it.

Comment author: Nancy_Lebovitz 12 October 2007 12:44:59AM 8 points

If people are that much more trusting when they're distracted, then it's important not to multi-task if you need to evaluate what you're looking at. Maybe it's just important to not multi-task.

Comment author: Nick_Tarleton 12 October 2007 02:04:48AM 3 points

In addition to advertisements, should we avoid fiction when we're distracted?

Comment author: taryneast 20 February 2011 11:45:16AM 3 points

A good question (though I suspect the simple answer is "no").

It also brings up the question of whether this is why we generalise from fictional evidence so often.

Comment author: nick2 13 October 2007 12:07:48AM 0 points

"Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration."

Whether this view is more accurate than Descartes's view depends on whether the belief in question is already commonly accepted. When in the typical situation a typical person Bob says "X is Y, therefore I will perform act A" or "X should be Y, therefore we should perform act A", Bob is not making a statement about X or Y; he is making a statement about himself. All the truth or reality that is required for Bob to signal his altruism is that it be probable that he believes that X is Y or that X should be Y. The probability of this belief depends far more on what else Bob and his peers believe than it does on the reality or truth of "X is Y".

Comment author: Gordon_Worley 13 October 2007 01:40:17PM 7 points

Between teaching mathematics to freshmen and spending most of my time learning mathematics, I've noticed this myself. When presented with a new result, the first inclination, especially depending on the authority of the source, is to believe it and figure there's a valid proof of it. But occasionally the teacher realizes that they made a mistake and may even scold the students for not noticing, since it is incredibly obvious (e.g. changing something like ||z - z_0|| to ||z - z_1|| between steps, even though a few seconds' thinking reveals it to be a typo rather than a mathematical insight).

Sometimes (and for a few lucky people, most of the time) individuals are in a mental state where they are actively thinking through everything being presented to them. For me, this happens a few times a semester in class, and almost always during meetings with my advisor. And occasionally I have a student who does it when I'm teaching. But in my experience this is a mentally exhausting task and often leaves you think-dead for a while afterwards (I find I can go about 40 minutes before I give out).

All this leads me to a conclusion, largely from my experience with what behavior produces what effects, that in mathematics the best way to teach is to assign problems and give students clues when they get stuck. The problems assigned, of course, should be ones that result in the student building up the mathematical theory. It's certainly more time consuming, but in the end more rewarding, in terms of both emotional satisfaction and understanding.

Comment author: Elizabeth 28 November 2010 06:38:25AM 7 points

As someone who spends a lot of time on the student side of those math classes (and as the student in the class who almost always catches those typographical errors), I suspect that there are students who notice the error but don't comment for social reasons (don't want to interrupt, don't want to be a know-it-all, don't want to be publicly erroneous in a correction, etc.). Your solution of giving students problems, while an excellent teaching tool, is not a particularly good test for this phenomenon because it fails to distinguish between students who really do miss the errors because they assume you are right and the students who noticed but didn't speak up, or those who simply weren't paying attention in the first place.

Comment author: taryneast 20 February 2011 11:48:57AM 5 points

I agree. I think the social-pressure aspect is even more exaggerated in business settings where there are not only no rewards for pointing out errors, but where you are often actively chastised for causing a team-member to lose face.

Comment author: NancyLebovitz 20 February 2011 12:42:38PM 4 points

'Nuff said

This was put up approvingly by two people on my friendslist.

Comment author: taryneast 20 February 2011 02:47:51PM 4 points

Brilliant blogpost, and quite correct.

There are certainly situations in which the pointing out of errors is not socially appropriate, and doesn't win you any friends.

When somebody's telling a joke or an interesting anecdote, you'll often find that nobody cares if the premises are correct. You'll tend to get along better if you bite your tongue - even if it is the 500th time you've heard that "you only use 10% of your brain" (for instance).

However... I do tend to find that getting along with people that don't want to know the truth is more energy-draining (for me)... just as I'm sure that if I let my own natural preference for truth take over... I'd be draining for them.

I find that "getting along with non-rational/truth-preferring people" is a tough skill... and involves a lot of compromise.

I'd love to see more articles on how to do this successfully (without going insane or compromising your values).

Also I'd like to point out that there really are situations in which you really do have to point out that somebody is just plain wrong... despite how uncomfortable it makes the other person feel.

And that, while the article is quite right that being patronising is not beneficial, there are many situations where "being right" is not about being patronising, but about making sure all the bases are covered.

This is often where IT-people clash with people such as their managers. Because really, sometimes code just can't do what they're asking, no matter how much they'd like us to "put on a can-do attitude".

Similarly, clients can give ambiguous or flat-out contradictory requirements... and these errors must be pointed out, regardless of whether the person loses face, because IT has to make a profit just as much as the client does, and these kinds of errors are where later disputes arise. Nipping it in the bud by pointing out they're wrong is the best thing for your long-term survivability here.

Of course - there are ways and means of doing so to make sure that egos aren't bruised in the process... but that's another article (or two), I'm sure. :)

Comment author: Omegaile 02 April 2013 06:12:18PM 0 points

I think the blog post was basically speaking in favor of the charity principle.

Comment author: taryneast 10 April 2013 11:20:25PM 1 point

I don't think I agree on that one.

The article isn't about choosing to reinterpret the other person's statements in a more favourable light.

It's about not sweating the small stuff, not drawing attention your way, and letting somebody else have fun without ruining it with detail that, in this social situation, is not actually necessary.

Comment author: Tim_Freeman 28 April 2008 12:01:24PM 1 point

Hugo Mercier's citation above for "Believe it or Not" by Hasson et al. wants money to give you the article. The article is available for free from Hasson's home page at:

http://home.uchicago.edu/~uhasson/

The direct URL is:

http://home.uchicago.edu/~uhasson/Belief.pdf

Comment author: bigjeff5 15 February 2011 11:08:37PM 1 point

Update:

Hasson's home page: http://hasson.org

Direct URL for paper: http://www.behaviometrix.com/public_html/Hasson.belief.pdf

Comment author: Doug3 21 October 2008 06:22:28PM 3 points

I, for one, found the color changing text completely persuasive.

Comment author: MarkusRamikin 24 June 2011 02:07:27PM 4 points

Funny, my brain just assumed it's all broken hyperlinks or something, and until the PS I didn't consciously realize there were any colors in the article.

Comment author: Martok 05 April 2012 10:33:36PM 0 points

Me too, but I fear I may be primed to believe Eliezer, as his previous posts contained stuff that I heard about before, granting him some advantage. Or it may be Authority...

Anyway: I find it interesting that a German newspaper mostly known for being the lowest form of journalism imaginable (but still highest-grossing) uses a similar technique in its "articles": it prints more or less randomly chosen fragments in bold or italics. Could using confusing fonts really be enough to get people to "believe everything"?

Something else I noticed: all highlighted phrases in this article are negative. This may have primed against the positive effects here. Somebody should test this.

Comment author: roland 02 June 2009 06:48:15PM 0 points

I'm amazed that Spinoza got it right at that time.

Comment author: diegocaleiro 05 March 2010 05:57:30AM 3 points

There are some millions of pages written by old philosophers; surely people can find true things that they guessed. This does not mean we should be amazed. At the moment we become amazed, we do not have available the non-amazing fact that Spinoza made 2367 mistakes in his writing life. I'm as amazed by Spinoza as I am amazed by Nostradamus. It is not zero, but it wouldn't be worth a book.

Comment author: mat33 08 October 2011 02:52:50AM 2 points

Well, no modern dictator I know of underestimates the mass media.

And basic rights and freedoms, where they work at all, do tend to work against excluding your opponents as an information source for the majority.