The Affect Heuristic

35 Eliezer_Yudkowsky 27 November 2007 07:58AM

The affect heuristic is when subjective impressions of goodness/badness act as a heuristic—a source of fast, perceptual judgments.  Pleasant and unpleasant feelings are central to human reasoning, and the affect heuristic comes with lovely biases—some of my favorites.

Let's start with one of the relatively less crazy biases.  You're about to move to a new city, and you have to ship an antique grandfather clock.  In the first case, the grandfather clock was a gift from your grandparents on your 5th birthday.  In the second case, the clock was a gift from a remote relative and you have no special feelings for it.  How much would you pay for an insurance policy that paid out $100 if the clock were lost in shipping?  According to Hsee and Kunreuther (2000), subjects stated willingness to pay more than twice as much in the first condition.  This may sound rational—why not pay more to protect the more valuable object?—until you realize that the insurance doesn't protect the clock, it just pays if the clock is lost, and pays exactly the same amount for either clock.  (And yes, it was stated that the insurance was with an outside company, so it gives no special motive to the movers.)
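To make the point explicit: the expected payout of the policy is identical for either clock, so a purely financial agent would pay the same in both conditions. A minimal sketch (the loss probability is an illustrative assumption, not a figure from the study):

```python
# Toy expected-value calculation.  The payout is from the scenario;
# the loss probability is an assumed illustrative number.
payout = 100.0   # insurance pays $100 if the clock is lost in shipping
p_loss = 0.01    # assumed probability the movers lose the clock

# The expected payout does not depend on which clock is being shipped:
ev_sentimental_clock = p_loss * payout
ev_ordinary_clock = p_loss * payout

assert ev_sentimental_clock == ev_ordinary_clock  # identical, $1.00 either way
```

Whatever the subjects were buying with the extra money, it wasn't a larger expected payment.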

All right, but that doesn't sound too insane.  Maybe you could get away with claiming the subjects were insuring affective outcomes, not financial outcomes—purchase of consolation.

Then how about this?  Yamagishi (1997) showed that subjects judged a disease as more dangerous when it was described as killing 1,286 people out of every 10,000, versus a disease that was 24.14% likely to be fatal.  Apparently the mental image of a thousand dead bodies is much more alarming, compared to a single person who's more likely to survive than not.
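Reducing both descriptions to bare fatality rates shows the reversal plainly:

```python
# Compare the two framings from Yamagishi (1997).
deaths_per_10000 = 1286
fatality_rate_a = deaths_per_10000 / 10000   # "kills 1,286 out of every 10,000"
fatality_rate_b = 0.2414                     # "24.14% likely to be fatal"

# The disease judged MORE dangerous is actually the less fatal one:
assert fatality_rate_b > fatality_rate_a     # 0.2414 > 0.1286
```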

But wait, it gets worse.

continue reading »

Self-Anchoring

25 Eliezer_Yudkowsky 22 October 2007 06:11AM

Sometime between the age of 3 and 4, a human child becomes able, for the first time, to model other minds as having different beliefs.  The child sees a box, sees candy in the box, and sees that Sally sees the box.  Sally leaves, and then the experimenter, in front of the child, replaces the candy with pencils and closes the box so that the inside is not visible.  Sally returns, and the child is asked what Sally thinks is in the box.  Children younger than 3 say "pencils", children older than 4 say "candy".

Our ability to visualize other minds is imperfect.  Neural circuitry is not as flexible as a program fed to a general-purpose computer.  An AI, with fast read-write access to its own memory, might be able to create a distinct, simulated visual cortex to imagine what a human "sees".  We humans only have one visual cortex, and if we want to imagine what someone else is seeing, we've got to simulate it using our own visual cortex - put our own brains into the other mind's shoes.  And because you can't reconfigure memory to simulate a new brain from scratch, pieces of you leak into your visualization of the Other.

continue reading »

Illusion of Transparency: Why No One Understands You

57 Eliezer_Yudkowsky 20 October 2007 11:49PM

In hindsight bias, people who know the outcome of a situation believe the outcome should have been easy to predict in advance.  Knowing the outcome, we reinterpret the situation in light of that outcome.  Even when warned, we can't de-interpret to empathize with someone who doesn't know what we know.

Closely related is the illusion of transparency:  We always know what we mean by our words, and so we expect others to know it too.  Reading our own writing, the intended interpretation falls easily into place, guided by our knowledge of what we really meant.  It's hard to empathize with someone who must interpret blindly, guided only by the words.

June recommends a restaurant to Mark; Mark dines there and discovers (a) unimpressive food and mediocre service or (b) delicious food and impeccable service.  Then Mark leaves the following message on June's answering machine:  "June, I just finished dinner at the restaurant you recommended, and I must say, it was marvelous, just marvelous."  Keysar (1994) presented a group of subjects with scenario (a), and 59% thought that Mark's message was sarcastic and that June would perceive the sarcasm.  Among other subjects, told scenario (b), only 3% thought that June would perceive Mark's message as sarcastic.  Keysar and Barr (2002) seem to indicate that an actual voice message was played back to the subjects.

continue reading »

Hold Off On Proposing Solutions

45 Eliezer_Yudkowsky 17 October 2007 03:16AM

From pp. 55-56 of Robyn Dawes's Rational Choice in an Uncertain World.  Bolding added.

Norman R. F. Maier noted that when a group faces a problem, the natural tendency of its members is to propose possible solutions as they begin to discuss the problem.  Consequently, the group interaction focuses on the merits and problems of the proposed solutions, people become emotionally attached to the ones they have suggested, and superior solutions are not suggested.  Maier enacted an edict to enhance group problem solving: "Do not propose solutions until the problem has been discussed as thoroughly as possible without suggesting any."  It is easy to show that this edict works in contexts where there are objectively defined good solutions to problems.

continue reading »

The Logical Fallacy of Generalization from Fictional Evidence

38 Eliezer_Yudkowsky 16 October 2007 03:57AM

When I try to introduce the subject of advanced AI, what's the first thing I hear, more than half the time?

"Oh, you mean like the Terminator movies / the Matrix / Asimov's robots!"

And I reply, "Well, no, not exactly.  I try to avoid the logical fallacy of generalizing from fictional evidence."

continue reading »

Do We Believe Everything We're Told?

36 Eliezer_Yudkowsky 10 October 2007 11:52PM

Some early experiments on anchoring and adjustment tested whether distracting the subjects—rendering subjects cognitively "busy" by asking them to keep a lookout for "5" in strings of numbers, or some such—would decrease adjustment, and hence increase the influence of anchors.  Most of the experiments seemed to bear out the idea that cognitive busyness increased anchoring, and more generally contamination.

Looking over the accumulating experimental results—more and more findings of contamination, exacerbated by cognitive busyness—Daniel Gilbert saw a truly crazy pattern emerging:  Do we believe everything we're told?

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it.  This obvious-seeming model of cognitive process flow dates back to Descartes.  But Descartes's rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

Over the last few centuries, philosophers pretty much went along with Descartes, since his view seemed more, y'know, logical and intuitive.  But Gilbert saw a way of testing Descartes's and Spinoza's hypotheses experimentally.

continue reading »

Priming and Contamination

22 Eliezer_Yudkowsky 10 October 2007 02:23AM

Suppose you ask subjects to press one button if a string of letters forms a word, and another button if the string does not form a word.  (E.g., "banack" vs. "banner".)  Then you show them the string "water".  Later, they will more quickly identify the string "drink" as a word.  This is known as "cognitive priming"; this particular form would be "semantic priming" or "conceptual priming".

The fascinating thing about priming is that it occurs at such a low level—priming speeds up identifying letters as forming a word, which one would expect to take place before you deliberate on the word's meaning.

Priming also reveals the massive parallelism of spreading activation: if seeing "water" activates the word "drink", it probably also activates "river", or "cup", or "splash"... and this activation spreads, from the semantic linkage of concepts, all the way back to recognizing strings of letters.
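A toy model can convey the flavor of spreading activation, though the real mechanism is massively parallel neural firing, not graph traversal.  Here the association graph, decay factor, and depth are all illustrative assumptions, not claims about actual neural wiring:

```python
# Toy spreading-activation model: activation propagates outward from a
# seed concept to its associates, decaying at each hop.
ASSOCIATIONS = {  # hypothetical semantic links
    "water": ["drink", "river", "cup", "splash"],
    "drink": ["cup", "thirst"],
    "river": ["splash"],
}

def spread_activation(seed, decay=0.5, depth=2):
    """Return activation levels after spreading from `seed`."""
    activation = {seed: 1.0}
    frontier = [seed]
    for _ in range(depth):
        next_frontier = []
        for word in frontier:
            for neighbor in ASSOCIATIONS.get(word, []):
                boost = activation[word] * decay
                if boost > activation.get(neighbor, 0.0):
                    activation[neighbor] = boost
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return activation

act = spread_activation("water")
# "drink" is directly primed at 0.5; "thirst" only indirectly, at 0.25.
```

In the brain, of course, this spread happens automatically and in parallel, which is precisely why you can't switch it off.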

Priming is subconscious and unstoppable, an artifact of the human neural architecture.  Trying to stop yourself from priming is like trying to stop the spreading activation of your own neural circuits.  Try to say aloud the color—not the meaning, but the color—of the following letter-string:  "GREEN"

In Mussweiler and Strack (2000), subjects were asked the anchoring question:  "Is the annual mean temperature in Germany higher or lower than 5 Celsius / 20 Celsius?"  Afterward, on a word-identification task, subjects presented with the 5 Celsius anchor were faster on identifying words like "cold" and "snow", while subjects with the high anchor were faster to identify "hot" and "sun".  This shows a non-adjustment mechanism for anchoring: priming compatible thoughts and memories.

The more general result is that completely uninformative, known false, or totally irrelevant "information" can influence estimates and decisions.  In the field of heuristics and biases, this more general phenomenon is known as contamination.  (Chapman and Johnson 2002.)

continue reading »

We Change Our Minds Less Often Than We Think

39 Eliezer_Yudkowsky 03 October 2007 06:14PM

"Over the past few years, we have discreetly approached colleagues faced with a choice between job offers, and asked them to estimate the probability that they will choose one job over another.  The average confidence in the predicted choice was a modest 66%, but only 1 of the 24 respondents chose the option to which he or she initially assigned a lower probability, yielding an overall accuracy rate of 96%."
       —Dale Griffin and Amos Tversky, "The Weighing of Evidence and the Determinants of Confidence."  (Cognitive Psychology, 24, pp. 411-435.)

When I first read the words above—on August 1st, 2003, at around 3 o'clock in the afternoon—it changed the way I thought.  I realized that once I could guess what my answer would be—once I could assign a higher probability to deciding one way than the other—then I had, in all probability, already decided.  We change our minds less often than we think.  And most of the time we become able to guess what our answer will be within half a second of hearing the question.
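The arithmetic in the quoted passage is worth running through: an average stated confidence of 66% would predict several reversals among 24 respondents, but only one occurred.

```python
# Reproducing the figures from Griffin and Tversky's anecdote.
respondents = 24
reversals = 1    # chose the option initially given LOWER probability

accuracy = (respondents - reversals) / respondents
assert round(accuracy, 2) == 0.96   # vs. average stated confidence of 0.66
```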

How swiftly that unnoticed moment passes, when we can't yet guess what our answer will be; the tiny window of opportunity for intelligence to act.  In questions of choice, as in questions of fact.

continue reading »

Burdensome Details

30 Eliezer_Yudkowsky 20 September 2007 11:46PM

Followup to: Conjunction Fallacy

 "Merely corroborative detail, intended to give artistic verisimilitude to an otherwise bald and unconvincing narrative..."
            -- Pooh-Bah, in Gilbert and Sullivan's The Mikado

The conjunction fallacy is when humans rate the probability P(A&B) higher than the probability P(B), even though it is a theorem that P(A&B) <= P(B).  For example, in one experiment in 1981, 68% of the subjects ranked it more likely that "Reagan will provide federal support for unwed mothers and cut federal support to local governments" than that "Reagan will provide federal support for unwed mothers."

A long series of cleverly designed experiments, which weeded out alternative hypotheses and nailed down the standard interpretation, confirmed that the conjunction fallacy occurs because we "substitute judgment of representativeness for judgment of probability".  By adding extra details, you can make an outcome seem more characteristic of the process that generates it.  You can make it sound more plausible that Reagan will support unwed mothers, by adding the claim that Reagan will also cut support to local governments.  The implausibility of one claim is compensated by the plausibility of the other; they "average out".
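The gap between averaging plausibility and multiplying probability can be made concrete.  The numbers below are illustrative assumptions, not data from the experiment:

```python
# Illustrative numbers only: probability multiplies, but intuitive
# "representativeness" behaves more like an average.
p_support_mothers = 0.3   # assumed P(A): Reagan supports unwed mothers
p_cut_local_govt = 0.8    # assumed P(B|A): Reagan cuts local support

# The probability calculus: adding a conjunct can only drive it DOWN.
p_conjunction = p_support_mothers * p_cut_local_govt        # 0.24

# The felt plausibility: the strong second claim drags the estimate UP.
felt_plausibility = (p_support_mothers + p_cut_local_govt) / 2   # 0.55

assert p_conjunction <= p_support_mothers   # the theorem: P(A&B) <= P(A)
assert felt_plausibility > p_support_mothers
```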

Which is to say:  Adding detail can make a scenario SOUND MORE PLAUSIBLE, even though the event necessarily BECOMES LESS PROBABLE.

If so, then, hypothetically speaking, we might find futurists spinning unconscionably plausible and detailed future histories, or find people swallowing huge packages of unsupported claims bundled with a few strong-sounding assertions at the center.

continue reading »

Conjunction Controversy (Or, How They Nail It Down)

26 Eliezer_Yudkowsky 20 September 2007 02:41AM

Followup to: Conjunction Fallacy

When a single experiment seems to show that subjects are guilty of some horrifying sinful bias - such as thinking that the proposition "Bill is an accountant who plays jazz" has a higher probability than "Bill is an accountant" - people may try to dismiss (not defy) the experimental data.  Most commonly, by questioning whether the subjects interpreted the experimental instructions in some unexpected fashion - perhaps they misunderstood what you meant by "more probable".

Experiments are not beyond questioning; on the other hand, there should always exist some mountain of evidence which suffices to convince you.  It's not impossible for researchers to make mistakes.  It's also not impossible for experimental subjects to be really genuinely and truly biased.  It happens.  On both sides, it happens.  We're all only human here.

If you think to extend a hand of charity toward experimental subjects, casting them in a better light, you should also consider thinking charitably of scientists.  They're not stupid, you know.  If you can see an alternative interpretation, they can see it too.  This is especially important to keep in mind when you read about a bias and one or two illustrative experiments in a blog post.  Yes, if the few experiments you saw were all the evidence, then indeed you might wonder.  But you might also wonder if you're seeing all the evidence that supports the standard interpretation.  Especially if the experiments have dates on them like "1982" and are prefaced with adjectives like "famous" or "classic".

continue reading »
