
Overcoming the Curse of Knowledge

42 Post author: JesseGalef 18 October 2011 05:39PM

[crossposted at Measure of Doubt]

What is the Curse of Knowledge, and how does it apply to science education, persuasion, and communication? No, it's not a reference to the Garden of Eden story. I'm referring to a particular psychological phenomenon that can make our messages backfire if we're not careful.

Communication isn't a solo activity; it involves both you and the audience. Writing a diary entry is a great way to sort out thoughts, but if you want to be informative and persuasive to others, you need to figure out what they'll understand and be persuaded by. A common habit is to use ourselves as a mental model - assuming that everyone else will laugh at what we find funny, agree with what we find convincing, and interpret words the way we use them. The model works to an extent - especially with people similar to us - but other times our efforts fall flat. You can present the best argument you've ever heard, only to have it fall on dumb - sorry, deaf - ears.

That's not necessarily your fault - maybe they're just dense! Maybe the argument is brilliant! But if we want to communicate successfully, pointing fingers and assigning blame is irrelevant. What matters is getting our point across, and we can't do it if we're stuck in our head, unable to see things from our audience's perspective. We need to figure out what words will work.

Unfortunately, that's where the Curse of Knowledge comes in. In 1990, Elizabeth Newton did a fascinating psychology experiment: She paired participants into teams of two: one tapper and one listener. The tappers picked one of 25 well-known songs and would tap out the rhythm on a table. Their partner - the designated listener - was asked to guess the song. How do you think they did?

Not well. Of the 120 songs tapped out on the table, the listeners only guessed 3 of them correctly - a measly 2.5 percent. But get this: before the listeners gave their answer, the tappers were asked to predict how likely their partner was to get it right. Their guess? Tappers thought their partners would get the song 50 percent of the time. You know, only overconfident by a factor of 20. What made the tappers so far off?
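The arithmetic behind that factor of 20 can be checked directly from the figures reported above (a minimal sketch; the numbers are the post's, the variable names are mine):

```python
# Sanity check of the Newton (1990) tapping-experiment numbers
# as reported in the post: 120 songs tapped, 3 guessed correctly,
# and tappers predicting a 50% success rate.
correct = 3
total = 120

actual_rate = correct / total   # fraction the listeners actually got right
predicted_rate = 0.50           # the tappers' average prediction

print(f"actual success rate: {actual_rate:.1%}")           # 2.5%
overconfidence = predicted_rate / actual_rate
print(f"overconfident by a factor of {overconfidence:g}")  # 20
```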

They lost perspective because they were "cursed" with the additional knowledge of the song title. Chip and Dan Heath use the story in their book Made to Stick to introduce the term:


"The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it's like to lack that knowledge. When they're tapping, they can't imagine what it's like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has "cursed" us. And it becomes difficult or us to share our knowledge with others, because we can't readily re-create our listeners' state of mind."


So it goes with communicating complex information. Because we have all the background knowledge and understanding, we're overconfident that what we're saying is clear to everyone else. WE know what we mean! Why don't they get it? It's tough to remember that other people won't make the same inferences, have the same word-meaning connections, or share our associations.

It's particularly important in science education. The more time a person spends in a field, the more the field's obscure language becomes second nature. Without special attention, audiences might not understand the words being used - or worse yet, they might get the wrong impression.

Over at the American Geophysical Union blog, Callan Bentley gives a fantastic list of Terms that have different meanings for scientists and the public.

What great examples! Even though the scientific terms are technically correct in context, they're obviously the wrong ones to use when talking to the public about climate change. An inattentive scientist could know all the material but leave the audience walking away with the wrong message.

We need to spend the effort to phrase ideas in a way the audience will understand. Is that the same as "dumbing down" a message? After all, complicated ideas require complicated words and nuanced answers, right? Well, no. A real expert on a topic can give a simple distillation of material, identifying the core of the issue. Bentley did an outstanding job rephrasing technical, scientific terms in a way that conveys the intended message to the public.

That's not dumbing things down; it's showing a mastery of the concepts. And he was able to do it by overcoming the "curse of knowledge," seeing the issue from other people's perspective. Kudos to him - it's an essential part of science education, and something I really admire.

P.S. - By the way, I chose that image for a reason: I bet once you see the baby in the tree you won’t be able to ‘unsee’ it. (image via Richard Wiseman)

Comments (54)

Comment author: [deleted] 18 October 2011 08:31:12PM 6 points [-]

Because we have all the background knowledge and understanding, we're overconfident that what we're saying is clear to everyone else. WE know what we mean!

As I see it, this is the biggest obstacle to LW's mission.

Comment author: jsalvatier 18 October 2011 06:10:56PM 6 points [-]

Interesting study. It's definitely the intuition of LWers that people tend to underestimate the distance between their own mind and someone else's mind. I think here this is commonly discussed as underestimating inferential distances. These ideas seem related to false consensus (actually, I'm interested in seeing some meta research about this). It seems like there should be some generalization of these ideas.

Comment author: mstevens 19 October 2011 09:18:39AM 1 point [-]

I thought we called this Typical Mind Fallacy. Although that's more naming the problem rather than talking about how or why it might occur.

Comment author: [deleted] 19 October 2011 11:10:44AM 5 points [-]

"The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it's like to lack that knowledge. When they're tapping, they can't imagine what it's like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has "cursed" us. And it becomes difficult or us to share our knowledge with others, because we can't readily re-create our listeners' state of mind."

Imagine that the tapper is familiar with a song and expects the listener to be familiar with it, but he doesn’t know the song’s title, or any word that he would expect the listener to recognise as a name for it. If he taps out a rhythm from the song, he will have the song ‘playing’ in his head as he does so. This might lead him to overestimate how accurate the rhythm is, or how easy the song is to distinguish from that rhythm alone. When he hears the rhythm that he is tapping out, internally he also hears the song, making it very difficult to judge what it would be like for someone just to hear the rhythm.

Although it would be difficult to create an experiment in which knowledge of a popular song’s title and “musical knowledge” of the song are rigorously separated, we see that the tappers have two kinds of knowledge that the listener lacks: the song title, and musical knowledge of the song – i.e. the ability to replay the song with decent fidelity in one’s mind.

Chip and Dan Heath suggest that knowledge of the song’s title is the critical knowledge that causes the tappers to be overconfident about how likely the listeners are to recognise the song. This is a complex statement about the way in which the human brain works, but we don’t yet understand the brain in great detail; as far as I am aware there is no particular neuroscientific evidence that should cause us to believe that knowledge of the song title, rather than musical knowledge of the song, is the crux of the problem. Therefore I am not inclined to view their explanation as authoritative, and (assuming that there isn't just some problem with the experiment) on the strength of introspection I lean towards the idea that musical knowledge has the greater effect in causing this overconfidence.

The failure to try to correct for this bias could be regarded as an example of irrationally expecting short inferential distances, but I view that as being more related to the understanding of words and concepts - complex areas of the map, that are charted on one person's map in a particular way but not on another's. I think a better fit would be the mind projection fallacy: the error of projecting the properties of one's own mind into the external world. In this case the property being projected is the person's internal soundtrack accompanying the rhythm that he is physically tapping.

Actually, expecting short inferential distances is a special case of the mind projection fallacy – the property being projected is the detailed structure of words and reductions in someone’s map of the territory; the person in question fails to realise that the reductions, the causal relationships and the subtle definitional changes that accompany words in his mental model do not accompany those words when they are processed by the brains of other people.

So, “the curse of knowledge” is essentially the problem of the mind projection fallacy as it applies to a given person’s level of knowledge (of various kinds) with respect to the knowledge possessed by others, and this knowledge need not be encoded verbally.

Comment author: [deleted] 18 October 2011 06:25:17PM 5 points [-]

1) I suppose this is to be expected given priming, anchoring, and self-anchoring, but it's a worrying bias nonetheless. It certainly does help explain why inferential distances feel so hard to bridge.

2) I like the picture--it's a good visual metaphor for the point you're making.

3) I don't think the table of scientific words is a good example - it's not that common people don't know anything about these words; they're simply used to using them in a nontechnical context, whereas the scientists are used to using them in a technical context. The scientists' uses of the words are not more valid or more informed than the laymen's, they're just contextually different. Many of the examples in the table (especially "values," "bias," and "error") aren't the result of a knowledge gap, but of a simple definitional dispute.

Comment author: KatieHartman 18 October 2011 06:58:15PM 4 points [-]

Many of the examples in the table (especially "values," "bias," and "error") aren't the result of a knowledge gap, but of a simple definitional dispute.

It's not clear to me which is the case, actually. It would be difficult to dispute the assertion that the average layman is almost always primed to read "positive" as "good" rather than "present" or "upward," but that doesn't indicate whether or not he's actually aware of those alternate uses. Maybe he's never been exposed to scientific literature - that wouldn't exactly be shocking.

I wish I could access the original paper the table was published in. Alas!

Comment author: wallowinmaya 18 October 2011 06:48:55PM *  12 points [-]

Great post. See also this related post by Eliezer.

Comment author: KenChen 18 October 2011 08:07:38PM 4 points [-]
Comment author: Xachariah 19 October 2011 02:48:43AM *  4 points [-]

I appreciate the link, but I wish that you would comment more than that. Eliezer's post is excellent and topical. It's directly relevant as a source for discussion here. I appreciate you linking it, because I was going to look up that very post directly after reading this one by JesseGalef. It is definitely good for you to link to it and I upvoted appropriately.

However, if I were the OP, I think that I would be hurt reading your response. Having put work into a post including original cites and examples, I could easily interpret your post as dismissing mine as inferior or worthless compared to his. I don't think you intended to do so, but I fear you may be defecting by accident. In the long run, a culture like this could impact people's willingness to make new posts.

I like when others come up with independent identical conclusions; it makes our understanding stronger as a whole. I also like when people link to other posts so that we can share our points of reference, as you've done. I just wish that when people do the latter, they also take care to encourage the former at the same time.

Comment author: [deleted] 19 October 2011 12:14:18PM 3 points [-]

However, if I were the OP, I think that I would be hurt reading your response. Having put work into a post including original cites and examples, I could easily interpret your post as dismissing mine as inferior or worthless compared to his.

It's pretty much customary on LW to provide links to related articles; doing so shouldn't be interpreted as a dismissal. Though it might be defecting by accident in some other context, that's not really the case here.

Comment author: JesseGalef 19 October 2011 11:15:58PM 3 points [-]

Thanks for the clarification - this is my first post on LW and I wasn't sure how to interpret the "link" comments.

As it was, I'd upvoted them because I appreciate knowing what else I'd probably enjoy reading - there's so much material and it really helps having you guys pointing to relevant articles. It's good to know they're intended that way, and not as admonitions for not already including those links.

Again, everyone, thanks for making me feel welcome!

Comment author: RobertLumley 19 October 2011 07:31:57PM *  1 point [-]

Well he's new to LW so he probably doesn't know that.

I've been here for several months, and I'm not sure I'd have said it was "customary"... (But that being said, I didn't see the OP as offensive, but I see how it could be accidentally read that way.)

Comment author: RobertLumley 19 October 2011 10:48:29PM *  0 points [-]

Does anyone know why the link I tried to link in that comment wouldn't show up? When I edit it, the syntax is correct...

The text I tried to post was "new to LW" which will show up properly in this comment but not the other one...

Comment author: wallowinmaya 19 October 2011 01:08:12PM 1 point [-]

I've tweaked my comment. I hope it's now nice enough ;)

Comment author: byrnema 18 October 2011 10:39:56PM 4 points [-]

They weren't playing the game right. The way to correctly play the game, especially among siblings, is for a person to always pick the same song. For example, if I'm tapping, I'm tapping 'Jingle Bells'. And we have a near 100% success rate. (It is not quite 100% due to the initial learning curve, but the success rate then steadily improves over time.)

That said... I'm tapping a tune as I type on this keyboard. Can you guess it??

Comment author: lessdazed 18 October 2011 11:59:15PM *  8 points [-]

Is it Cthulhu Lives!?

The Deep Ones wait you know
Swimming in the sea
Their numbers they will grow
Swimming safe and free

He's not dead but dreams
Until the fateful day
When they set the Old Ones free
On Mankinds final day!

Oh! Cthulhu Lives! Cthulhu Lives, deep down in the sea
In the city of R'Lyeh waiting to be freed
Oh! Cthulhu Lives! Cthulhu Lives, deep down in the sea
In the city of R'Lyeh waiting to be freed

Comment author: JesseGalef 18 October 2011 11:45:09PM *  3 points [-]

Love it!

That brings to mind a fantastic set of posts on Mind Your Decisions (game theory blog) about focal points and coordination problems. If there's anything identifying about one of the songs - even being first on the list - it's a good idea to choose that one.

... Man, I bet psych researchers hate people like us.

Comment author: pedanterrific 19 October 2011 04:53:14AM 3 points [-]

focal points and coordination problems

This post was what motivated me to get the book. It's a great book.

Man, I bet psych researchers hate people like us.

As both a 'person like us' and a (prospective) psych researcher, I can say: nah, we just toss us out as outliers.

Comment author: Alicorn 18 October 2011 11:06:52PM 6 points [-]

Is it Jingle Bells?!

Comment author: byrnema 18 October 2011 11:10:24PM *  4 points [-]

Y E S ... Y E S ... Y-Y E S-SSSS ...

Comment author: ciphergoth 20 October 2011 02:32:29PM 1 point [-]

In my experience, at least in the UK, there is one tune that everyone I've tried so far has recognised from my tapping it out :-)

Comment author: lessdazed 19 October 2011 04:40:22PM 0 points [-]

Have you ever played the "Shadow of the Washington Monument" game?

It's played almost like 20 questions, except that it only takes one question if the game is played right.

OK, let's play: I'm thinking of a person, place or thing. Can you guess who, where, or what it is, by having me answer your yes or no questions? The goal is to guess it with as few questions asked as possible.

Comment author: byrnema 19 October 2011 04:56:54PM 1 point [-]

Ha ha, OK.

Nevertheless I get one question. Can you guess what I'll ask?

Comment author: lessdazed 19 October 2011 05:11:03PM 1 point [-]

It appears I don't have to guess. ;-)

Comment author: byrnema 20 October 2011 04:06:02PM 1 point [-]

Huh. I was going to ask, 'Is it bigger than a breadbox?'.

Google predicts I would have asked, 'Animal, mineral or vegetable?'

Comment author: jimrandomh 19 October 2011 04:55:28PM 1 point [-]

Have you ever played the "Shadow of the Washington Monument" game? It's played almost like 20 questions, except that it only takes one question if the game is played right.

Is it underneath the Washington Monument?

Comment author: billswift 18 October 2011 06:13:50PM 4 points [-]

You mean overconfident by a factor of 20.

Comment author: JesseGalef 18 October 2011 06:21:18PM *  3 points [-]

Thanks, good catch!

[EDIT: For the record, I had accidentally written "by a factor of 40." I corrected it in the article for future readers.]

Comment author: gwern 18 October 2011 06:50:50PM 15 points [-]

No links for 'inferential distance'? The phrase itself is an inferential distance...

Comment author: JesseGalef 18 October 2011 07:28:53PM *  4 points [-]

Inferential distance is an extremely handy phrase. I was actually unaware of it (an example of distance?) until today, but it's definitely related!

(On an off-topic note, this is my first post on LW and my first chance to tell you that I mentioned you in a post I wrote when I found Prediction Book: "This site isn’t new to rationalists: Eliezer and the LessWrong community noticed it a couple years ago, and LessWrong’er Gwern has been using it to – among other things – track inTrade predictions.")

Comment author: matt 26 October 2011 07:16:39PM 1 point [-]

Hopefully most LWers know PBook - it was written and is hosted by LW's hosts TrikeApps.

Comment author: gwern 18 October 2011 08:35:49PM 1 point [-]

Interesting post, but one of your commenters was right, I think - at least, I thought I knew all the active PBers, and you don't seem to be one of them.

Comment author: JesseGalef 18 October 2011 08:47:31PM 3 points [-]

I've mostly been using it to track my predictions about the winner of each football game, but have my preferences set to keep predictions private.

As expected, I'm inappropriately confident at most levels of "confidence feeling" except the very high levels, where my accuracy can be more attributed to luck and a small sample size.

Comment author: nazgulnarsil 19 October 2011 07:54:56PM 3 points [-]

Using technical definitions and ignoring folk meanings is something I've been noticing more in myself and others lately. Until I started making the distinction when listening to others I never realized how painfully bad it must sound when I do it.

Comment author: [deleted] 18 October 2011 09:57:59PM *  13 points [-]

ಠ_ಠ

When I was young people on LW didn't use pictures. There was text. And if you complained we threw another dozen links to walls of text at you.

Comment author: saturn 18 October 2011 11:52:19PM 5 points [-]
Comment author: Normal_Anomaly 19 October 2011 12:12:24AM *  3 points [-]

<lame excuse> Those aren't pictures, they're diagrams! </excuse>

(Upvoted.)

Comment author: Kaj_Sotala 20 October 2011 09:26:31PM *  5 points [-]

We need our diagrams of alien invaders in order to recognize the aliens when they appear among us. Of the dangerous mutants, too.

Comment author: [deleted] 19 October 2011 06:39:30AM *  5 points [-]

There were a few, I admit. But they were shameful and modest. And we didn't talk about them. Look at how wonderfully sloppy those graphs were! These modern things are so prideful and vain.

Comment author: Nornagest 18 October 2011 10:21:31PM 2 points [-]

A fine sentiment, but the emoticon takes something from it.

Comment author: JesseGalef 18 October 2011 11:48:05PM 8 points [-]

Indeed - when I was young, we didn't use emoticons. We typed "emote smile" and let the MUD client fill in the rest.

... Too nerdy?

Comment author: [deleted] 19 October 2011 06:37:47AM *  3 points [-]

I am trying to overcome my future shock at the strange society LW has become in the fantastic, futuristic and frightening year of 2011.

Comment author: gwern 20 October 2011 12:29:12AM 0 points [-]

Oh I dunno, we're barely at shock level 2 or 3 here most of the time.

Comment author: pedanterrific 19 October 2011 05:47:54AM 4 points [-]

(ノಠ益ಠ)ノ彡┻━┻

Comment author: Normal_Anomaly 20 October 2011 12:34:30AM 0 points [-]

Firefox isn't rendering that right. What is it? An emoticon of a thumbs up?

Comment author: pedanterrific 20 October 2011 12:52:30AM 2 points [-]
Comment author: billswift 18 October 2011 06:20:04PM 4 points [-]

And it is "dumbing things down", since the not ignorant general understanding, that is of the well-educated, but not necessarily scientifically-educated, population has similar usages. Where do you think the scientific usages came from?

Comment author: tenshiko 21 October 2011 03:20:57AM 0 points [-]

At the very least "aerosol", "uncertainty", and "positive" have the "public" connotations even in well-educated humanities circles. There are some terms of science that simply are used differently, positive probably the most obvious.

Comment author: fractalman 26 June 2013 09:06:36AM 0 points [-]

I see... a blob, where I know the "baby" is. I'm sure it'd resolve into a baby if I let my perceptions keep chewing on it, but I happen to be in an abnormal state of tiredness right now.

Comment author: Armok_GoB 18 November 2011 06:52:58PM 0 points [-]

I get this to crippling extremes. Not only with science stuff, but with every single culture or subculture I've interacted with, and a bunch of specialized sites like TVtropes or LessWrong. Not to mention things I've just observed on my own and never learned or come up with a name for. I'm basically incapable of communicating anything with anyone at all by this point.

Comment author: Peacewise 24 October 2011 03:41:46AM 0 points [-]

What is being discussed is the differences between the 4 stages of learning.

1st: A person doesn't know what they don't know.
2nd: A person knows that they don't know.
3rd: A person needs conscious thinking to know what they know.
4th: A person knows what they know automatically.

The expert who is unable to express themselves to the layman is in the 4th stage. A teacher's best position is the 3rd stage, for they are conscious about what they know and hence can assist others in knowing. A person in the 2nd stage is ready to learn - they accept they don't know it all - but they wouldn't be an effective teacher. A person in the 1st stage isn't ready to learn and couldn't teach either.

Comment author: pedanterrific 24 October 2011 03:58:55AM 0 points [-]

This is interestingly reversed from what I expected. I would have put it at

  1. The deadly Unk-Unks.
  2. Known unknowns.
  3. Gets it, but can't express it.
  4. Groks it, and can teach it.
Comment author: gusl 20 October 2011 05:17:20PM 0 points [-]

My preferred term for this phenomenon is "expert blindspot". See also "illusion of transparency": http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/

Comment author: MixedNuts 20 October 2011 01:46:53PM 0 points [-]

I notice that general vocabulary has a lot more value judgement in it. Most of this is likely due to descriptive, value-neutral words being generated every time there is a new thing to describe, whereas new good or bad properties are rare (or, at least, rarer: see hacker jargon for an example of specialized vocabulary with lots of value judgements, yet even more value-neutral words). I suppose another effect is that, when not thinking deeply about a problem (and thus not creating jargon), it is convenient to rely on general good/bad feelings about things.