Inferential distance is an extremely handy phrase. I was actually unaware of it (an example of distance?) until today, but it's definitely related!
(On an off-topic note, this is my first post on LW and my first chance to tell you that I mentioned you in a post I wrote when I found Prediction Book: "This site isn’t new to rationalists: Eliezer and the LessWrong community noticed it a couple years ago, and LessWrong’er Gwern has been using it to – among other things – track inTrade predictions.")
Interesting post, but one of your commenters was right, I think - at least, I thought I knew all the active PBers, and you don't seem to be one of them.
I've mostly been using it to track my predictions about the winner of each football game, but have my preferences set to keep predictions private.
As expected, I'm inappropriately confident at most levels of "confidence feeling" except the very high levels, where my accuracy is better attributed to luck and a small sample size.
I appreciate the link, but I wish that you would comment more than that. Eliezer's post is excellent, topical, and directly relevant as a source for discussion here. I was going to look up that very post directly after reading this one by JesseGalef, so it is definitely good of you to link to it, and I upvoted accordingly.
However, if I were the OP, I think that I would be hurt reading your response. Having put work into a post including original cites and examples, I could easily interpret your post as dismissing mine as inferior or worthless compared to his. I don't think you intended to do so, but I fear you may be defecting by accident. In the long run, a culture like this could impact people's willingness to make new posts.
I like when others come up with independent identical conclusions; it makes our understanding stronger as a whole. I also like when people link to other posts so that we can share our points of reference, as you've done. I just wish that when people do the latter, they also take care to also encourage the former at the same time.
However, if I were the OP, I think that I would be hurt reading your response. Having put work into a post including original cites and examples, I could easily interpret your post as dismissing mine as inferior or worthless compared to his.
It's pretty much customary on LW to provide links to related articles; doing so shouldn't be interpreted as a dismissal. Though it might be defecting by accident in some other context, that's not really the case here.
Thanks for the clarification - this is my first post on LW and I wasn't sure how to interpret the "link" comments.
As it was, I'd upvoted them because I appreciate knowing what else I'd probably enjoy reading - there's so much material and it really helps having you guys pointing to relevant articles. It's good to know they're intended that way, and not as admonitions for not already including those links.
Again, everyone, thanks for making me feel welcome!
ಠ_ಠ
When I was young people on LW didn't use pictures. There was text. And if you complained we threw another dozen links to walls of text at you.
We need our diagrams of alien invaders in order to recognize the aliens when they appear among us. Of the dangerous mutants, too.
There were a few, I admit. But they were shameful and modest, and we didn't talk about them. Look at how wonderfully sloppy those graphs were! These modern things are so prideful and vain.
Indeed - when I was young, we didn't use emoticons. We typed "emote smile" and let the MUD client fill in the rest.
... Too nerdy?
I am trying to overcome my future shock at the strange society LW has become in the fantastic, futuristic and frightening year of 2011.
Interesting study. It's definitely the intuition of LWers that people tend to underestimate the distance between one's own mind and someone else's. I think this is commonly discussed here as underestimating inferential distances. These ideas seem related to the false-consensus effect (actually, I'm interested in seeing some meta-research about this). It seems like there should be some generalization of these ideas.
I thought we called this the typical mind fallacy. Although that's more naming the problem than talking about how or why it might occur.
They weren't playing the game right. The way to correctly play the game, especially among siblings, is for a person to always pick the same song. For example, if I'm tapping, I'm tapping 'Jingle Bells'. And we have a near 100% success rate. (It is not quite 100% due to the initial learning curve, but the success rate then steadily improves over time.)
That said... I'm tapping a tune as I type on this keyboard. Can you guess it??
Is it Cthulhu Lives!?
The Deep Ones wait you know
Swimming in the sea
Their numbers they will grow
Swimming safe and free
He's not dead but dreams
Until the fateful day
When they set the Old Ones free
On Mankind's final day!
Oh! Cthulhu Lives! Cthulhu Lives, deep down in the sea
In the city of R'Lyeh waiting to be freed
Oh! Cthulhu Lives! Cthulhu Lives, deep down in the sea
In the city of R'Lyeh waiting to be freed
Love it!
That brings to mind a fantastic set of posts on Mind Your Decisions (game theory blog) about focal points and coordination problems. If there's anything identifying about one of the songs - even being first on the list - it's a good idea to choose that one.
... Man, I bet psych researchers hate people like us.
Have you ever played the "Shadow of the Washington Monument" game?
It's played almost like 20 questions, except that it only takes one question if the game is played right.
OK, let's play: I'm thinking of a person, place or thing. Can you guess who, where, or what it is, by having me answer your yes or no questions? The goal is to guess it with as few questions asked as possible.
Huh. I was going to ask, 'Is it bigger than a breadbox?'.
Google predicts I would have asked, 'Animal, mineral or vegetable?'
Have you ever played the "Shadow of the Washington Monument" game? It's played almost like 20 questions, except that it only takes one question if the game is played right.
Is it underneath the Washington Monument?
1) I suppose this is to be expected given priming, anchoring, and self-anchoring, but it's a worrying bias nonetheless. It certainly does help explain why inferential distances feel so hard to bridge.
2) I like the picture--it's a good visual metaphor for the point you're making.
3) I don't think the table of scientific words is a good example--it's not that common people don't know anything about these words; they're simply used to using them in a nontechnical context, whereas the scientists are used to using them in a technical context. The scientists' uses of the words are not more valid or more informed than the laymen's, they're just contextually different. Many of the examples in the table (especially "values," "bias," and "error") aren't the result of a knowledge gap, but of a simple definitional dispute.
Many of the examples in the table (especially "values," "bias," and "error") aren't the result of a knowledge gap, but of a simple definitional dispute.
It's not clear to me which is the case, actually. It would be difficult to dispute the assertion that the average layman is almost always primed to read "positive" as "good" rather than "present" or "upward," but that doesn't indicate whether or not he's actually aware of those alternate uses. Maybe he's never been exposed to scientific literature - that wouldn't exactly be shocking.
I wish I could access the original paper the table was published in. Alas!
Using technical definitions and ignoring folk meanings is something I've been noticing more in myself and others lately. Until I started making the distinction when listening to others I never realized how painfully bad it must sound when I do it.
"The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it's like to lack that knowledge. When they're tapping, they can't imagine what it's like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has "cursed" us. And it becomes difficult for us to share our knowledge with others, because we can't readily re-create our listeners' state of mind."
Imagine that the tapper is familiar with a song and expects the listener to be familiar with it, but he doesn’t know the song’s title, or any word that he would expect the listener to recognise as a name for it. If he taps out a rhythm from the song, he will have the song ‘playing’ in his head as he does so. This might lead him to overestimate how accurate the rhythm is, or how easy the song is to distinguish from that rhythm alone. When he hears the rhythm that he is tapping out, internally he also hears the song, making it very difficult to judge what it would be like for someone just to hear the rhythm.
Although it would be difficult to create an experiment in which knowledge of a popular song’s title and “musical knowledge” of the song are rigorously separated, we see that the tappers have two kinds of knowledge that the listener lacks: the song title, and musical knowledge of the song – i.e. the ability to replay the song with decent fidelity in one’s mind.
Chip and Dan Heath suggest that knowledge of the song’s title is the critical knowledge that causes the tappers to be overconfident about how likely the listeners are to recognise the song. This is a complex statement about the way in which the human brain works, but we don’t yet understand the brain in great detail; as far as I am aware there is no particular neuroscientific evidence that should cause us to believe that knowledge of the song title, rather than musical knowledge of the song, is the crux of the problem. Therefore I am not inclined to view their explanation as authoritative, and (assuming that there isn't just some problem with the experiment) on the strength of introspection I lean towards the idea that musical knowledge has the greater effect in causing this overconfidence.
The failure to try to correct for this bias could be regarded as an example of irrationally expecting short inferential distances, but I view that as being more related to the understanding of words and concepts - complex areas of the map, that are charted on one person's map in a particular way but not on another's. I think a better fit would be the mind projection fallacy: the error of projecting the properties of one's own mind into the external world. In this case the property being projected is the person's internal soundtrack accompanying the rhythm that he is physically tapping.
Actually, expecting short inferential distances is a special case of the mind projection fallacy – the property being projected is the detailed structure of words and reductions in someone’s map of the territory; the person in question fails to realise that the reductions, the causal relationships and the subtle definitional changes that accompany words in his mental model do not accompany those words when they are processed by the brains of other people.
So, “the curse of knowledge” is essentially the problem of the mind projection fallacy as it applies to a given person’s level of knowledge (of various kinds) with respect to the knowledge possessed by others, and this knowledge need not be encoded verbally.
And it is "dumbing things down", since the general understanding of the non-ignorant population – that is, the well-educated, but not necessarily scientifically educated – involves similar usages. Where do you think the scientific usages came from?
At the very least "aerosol", "uncertainty", and "positive" have the "public" connotations even in well-educated humanities circles. There are some terms of science that simply are used differently, with "positive" probably the most obvious.
I see... a blob, where I know the "baby" is. I'm sure it'd resolve into a baby if I let my perceptions keep chewing on it, but I happen to be in an abnormal state of tiredness right now.
I get this to crippling extremes. Not only with science stuff, but with every single culture or subculture I've interacted with, and a bunch of specialized sites like TVtropes or LessWrong. Not to mention things I've just observed on my own and never learned or come up with a name for. I'm basically incapable of communicating anything with anyone at all by this point.
What is being discussed is the differences between the 4 stages of learning.
1st: A person doesn't know what they don't know. 2nd: A person knows that they don't know. 3rd: A person needs conscious thinking to know what they know. 4th: A person knows what they know automatically.
The expert who is unable to express themselves to the layman is in the 4th stage. A teacher's best position is the 3rd stage, for they are conscious of what they know and hence can assist others in knowing. A person in the 2nd stage is ready to learn - they accept they don't know it all - but they wouldn't be an effective teacher. A person in the 1st stage isn't ready to learn and couldn't teach either.
This is interestingly reversed from what I expected. I would have put it at
My preferred term for this phenomenon is "expert blindspot". See also "illusion of transparency": http://lesswrong.com/lw/ke/illusion_of_transparency_why_no_one_understands/
I notice that general vocabulary has a lot more value judgement in it. Most of this is likely due to descriptive, value-neutral words being generated every time there is a new thing to describe, whereas new good or bad properties are rare (or, at least, rarer: see hacker jargon for an example of specialized vocabulary with lots of value judgements, yet even more value-neutral words). I suppose another effect is that, when not thinking deeply about a problem (and thus not creating jargon), it is convenient to rely on general good/bad feelings about things.
[crossposted at Measure of Doubt]
What is the Curse of Knowledge, and how does it apply to science education, persuasion, and communication? No, it's not a reference to the Garden of Eden story. I'm referring to a particular psychological phenomenon that can make our messages backfire if we're not careful.
Communication isn't a solo activity; it involves both you and the audience. Writing a diary entry is a great way to sort out thoughts, but if you want to be informative and persuasive to others, you need to figure out what they'll understand and be persuaded by. A common habit is to use ourselves as a mental model - assuming that everyone else will laugh at what we find funny, agree with what we find convincing, and interpret words the way we use them. The model works to an extent - especially with people similar to us - but other times our efforts fall flat. You can present the best argument you've ever heard, only to have it fall on dumb - sorry, deaf - ears.
That's not necessarily your fault - maybe they're just dense! Maybe the argument is brilliant! But if we want to communicate successfully, pointing fingers and assigning blame is irrelevant. What matters is getting our point across, and we can't do it if we're stuck in our head, unable to see things from our audience's perspective. We need to figure out what words will work.
Unfortunately, that's where the Curse of Knowledge comes in. In 1990, Elizabeth Newton did a fascinating psychology experiment: She paired participants into teams of two: one tapper and one listener. The tappers picked one of 25 well-known songs and would tap out the rhythm on a table. Their partner - the designated listener - was asked to guess the song. How do you think they did?
Not well. Of the 120 songs tapped out on the table, the listeners only guessed 3 of them correctly - a measly 2.5 percent. But get this: before the listeners gave their answer, the tappers were asked to predict how likely their partner was to get it right. Their guess? Tappers thought their partners would get the song 50 percent of the time. You know, only overconfident by a factor of 20. What made the tappers so far off?
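As a quick back-of-the-envelope check on those numbers, here is a minimal sketch using only the figures reported above (120 songs, 3 correct guesses, a 50 percent average prediction):

```python
# Figures from Newton's 1990 tapping experiment, as reported above
songs_tapped = 120
correct_guesses = 3

actual_rate = correct_guesses / songs_tapped  # 2.5 percent
predicted_rate = 0.50                         # tappers' average prediction

# How far off the tappers were: roughly a factor of 20
overconfidence_factor = predicted_rate / actual_rate
print(f"actual: {actual_rate:.1%}, overconfident by ~{overconfidence_factor:.0f}x")
```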
They lost perspective because they were "cursed" with the additional knowledge of the song title. Chip and Dan Heath use the story in their book Made to Stick to introduce the term:
So it goes with communicating complex information. Because we have all the background knowledge and understanding, we're overconfident that what we're saying is clear to everyone else. WE know what we mean! Why don't they get it? It's tough to remember that other people won't make the same inferences, have the same word-meaning connections, or share our associations.
It's particularly important in science education. The more time a person spends in a field, the more the field's obscure language becomes second nature. Without special attention, audiences might not understand the words being used - or worse yet, they might get the wrong impression.
Over at the American Geophysical Union blog, Callan Bentley gives a fantastic list of Terms that have different meanings for scientists and the public.
What great examples! Even though the scientific terms are technically correct in context, they're obviously the wrong ones to use when talking to the public about climate change. An inattentive scientist could know all the material but leave the audience walking away with the wrong message.
We need to spend the effort to phrase ideas in a way the audience will understand. Is that the same as "dumbing down" a message? After all, complicated ideas require complicated words and nuanced answers, right? Well, no. A real expert on a topic can give a simple distillation of material, identifying the core of the issue. Bentley did an outstanding job rephrasing technical, scientific terms in a way that conveys the intended message to the public.
That's not dumbing things down, it's showing a mastery of the concepts. And he was able to do it by overcoming the "curse of knowledge," seeing the issue from other people's perspective. Kudos to him - it's an essential part of science education, and something I really admire.
P.S. - By the way, I chose that image for a reason: I bet once you see the baby in the tree you won’t be able to ‘unsee’ it. (image via Richard Wiseman)