Original post: http://bearlamp.com.au/hedging/
- Men are evil
- All men are evil
- Some men are evil
- Most men are evil
- Many men are evil
- I think men are evil
- I think all men are evil
- I think some men are evil
- I think most men are evil
"I think" weakens your stated belief in the idea. The hedges I usually encourage are of the some/most type: they weaken the strength of the claim but do not reduce your confidence in it.
- I 100% believe this happens 80% or more of the time (most men are evil)
- I 75% believe that this happens 100% of the time (I think all men are evil)
- I 75% believe this happens 20% of the time (I think that some men are evil)
- I 100% believe that this happens 20% of the time (some men are evil)
- I (Reader Interprets)% believe that this happens (Reader Interprets)% of the time (I think men are evil)
They are all hedges. I only like some of them. When you hedge, I recommend using the type that doesn't detract from the projected belief but instead detracts from the expected effect on the world. Which is to say: be confident of weak effects, rather than unconfident of strong effects.
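The confidence/proportion decomposition above can be sketched in code. This is a minimal illustration of the idea, not anything from the post itself; the mapping from hedge words to numbers is my own rough assumption:

```python
# Model a hedged claim as two independent numbers:
#   confidence: how strongly the speaker believes the claim
#   proportion: how much of the world the claim covers
# (Illustrative values only; real hedge words are fuzzier than this.)

def hedged_claim(confidence, proportion):
    """Represent a hedged claim as a (confidence, proportion) pair."""
    return {"confidence": confidence, "proportion": proportion}

# "Most men are evil": full confidence, weaker scope
most = hedged_claim(confidence=1.00, proportion=0.80)

# "I think all men are evil": weaker confidence, full scope
i_think_all = hedged_claim(confidence=0.75, proportion=1.00)

# The post's advice, in these terms: prefer hedges like `most`, which
# keep confidence high and let the stated proportion carry the
# uncertainty, over hedges like `i_think_all`, which discount the
# speaker's own belief.
print(most)
print(i_think_all)
```

The point of splitting the two numbers apart is that "some men are evil" and "I think all men are evil" can feel interchangeable in conversation while being very different claims.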
This relates to filters in that some people will automatically add the "This person thinks..." filter to any incoming information. It's not good or bad whether you apply this filter; it's just a fact about your lens on the world. If you don't have this filter in place, you might find yourself personally attached to your words while others remain detached from words that seem like they should carry more personal attachment. This filter might explain the difference.
This also relates to Personhood and the way we trust incoming information from some sources. When we are very young we go through a period of trusting anything said to us, and at some point experience failures when we do trust. We also discover lying, and any parent will be able to tell you of the genuine childish glee when their children realise they can lie. These experiences shape us into adults. We have to trust some sources; we don't have enough time to be sceptical of all knowledge ever, and sometimes we outsource to proven, credentialed professionals, e.g. doctors. Sometimes those professionals get it wrong.
This also relates to in-groups and out-groups, because listeners who believe they are in your in-group are likely to interpret ambiguous hedges in a neutral-to-positive direction, while listeners who believe they are in the out-group of the message are likely to interpret them in a neutral-to-negative direction. Which is to say: people who already agree that all men are evil are likely to "know what you mean" when you say "all men are evil," while people who don't agree will read a whole pile of "how wrong could you be" into the same statement.
Communication is hard. I know no one is going to argue with my example because I already covered that in an earlier post.
Meta: this took 1.5hrs to write.
There's a lot that I really like about communicating via writing. Communicating in person is sometimes frustrating for me, and communicating via writing addresses a lot of those frustrations:
1) I often want to make a point that depends on the other person knowing X. In person, if I always paused and did the following, it'd add a lot of friction to conversations: "Wait, do you know X? If yes, good, I'll continue. If no, let me think about how to explain it briefly. Or do you want me to explain it in more depth? Or do you want to try to proceed without knowing X and see how it goes?". But if I don't do so, then it risks miscommunication (because the other person may not have the dependency X).
In writing, I could just link to an article. If the other person doesn't have the dependency, they have options. They could try to proceed without knowing X and see how it goes. If it doesn't work out, they could come back and read the link. Or they could read the link right away. And in reading the link, they have their choice of how deeply they want to read. Ie. they could just skim if they want to.
Alternatively, if you don't have something to link to, you could add a footnote. I think that a UI like Medium's side comments is far preferable to putting the footnotes at the bottom of the page. I hope to see this adopted across the internet some time in the next 5 years or so.
2) I think that in general, being precise about what you're saying is actually quite difficult/time consuming*. For example, I don't really mean what I just said. I'm actually not sure how often it's difficult/time consuming to be precise with what you're saying. And I'm not sure how often it's useful to be precise about what you're saying (or really, more precise...whatever that means...). I guess what I really mean is that it happens often enough that it's a problem. Or maybe just that for me, it happens enough that I find it to be a problem.
Anyway, I find that putting quotes around what I say is a nice way to mitigate this problem.
Ex. It's "in my nature" to be strategic.
The quotes show that the word inside them isn't precisely what I mean, but that it's close enough to what I mean that it should communicate the gist of it. I sense that this communication often happens through empathetic inference.
*I also find that I feel internal and external pressure to be consistent with what I say, even if I know I'm oversimplifying. This is a problem and has negatively affected me. I recently realized what a big problem it is, and will try very hard to address it (or really, I plan on trying very hard but I'm not sure blah blah blah blah blah...).
Note 1: I find internal conversation/thinking as well as interpersonal conversation to be "chaotic". (What follows is rant-y and not precisely what I believe. But being precise would take too long, and I sense that the rant-y tone helps to communicate without detracting from the conversation by being uncivil.) It seems that a lot of other people (much less so on LW) have more "organized" thinking patterns. I can't help but think that that's BS. Well, maybe they do, but I sense that they shouldn't. Reality is complicated. People seem to oversimplify things a lot, and to think in terms of black-white. When you do that, I could see how one's thoughts could be "organized". But when you really try to deal with the complexities of reality... I don't understand how you could simultaneously just go through life with organized thoughts.
Note 2: I sense that this post somewhat successfully communicates my internal thought process and how chaotic it could be. I'm curious how this compares to other people. I should note that I was diagnosed with a mild-moderate case of ADHD when I was younger. But that was largely based off of iffy reporting from my teachers. They didn't realize how much conscious thought motivated my actions. Ie. I often chose to do things that seem impulsive because I judged it to be worth it. But given that my mind is always racing so fast, and that I have a good amount of trouble deciding to pay attention to anything other than the most interesting thing to me, I'd guess that I do have ADHD to some extent. I'm hesitant to make that claim without ever having been inside someone else's mind before though (how incredibly incredibly cool would that be!!!) - appearances could be deceiving.
3) It's easier to model and traverse the structure of a conversation/argument when it's in writing. You could break things into nested sections (which isn't always a perfect way to model the structure, but is often satisfactory). In person, I find that it's often quite difficult for two people (let alone multiple people) to stay in sync with the structure of the conversation. The outcome of this is that people rarely veer away from extremely superficial conversations. Granted, I haven't had the chance to talk to many smart people in real life, and so I don't have much data on how deep a conversation between two smart people could get. My guess is that it could get a lot deeper than what I'm used to, but that it'd be pretty hard to make real progress on a difficult topic without outlining and diagramming things out. (Note: I don't mean "deep as in emotional", I mean "deep as in nodes in a graph")
There are also a lot of other things to say about communicating in writing vs. in person, including:
- The value of the subtle things like nonverbal communication and pauses.
- The value of a conversation being continuous. When it isn't, you have to download the task over and over again.
- How much time you have to think things through before responding.
- I sense that people are way more careful in writing, especially when there's a record of it (rather than, say, PM).
This is a discussion post, so feel free to comment on these things too (or anything else in the ballpark).
Quick summary: "Hidden rationalists" are what I call authors who espouse rationalist principles, and probably think of themselves as rational people, but don't always write on "traditional" Less Wrong-ish topics and probably haven't heard of Less Wrong.
I've noticed that a lot of my rationalist friends seem to read the same ten blogs, and while it's great to have a core set of favorite authors, it's also nice to stretch out a bit and see how everyday rationalists are doing cool stuff in their own fields of expertise. I've found many people who push my rationalist buttons in fields of interest to me (journalism, fitness, etc.), and I'm sure other LWers have their own people in their own fields.
So I'm setting up this post as a place to link to/summarize the work of your favorite hidden rationalists. Be liberal with your suggestions!
Another way to phrase this: Who are the people/sources who give you the same feelings you get when you read your favorite LW posts, but who many of us probably haven't heard of?
Here's my list, to kick things off:
- Peter Sandman, professional risk communication consultant. Often writes alongside Jody Lanard. Specialties: Effective communication, dealing with irrational people in a kind and efficient way, carefully weighing risks and benefits. My favorite recent post of his deals with empathy for Ebola victims and is a major, Slate Star Codex-esque tour de force. His "guestbook comments" page is better than his collection of web articles, but both are quite good.
- Doug McGuff, MD, fitness guru and author of the exercise book with the highest citation-to-page ratio of any I've seen. His big thing is "superslow training", where you perform short and extremely intense workouts (video here). I've been moving in this direction for about 18 months now, and I've been able to cut my workout time approximately in half without losing strength. May not work for everyone, but reminds me of Leverage Research's sleep experiments; if it happens to work for you, you gain a heck of a lot of time. I also love the way he emphasizes the utility of strength training for all ages/genders -- very different from what you'd see on a lot of weightlifting sites.
- Philosophers' Mail. A website maintained by applied philosophers at the School of Life, which reminds me of a hippy-dippy European version of CFAR (in a good way). Not much science, but a lot of clever musings on the ways that philosophy can help us live, and some excellent summaries of philosophers who are hard to read in the original. (Their piece on Vermeer is a personal favorite, as is this essay on Simon Cowell.) This recently stopped posting new material, but the School of Life now collects similar work through The Book of Life.
Summary: I don't think 'politics is the mind-killer' works well rhetorically. I suggest 'politics is hard mode' instead.
My usual first objection is that it seems odd to single politics out as a “mind-killer” when there’s plenty of evidence that tribalism happens everywhere. Recently, there has been a whole kerfuffle within the field of psychology about replication of studies. Of course, some key studies have failed to replicate, leading to accusations of “bullying” and “witch-hunts” and what have you. Some of the people involved have since walked their language back, but it was still a rather concerning demonstration of mind-killing in action. People took “sides,” people became upset at people based on their “sides” rather than their actual opinions or behavior, and so on.
Unless this article refers specifically to electoral politics and Democrats and Republicans and things (not clear from the wording), “politics” is such a frightfully broad category of human experience that writing it off entirely as a mind-killer that cannot be discussed or else all rationality flies out the window effectively prohibits a large number of important issues from being discussed, by the very people who can, in theory, be counted upon to discuss them better than most. Is it “politics” for me to talk about my experience as a woman in gatherings that are predominantly composed of men? Many would say it is. But I’m sure that these groups of men stand to gain from hearing about my experiences, since some of them are concerned that so few women attend their events.
In this article, Eliezer notes, “Politics is an important domain to which we should individually apply our rationality — but it’s a terrible domain in which to learn rationality, or discuss rationality, unless all the discussants are already rational.” But that means that we all have to individually, privately apply rationality to politics without consulting anyone who can help us do this well. After all, there is no such thing as a discussant who is “rational”; there is a reason the website is called “Less Wrong” rather than “Not At All Wrong” or “Always 100% Right.” Assuming that we are all trying to be more rational, there is nobody better to discuss politics with than each other.
The rest of my objection to this meme has little to do with this article, which I think raises lots of great points, and more to do with the response that I’ve seen to it — an eye-rolling, condescending dismissal of politics itself and of anyone who cares about it. Of course, I’m totally fine if a given person isn’t interested in politics and doesn’t want to discuss it, but then they should say, “I’m not interested in this and would rather not discuss it,” or “I don’t think I can be rational in this discussion so I’d rather avoid it,” rather than sneeringly reminding me “You know, politics is the mind-killer,” as though I am an errant child. I’m well-aware of the dangers of politics to good thinking. I am also aware of the benefits of good thinking to politics. So I’ve decided to accept the risk and to try to apply good thinking there. [...]
I’m sure there are also people who disagree with the article itself, but I don’t think I know those people personally. And to add a political dimension (heh), it’s relevant that most non-LW people (like me) initially encounter “politics is the mind-killer” being thrown out in comment threads, not through reading the original article. My opinion of the concept improved a lot once I read the article.
In the same thread, Andrew Mahone added, “Using it in that sneering way, Miri, seems just like a faux-rationalist version of ‘Oh, I don’t bother with politics.’ It’s just another way of looking down on any concerns larger than oneself as somehow dirty, only now, you know, rationalist dirty.” To which Miri replied: “Yeah, and what’s weird is that that really doesn’t seem to be Eliezer’s intent, judging by the eponymous article.”
Eliezer replied briefly, to clarify that he wasn't generally thinking of problems that can be directly addressed in local groups (but happen to be politically charged) as "politics":
Hanson’s “Tug the Rope Sideways” principle, combined with the fact that large communities are hard to personally influence, explains a lot in practice about what I find suspicious about someone who claims that conventional national politics are the top priority to discuss. Obviously local community matters are exempt from that critique! I think if I’d substituted ‘national politics as seen on TV’ in a lot of the cases where I said ‘politics’ it would have more precisely conveyed what I was trying to say.
But that doesn't resolve the issue. Even if local politics is more instrumentally tractable, the worry about polarization and factionalization can still apply, and may still make it a poor epistemic training ground.
A subtler problem with banning “political” discussions on a blog or at a meet-up is that it’s hard to do fairly, because our snap judgments about what counts as “political” may themselves be affected by partisan divides. In many cases the status quo is thought of as apolitical, even though objections to the status quo are ‘political.’ (Shades of Pretending to be Wise.)
Because politics gets personal fast, it’s hard to talk about it successfully. But if you’re trying to build a community, build friendships, or build a movement, you can’t outlaw everything ‘personal.’
And selectively outlawing personal stuff gets even messier. Last year, daenerys shared anonymized stories from women, including several that discussed past experiences where the writer had been attacked or made to feel unsafe. If those discussions are made off-limits because they relate to gender and are therefore ‘political,’ some folks may take away the message that they aren’t allowed to talk about, e.g., some harmful or alienating norm they see at meet-ups. I haven’t seen enough discussions of this failure mode to feel super confident people know how to avoid it.
Since this is one of the LessWrong memes that’s most likely to pop up in cross-subcultural dialogues (along with the even more ripe-for-misinterpretation “policy debates should not appear one-sided“…), as a first (very small) step, my action proposal is to obsolete the ‘mind-killer’ framing. A better phrase for getting the same work done would be ‘politics is hard mode’:
1. ‘Politics is hard mode’ emphasizes that ‘mind-killing’ (= epistemic difficulty) is quantitative, not qualitative. Some things might instead fall under Middlingly Hard Mode, or under Nightmare Mode…
2. ‘Hard’ invites the question ‘hard for whom?’, more so than ‘mind-killer’ does. We’re used to the fact that some people and some contexts change what’s ‘hard’, so it’s a little less likely we’ll universally generalize.
3. ‘Mindkill’ connotes contamination, sickness, failure, weakness. In contrast, ‘Hard Mode’ doesn’t imply that a thing is low-status or unworthy. As a result, it’s less likely to create the impression (or reality) that LessWrongers or Effective Altruists dismiss out-of-hand the idea of hypothetical-political-intervention-that-isn’t-a-terrible-idea. Maybe some people do want to argue for the thesis that politics is always useless or icky, but if so it should be done in those terms, explicitly — not snuck in as a connotation.
4. ‘Hard Mode’ can’t readily be perceived as a personal attack. If you accuse someone of being ‘mindkilled’, with no context provided, that smacks of insult — you appear to be calling them stupid, irrational, deluded, or the like. If you tell someone they’re playing on ‘Hard Mode,’ that’s very nearly a compliment, which makes your advice that they change behaviors a lot likelier to go over well.
5. ‘Hard Mode’ doesn’t risk bringing to mind (e.g., gendered) stereotypes about communities of political activists being dumb, irrational, or overemotional.
6. ‘Hard Mode’ encourages a growth mindset. Maybe some topics are too hard to ever be discussed. Even so, ranking topics by difficulty encourages an approach where you try to do better, rather than merely withdrawing. It may be wise to eschew politics, but we should not fear it. (Fear is the mind-killer.)
7. Edit: One of the larger engines of conflict is that people are so much worse at noticing their own faults and biases than noticing others'. People will be relatively quick to dismiss others as 'mindkilled,' while frequently flinching away from or just-not-thinking 'maybe I'm a bit mindkilled about this.' Framing the problem as a challenge rather than as a failing might make it easier to be reflective and even-handed.
This is not an attempt to get more people to talk about politics. I think this is a better framing whether or not you trust others (or yourself) to have productive political conversations.
When I playtested this post, Ciphergoth raised the worry that 'hard mode' isn't scary-sounding enough. As dire warnings go, it's light-hearted—exciting, even. To which I say: good. Counter-intuitive fears should usually be argued into people (e.g., via Eliezer's politics sequence), not connotation-ninja'd or chanted at them. The cognitive content is more clearly conveyed by 'hard mode,' and if some group (people who love politics) stands to gain the most from internalizing this message, the message shouldn't cast that very group (people who love politics) in an obviously unflattering light. LW seems fairly memetically stable, so the main issue is what would make this meme infect friends and acquaintances who haven't read the sequences. (Or Dune.)
If you just want a scary personal mantra to remind yourself of the risks, I propose 'politics is SPIDERS'. Though 'politics is the mind-killer' is fine there too.
If you and your co-conversationalists haven’t yet built up a lot of trust and rapport, or if tempers are already flaring, conveying the message ‘I’m too rational to discuss politics’ or ‘You’re too irrational to discuss politics’ can make things worse. In that context, ‘politics is the mind-killer’ is the mind-killer. At least, it’s a needlessly mind-killing way of warning people about epistemic hazards.
‘Hard Mode’ lets you speak as the Humble Aspirant rather than the Aloof Superior. Strive to convey: ‘I’m worried I’m too low-level to participate in this discussion; could you have it somewhere else?’ Or: ‘Could we talk about something closer to Easy Mode, so we can level up together?’ More generally: If you’re worried that what you talk about will impact group epistemology, you should be even more worried about how you talk about it.
Note: I have no intention of criticizing the person involved. I admire that (s)he made the "right" decision in the end (in my opinion), and I mention it only as an example we could all learn from. I did request permission to use his/her anecdote here. I'll also use the pronoun "he" when really I mean he/she.
Once Pat says “no,” it’s harder to get to “yes” than if you had never asked.
Crocker's rules has this very clear clause, and we should keep it well in mind:
Note that Crocker's Rules does not mean you can insult people; it means that other people don't have to worry about whether they are insulting you. Crocker's Rules are a discipline, not a privilege. Furthermore, taking advantage of Crocker's Rules does not imply reciprocity. How could it? Crocker's Rules are something you do for yourself, to maximize information received - not something you grit your teeth over and do as a favor.
Recently, a rationalist heard over social media that an acquaintance - a friend-of-a-friend - had found their lost pet. They said it was better than winning the lottery. The rationalist responded that unless they'd spent thousands of dollars searching, or posted a large reward, they were saying something they didn't really mean. Then, feeling like a party-pooper and a downer, he deleted his comment.
I believe this was absolutely the correct thing to do. As Miss Manners says (http://www.washingtonpost.com/wp-dyn/content/article/2007/02/06/AR2007020601518.html), people will associate unpleasant emotions with the source and the cause. They're not going to say, oh, that's correct; I was mistaken about the value of my pet; thank you for correcting my flawed value system.
Instead they'll say, those rationalists are so heartless, attaching dollar signs to everything. They think they know better. They're rude and stuck up. I don't want to have anything to do with them. And then they'll walk away with a bad impression of us. (Yes, all of us, for we are a minority now, and each of us reflects upon all of us, the same way a Muslim bomber would reflect poorly on public opinion of all Muslims, while a Christian bomber would not.) In the future they'll be less likely to listen to any one of us.
The only appropriate thing to say in this case is "I'm so happy for you." But that doesn't mean we can't promote ourselves ever. Here are some alternatives.
- At another time, ask for "help" with your own decisions. Go through the process of calculating out all the value and expected values. This is completely non-confrontational, and your friends/acquaintances will not need to defend anything. Whenever they give a suggestion, praise it as being a good idea, and then make a show of weighing the expected value out loud.
- Say "wow, I don't know many people who'd spend that much! Your pet is lucky to have someone like you!" But it must be done without any sarcasm. They might feel a bit uncomfortable taking that much praise. They might go home and mull it over.
- Invite them to "try something you saw online" with you. This thing could be mindcharting, the estimation game, learning quantum physics, meditation, goal refactoring, anything. Emphasize the curiosity/exploring aspect. See if it leads into a conversation about rationality. Don't mention the incident with the pet - it could come off as criticism.
- At a later date, introduce them to Methods of Rationality. Say it's because "it's funny," or "you have a lot of interesting ideas," or even just "I think you'll like it." That's generally a good starting point. :)
- Let it be. First do no harm.
I was told long ago (in regards to LGBT rights) that minds are not changed by logic or reasoning or facts. They are changed over a long period of time by emotions. For us, that means showing what we believe without pressing it on others, while at the same time being the kind of person you want to be like. If we are successful and happy, if we carry ourselves with kindness and dignity, we'll win over hearts.
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.
We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.
We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing.
We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
This is a fascinating article with many, many interesting points. I'm excerpting some of them below, but mostly just to get you to read it: if I were to quote everything interesting, I'd have to pretty much copy the entire (long!) article.
Rumors and fiction
[...] A related but perhaps more surprising source of misinformation is literary fiction. People extract knowledge even from sources that are explicitly identified as fictional. This process is often adaptive, because fiction frequently contains valid information about the world. For example, non-Americans’ knowledge of U.S. traditions, sports, climate, and geography partly stems from movies and novels, and many Americans know from movies that Britain and Australia have left-hand traffic. By definition, however, fiction writers are not obliged to stick to the facts, which creates an avenue for the spread of misinformation, even by stories that are explicitly identified as fictional. A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011). Few people would be so alert and mindful when reading fiction for enjoyment. These links between fiction and incorrect knowledge are particularly concerning when popular fiction pretends to accurately portray science but fails to do so, as was the case with Michael Crichton’s novel State of Fear. The novel misrepresented the science of global climate change but was nevertheless introduced as “scientific” evidence into a U.S. Senate committee (Allen, 2005; Leggett, 2005).
Writers of fiction are expected to depart from reality, but in other instances, misinformation is manufactured intentionally. There is considerable peer-reviewed evidence pointing to the fact that misinformation can be intentionally or carelessly disseminated, often for political ends or in the service of vested interests, but also through routine processes employed by the media. [...]
Assessing the Truth of a Statement: Recipients’ Strategies
Misleading information rarely comes with a warning label. People usually cannot recognize that a piece of information is incorrect until they receive a correction or retraction. For better or worse, the acceptance of information as true is favored by tacit norms of everyday conversational conduct: Information relayed in conversation comes with a “guarantee of relevance” (Sperber & Wilson, 1986), and listeners proceed on the assumption that speakers try to be truthful, relevant, and clear, unless evidence to the contrary calls this default into question (Grice, 1975; Schwarz, 1994, 1996). Some research has even suggested that to comprehend a statement, people must at least temporarily accept it as true (Gilbert, 1991). On this view, belief is an inevitable consequence of—or, indeed, precursor to—comprehension.
Although suspension of belief is possible (Hasson, Simmons, & Todorov, 2005; Schul, Mayo, & Burnstein, 2008), it seems to require a high degree of attention, considerable implausibility of the message, or high levels of distrust at the time the message is received. So, in most situations, the deck is stacked in favor of accepting information rather than rejecting it, provided there are no salient markers that call the speaker’s intention of cooperative conversation into question. Going beyond this default of acceptance requires additional motivation and cognitive resources: If the topic is not very important to you, or you have other things on your mind, misinformation will likely slip in. [...]
Is the information compatible with what I believe?
As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974). People assess the logical compatibility of the information with other facts and beliefs. Once a new piece of knowledge-consistent information has been accepted, it is highly resistant to change, and the more so the larger the compatible knowledge base is. From a judgment perspective, this resistance derives from the large amount of supporting evidence (Wyer, 1974); from a cognitive-consistency perspective (Festinger, 1957), it derives from the numerous downstream inconsistencies that would arise from rejecting the prior information as false. Accordingly, compatibility with other knowledge increases the likelihood that misleading information will be accepted, and decreases the likelihood that it will be successfully corrected.
When people encounter a piece of information, they can check it against other knowledge to assess its compatibility. This process is effortful, and it requires motivation and cognitive resources. A less demanding indicator of compatibility is provided by one’s meta-cognitive experience and affective response to new information. Many theories of cognitive consistency converge on the assumption that information that is inconsistent with one’s beliefs elicits negative feelings (Festinger, 1957). Messages that are inconsistent with one’s beliefs are also processed less fluently than messages that are consistent with one’s beliefs (Winkielman, Huber, Kavanagh, & Schwarz, 2012). In general, fluently processed information feels more familiar and is more likely to be accepted as true; conversely, disfluency elicits the impression that something doesn’t quite “feel right” and prompts closer scrutiny of the message (Schwarz et al., 2007; Song & Schwarz, 2008). This phenomenon is observed even when the fluent processing of a message merely results from superficial characteristics of its presentation. For example, the same statement is more likely to be judged as true when it is printed in high rather than low color contrast (Reber & Schwarz, 1999), presented in a rhyming rather than nonrhyming form (McGlone & Tofighbakhsh, 2000), or delivered in a familiar rather than unfamiliar accent (Lev-Ari & Keysar, 2010). Moreover, misleading questions are less likely to be recognized as such when printed in an easy-to-read font (Song & Schwarz, 2008).
As a result, analytic as well as intuitive processing favors the acceptance of messages that are compatible with a recipient’s preexisting beliefs: The message contains no elements that contradict current knowledge, is easy to process, and “feels right.”
Is the story coherent?
Whether a given piece of information will be accepted as true also depends on how well it fits a broader story that lends sense and coherence to its individual elements. People are particularly likely to use an assessment strategy based on this principle when the meaning of one piece of information cannot be assessed in isolation because it depends on other, related pieces; use of this strategy has been observed in basic research on mental models (for a review, see Johnson-Laird, 2012), as well as extensive analyses of juries’ decision making (Pennington & Hastie, 1992, 1993).
A story is compelling to the extent that it organizes information without internal contradictions in a way that is compatible with common assumptions about human motivation and behavior. Good stories are easily remembered, and gaps are filled with story-consistent intrusions. Once a coherent story has been formed, it is highly resistant to change: Within the story, each element is supported by the fit of other elements, and any alteration of an element may be made implausible by the downstream inconsistencies it would cause. Coherent stories are easier to process than incoherent stories are (Johnson-Laird, 2012), and people draw on their processing experience when they judge a story’s coherence (Topolinski, 2012), again giving an advantage to material that is easy to process. [...]
Is the information from a credible source?
[...] People’s evaluation of a source’s credibility can be based on declarative information, as in the above examples, as well as experiential information. The mere repetition of an unknown name can cause it to seem familiar, making its bearer “famous overnight” (Jacoby, Kelley, Brown, & Jasechko, 1989)—and hence more credible. Even when a message is rejected at the time of initial exposure, that initial exposure may lend it some familiarity-based credibility if the recipient hears it again.
Do others believe this information?
Repeated exposure to a statement is known to increase its acceptance as true (e.g., Begg, Anas, & Farinacci, 1992; Hasher, Goldstein, & Toppino, 1977). In a classic study of rumor transmission, Allport and Lepkin (1945) observed that the strongest predictor of belief in wartime rumors was simple repetition. Repetition effects may create a perceived social consensus even when no consensus exists. Festinger (1954) referred to social consensus as a “secondary reality test”: If many people believe a piece of information, there’s probably something to it. Because people are more frequently exposed to widely shared beliefs than to highly idiosyncratic ones, the familiarity of a belief is often a valid indicator of social consensus. But, unfortunately, information can seem familiar for the wrong reason, leading to erroneous perceptions of high consensus. For example, Weaver, Garcia, Schwarz, and Miller (2007) exposed participants to multiple iterations of the same statement, provided by the same communicator. When later asked to estimate how widely the conveyed belief is shared, participants estimated consensus to be greater the more often they had read the identical statement from the same, single source. In a very real sense, a single repetitive voice can sound like a chorus. [...]
The extent of pluralistic ignorance (or of the false-consensus effect) can be quite striking: In Australia, people with particularly negative attitudes toward Aboriginal Australians or asylum seekers have been found to overestimate public support for their attitudes by 67% and 80%, respectively (Pedersen, Griffiths, & Watt, 2008). Specifically, although only 1.8% of people in a sample of Australians were found to hold strongly negative attitudes toward Aboriginals, those few individuals thought that 69% of all Australians (and 79% of their friends) shared their fringe beliefs. This represents an extreme case of the false-consensus effect. [...]
The Continued Influence Effect: Retractions Fail to Eliminate the Influence of Misinformation
We first consider the cognitive parameters of credible retractions in neutral scenarios, in which people have no inherent reason or motivation to believe one version of events over another. Research on this topic was stimulated by a paradigm pioneered by Wilkes and Leatherbarrow (1988) and H. M. Johnson and Seifert (1994). In it, people are presented with a fictitious report about an event unfolding over time. The report contains a target piece of information: For some readers, this target information is subsequently retracted, whereas for readers in a control condition, no correction occurs. Participants’ understanding of the event is then assessed with a questionnaire, and the number of clear and uncontroverted references to the target (mis-)information in their responses is tallied.
A stimulus narrative commonly used in this paradigm involves a warehouse fire that is initially thought to have been caused by gas cylinders and oil paints that were negligently stored in a closet (e.g., Ecker, Lewandowsky, Swire, & Chang, 2011; H. M. Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). Some participants are then presented with a retraction, such as “the closet was actually empty.” A comprehension test follows, and participants’ number of references to the gas and paint in response to indirect inference questions about the event (e.g., “What caused the black smoke?”) is counted. In addition, participants are asked to recall some basic facts about the event and to indicate whether they noticed any retraction.
Research using this paradigm has consistently found that retractions rarely, if ever, have the intended effect of eliminating reliance on misinformation, even when people believe, understand, and later remember the retraction (e.g., Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011; Ecker, Lewandowsky, & Tang, 2010; Fein, McCloskey, & Tomlinson, 1997; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998, 1999; Schul & Mazursky, 1990; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). In fact, a retraction will at most halve the number of references to misinformation, even when people acknowledge and demonstrably remember the retraction (Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011); in some studies, a retraction did not reduce reliance on misinformation at all (e.g., H. M. Johnson & Seifert, 1994).
When misinformation is presented through media sources, the remedy is the presentation of a correction, often in a temporally disjointed format (e.g., if an error appears in a newspaper, the correction will be printed in a subsequent edition). In laboratory studies, misinformation is often retracted immediately and within the same narrative (H. M. Johnson & Seifert, 1994). Despite this temporal and contextual proximity to the misinformation, retractions are ineffective. More recent studies (Seifert, 2002) have examined whether clarifying the correction (minimizing misunderstanding) might reduce the continued influence effect. In these studies, the correction was thus strengthened to include the phrase “paint and gas were never on the premises.” Results showed that this enhanced negation of the presence of flammable materials backfired, making people even more likely to rely on the misinformation in their responses. Other additions to the correction were found to mitigate to a degree, but not eliminate, the continued influence effect: For example, when participants were given a rationale for how the misinformation originated, such as, “a truckers’ strike prevented the expected delivery of the items,” they were somewhat less likely to make references to it. Even so, the influence of the misinformation could still be detected. The wealth of studies on this phenomenon have documented its pervasive effects, showing that it is extremely difficult to return the beliefs of people who have been exposed to misinformation to a baseline similar to those of people who were never exposed to it.
Multiple explanations have been proposed for the continued influence effect. We summarize their key assumptions next. [...]
Concise recommendations for practitioners
[...] We summarize the main points from the literature in Figure 1 and in the following list of recommendations:
- Consider what gaps in people’s mental event models are created by debunking and fill them using an alternative explanation.
- Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.
- To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.
- Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.
- Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.
- Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.
- If you must present evidence that is threatening to the audience’s worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.
- You can also circumvent the role of the audience’s worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.
Humans obviously can't communicate.
Miscommunication is something we talk about on Less Wrong a lot, what with the illusion of transparency, the double illusion of transparency, and the 37 Ways Words Can Be Wrong sequence and better disagreement and levels of communication and mental metadata and this (not to mention Robin Hanson's Disagreement is Near/Far Bias). I thought about writing a new top-level post about it with some of the links I've found, but I figure they say all I could have said.
Here you go:
- The Curse of Knowledge at Measure of Doubt (mostly a rephrasing of what we know from the above posts).
- Seven Causes of Disagreement at Spencer Greenberg's blog.
- 4 Reasons Humans Will Never Understand Each Other at Cracked.
- Wiio's Laws. This is the big one. This is the one that says it all. Do read them. Apparently, this doesn't happen to everyone, but after reading that page I started seeing Wiio's laws everywhere.
[crossposted at Measure of Doubt]
What is the Curse of Knowledge, and how does it apply to science education, persuasion, and communication? No, it's not a reference to the Garden of Eden story. I'm referring to a particular psychological phenomenon that can make our messages backfire if we're not careful.
Communication isn't a solo activity; it involves both you and the audience. Writing a diary entry is a great way to sort out thoughts, but if you want to be informative and persuasive to others, you need to figure out what they'll understand and be persuaded by. A common habit is to use ourselves as a mental model - assuming that everyone else will laugh at what we find funny, agree with what we find convincing, and interpret words the way we use them. The model works to an extent - especially with people similar to us - but other times our efforts fall flat. You can present the best argument you've ever heard, only to have it fall on dumb - sorry, deaf - ears.
That's not necessarily your fault - maybe they're just dense! Maybe the argument is brilliant! But if we want to communicate successfully, pointing fingers and assigning blame is irrelevant. What matters is getting our point across, and we can't do it if we're stuck in our head, unable to see things from our audience's perspective. We need to figure out what words will work.
Unfortunately, that's where the Curse of Knowledge comes in. In 1990, Elizabeth Newton did a fascinating psychology experiment: She paired participants into teams of two: one tapper and one listener. The tappers picked one of 25 well-known songs and would tap out the rhythm on a table. Their partner - the designated listener - was asked to guess the song. How do you think they did?
Not well. Of the 120 songs tapped out on the table, the listeners only guessed 3 of them correctly - a measly 2.5 percent. But get this: before the listeners gave their answer, the tappers were asked to predict how likely their partner was to get it right. Their guess? Tappers thought their partners would get the song 50 percent of the time. You know, only overconfident by a factor of 20. What made the tappers so far off?
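The arithmetic behind that factor-of-20 claim is easy to check, using the numbers reported above:

```python
# Newton's tapper/listener results, as reported in the post.
songs_tapped = 120
correct_guesses = 3

actual_rate = correct_guesses / songs_tapped   # 0.025, i.e. 2.5 percent
predicted_rate = 0.50                          # tappers' average prediction

overconfidence = predicted_rate / actual_rate  # 20.0

print(f"Actual success rate: {actual_rate:.1%}")       # 2.5%
print(f"Predicted success rate: {predicted_rate:.0%}") # 50%
print(f"Overconfidence factor: {overconfidence:.0f}x") # 20x
```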
They lost perspective because they were "cursed" with the additional knowledge of the song title. Chip and Dan Heath use the story in their book Made to Stick to introduce the term:
"The problem is that tappers have been given knowledge (the song title) that makes it impossible for them to imagine what it's like to lack that knowledge. When they're tapping, they can't imagine what it's like for the listeners to hear isolated taps rather than a song. This is the Curse of Knowledge. Once we know something, we find it hard to imagine what it was like not to know it. Our knowledge has "cursed" us. And it becomes difficult for us to share our knowledge with others, because we can't readily re-create our listeners' state of mind."
So it goes with communicating complex information. Because we have all the background knowledge and understanding, we're overconfident that what we're saying is clear to everyone else. WE know what we mean! Why don't they get it? It's tough to remember that other people won't make the same inferences, have the same word-meaning connections, or share our associations.
It's particularly important in science education. The more time a person spends in a field, the more the field's obscure language becomes second nature. Without special attention, audiences might not understand the words being used - or worse yet, they might get the wrong impression.
Over at the American Geophysical Union blog, Callan Bentley gives a fantastic list of Terms that have different meanings for scientists and the public.
What great examples! Even though the scientific terms are technically correct in context, they're obviously the wrong ones to use when talking to the public about climate change. An inattentive scientist could know all the material but leave the audience walking away with the wrong message.
We need to spend the effort to phrase ideas in a way the audience will understand. Is that the same as "dumbing down" a message? After all, complicated ideas require complicated words and nuanced answers, right? Well, no. A real expert on a topic can give a simple distillation of material, identifying the core of the issue. Bentley did an outstanding job rephrasing technical, scientific terms in a way that conveys the intended message to the public.
That's not dumbing things down, it's showing a mastery of the concepts. And he was able to do it by overcoming the "curse of knowledge," seeing the issue from other people's perspective. Kudos to him - it's an essential part of science education, and something I really admire.
P.S. - By the way, I chose that image for a reason: I bet once you see the baby in the tree you won’t be able to ‘unsee’ it. (image via Richard Wiseman)
If our morality is complex and directly tied to what's human—if we're seeking to avoid building paperclip maximizers—how do you judge and quantify the danger in training yourself to become more rational if it should drift from being more human?
My friend is a skeptical theist. She, for instance, scoffs mightily at Camping's little dilemma/psychosis, but then argues from a position of comfort that the Rapture is a silly thing to predict because it's clearly stated that no one will know the day. And then she gives me a confused look because the psychological dissonance is clear.
On one hand, my friend is in a prime position to take forward steps to self-examination and holding rational belief systems. On the other hand, she's an opera singer whose passion and profession require her to be able to empathize with and explore highly irrational human experiences. Since rationality is the art of winning, nobody can deny that the option that lets you have your cake and eat it too is best, but how do you navigate such a narrows?
In another example, a recent comment thread suggested the dangers of embracing human tendencies: catharsis might lead to promoting further emotional intensity. At the same time, catharsis is a well-appreciated human communication strategy with roots in the Greek stage. If rational action pulls you away from humanity, away from our complex morality, then how do we judge it worth doing?
The most immediate resolution to this conundrum appears to me to be that human morality has no consistency constraint: we can want to be powerful and able to win while also wanting to retain our human tendencies, which directly impinge on that goal. Is there a theory of metamorality which allows you to infer how such tradeoffs should be managed? Or is human morality, as a program, flawed with inconsistencies that lead to inescapable cognitive dissonance and dehumanization? If you interpret morality as a self-supporting strange loop, is it possible to have unresolvable, drifting interpretations based on how you focus your attention?
Dual to the problem of resolving a way forward is the problem of the interpreter. If there is a goal to at least marginally increase the rationality of humanity, but in order to discover the means to do so you have to become less capable of empathizing with and communicating with humanity, who acts as an interpreter between the two divergent mindsets?
Allow me to propose a thought experiment. Suppose you, and you alone, were to make first contact with an alien species. Since your survival and the survival of the entire human race may depend on the extraterrestrials recognizing you as a member of a rational species, how would you convey your knowledge of mathematics, logic, and the scientific method to them using only your personal knowledge and whatever tools you might reasonably have on your person on an average day?
When I thought of this question, the two methods that immediately came to mind were the Pythagorean Theorem and prime number sequences. For instance, I could draw a rough right triangle and label one side with three dots, the other with four, and the hypotenuse with five. However, I realized that these are fairly primitive maths. After all, the ancient Greeks knew of them, and yet had no concept of the scientific method. Would these likely be sufficient, and if not, what would be? Could you make a rough sketch of the first few atoms on the periodic table or other such universal phenomena so that it would be generally recognizable? Could you convey a proof of rationality in a manner intelligible even to aliens who cannot hear human vocalizations, or who see in a completely different part of the EM spectrum? Is it even in principle possible to express rationality without a common linguistic grounding?
In other words, what is the most rational thought you could convey without the benefit of common language, culture, psychology, or biology, and how would you do it?
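The prime-sequence idea above is easy to make concrete. As a minimal sketch (the function name and trial-division approach are mine, not from the post; the sequence itself would be the message, tapped or drawn as groups of marks):

```python
def primes(n):
    """Return the first n prime numbers by trial division."""
    found = []
    candidate = 2
    while len(found) < n:
        # candidate is prime if no smaller prime divides it
        if all(candidate % p for p in found):
            found.append(candidate)
        candidate += 1
    return found

# The 3-4-5 right triangle from the post checks out as well:
assert 3**2 + 4**2 == 5**2

print(primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```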
Bonus point: Could you convey Bayes' theorem to said ET?
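For readers who want the bonus target spelled out: Bayes' theorem is P(H|E) = P(E|H)P(H) / P(E). A sketch of the computation, with the evidence term expanded by total probability (the numbers are illustrative, not from the post):

```python
def bayes(prior, likelihood, false_positive_rate):
    """P(H|E) = P(E|H) * P(H) / P(E), with P(E) expanded over H and not-H."""
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# Hypothetical numbers: a 1% prior, a 90%-sensitive test with a
# 5% false-positive rate.
posterior = bayes(prior=0.01, likelihood=0.90, false_positive_rate=0.05)
print(round(posterior, 3))  # 0.154
```

Conveying this without shared language is the hard part; the formula itself, like the primes, could at least be illustrated with counts of marks in nested groups.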