Banish the Clippy-creating Bias Demon!
I posted in Practical Ethics, arguing that if we mentally anthropomorphised certain risks, then we'd be more likely to give them the attention they deserved. Slaying the Cardiovascular Vampire, defeating the Parasitic Diseases Death Cult, and banishing the Demon of Infection... these stories give a mental picture of the actual good we're doing when combating these issues, and the bad we're doing by ignoring them. Imagine a politician proclaiming:
- I will not let the Cardiovascular Vampire continue his unrelenting war upon the American people, slaying over a third of our citizens - the eldest, in their weakened state, among his most numerous victims. There is no negotiating with such a terrorist - I will direct the full resources of the state to crushing his campaign of destruction.
An amusing thing to contemplate - except, of course, if there were a real Cardiovascular Vampire, politicians and pundits would be falling over themselves with those kinds of announcements.
The field of AI is already over-saturated with anthropomorphisation, so we definitely shouldn't be imagining Clippy as some human-like entity that we can heroically combat, with all the rules of narrative applying. Still, it can't hurt to dream up a hideous Bias Demon in its misshapen (though superficially plausible) lair, cackling in glee as someone foolishly attempts to implement an AI design without the proper safety precautions, smiling serenely as prominent futurists dismiss the risk... and dissolving, hit by the holy water of increased rationality and proper AI research. Those images might help us make the right emotional connection to what we're achieving here.
The "Friendship is Witchcraft" expectation test
My mother won't watch animated movies. It doesn't matter what the content is. Whether it's SpongeBob or Grave of the Fireflies, she believes that animation is used only for shows for children, and that adults shouldn't watch shows for children. She's incapable of changing this belief, because even if I somehow convince her to sit and watch an animated film, she sees what she expects, not what's in front of her.
I think this is the same thing that creation scientists and climate-change deniers do. They literally cannot perceive what is in front of them, because they are already convinced they know what it is.
Here's an interesting test, which I discovered by accident: There's a hilarious series of fan-made parodies of My Little Pony: Friendship is Magic on YouTube called Friendship is Witchcraft. They took show videos and redubbed them to have different stories in which various ponies are robots, fascists, or cult members planning to awaken Cthulhu. I've shown these videos to four people without explanation, just saying "You've got to see this!" and bringing up "Cute From the Hip" on YouTube.
The same thing always happens. They watch with stony, I-must-be-polite-to-Phil faces, without laughing. Eventually I realize that they think they're watching an episode of My Little Pony. I explain that it's a parody, and they say, "Oh!" I'd think that lines like "I know we've taught you to laugh in the face of death," "If you think one of your friends is a robot, kids, report them to the authorities so that they can be destroyed!", "I'm covered in pig's blood!", or, "Are you busy Friday? We need a willing victim for our ritual sacrifice" would prompt some questions. They don't. They are so determined to see a TV show for little girls that that's what they see, regardless of what's in front of them.
[Link] Economists' views differ by gender
Edit: ParagonProtege has provided a link to the original study. Thank you! (^_^)
What does an economist think of that?
A lot depends on whether the economist is a man or a woman. A new study shows a large gender gap on economic policy among the nation's professional economists, a divide similar to -- and in some cases bigger than -- the gender divide found in the general public.
Differences extend to core professional beliefs -- such as the effect of minimum wage laws -- not just matters of political opinion.
Female economists tend to favor a bigger role for government while male economists have greater faith in business and the marketplace. Is the U.S. economy excessively regulated? Sixty-five percent of female economists said "no" -- 24 percentage points higher than male economists.
Can this be reasonably explained by self-interest? Female and male economists' views are probably coloured by gender solidarity. Government jobs may be more appealing to women than to men because of women's documented greater risk aversion; regardless of the reason, government jobs are more important for women than for men. Also, in the US, where the study was done, middle-class white women benefit quite a bit from affirmative action in government hiring.
"As a group, we are pro-market," says Ann Mari May, co-author of the study and a University of Nebraska economist. "But women are more likely to accept government regulation and involvement in economic activity than our male colleagues."
Opinion differences between men and women are well-documented in the general public. President Obama leads Mitt Romney by 10 percentage points among women. Romney leads Obama by 3 percentage points among men, according to the latest Gallup Poll.
"Politics is the mind-killer" probably does play a role in explaining the difference.
The survey of 400 economists is one of the first to examine whether gender differences matter within a profession. The answer for economists: Yes.
How economists think:
- Health insurance. Female economists thought employers should be required to provide health insurance for full-time workers: 40% in favor to 37% against, with the rest offering no opinion. By contrast, men were strongly against the idea: 21% in favor and 52% against.
- Education. Females narrowly opposed taxpayer-funded vouchers that parents could use for tuition at a public or private school of their choice. Male economists loved the idea: 61% to 14%.
- Labor standards. Females believed, 48% to 33%, that trade policy should be linked to labor standards in foreign countries. Males disagreed: 60% to 23%.
The first two points are somewhat congruent with stereotypes. Anyone who has run into the frequent iSteve commenter "Whiskey" will probably note that the third point indicates women may not hate hate HATE lower- and middle-class beta males in this case.
"It's very puzzling," says free-market economist Veronique de Rugy of the Mercatus Center at George Mason University in Fairfax, Va. "Not a day goes by that I don't ask myself why there are so few women economists on the free-market side."
A native of France, de Rugy supported government intervention early in her life but changed her mind after studying economics. "We want many of the same things as liberals -- less poverty, more health care -- but have radically different ideas on how to achieve it."
This seems plausible, since politics is about applause lights after all; the tribes are what matter, not the particular shape of their attire. But might value differences still be behind the gender difference? Maybe some failed utopias I recall reading about aren't really failed.
Liberal economist Dean Baker, co-founder of the Center for Economic Policy and Research, says male economists have been on the inside of the profession, confirming each other's anti-regulation views. Women, as outsiders, "are more likely to think independently or at least see people outside of the economics profession as forming their peer group," he says.
The gender balance in economics is changing. One-third of economics doctorates now go to women. The chair of the White House Council of Economic Advisers has been a woman three of 27 times since 1946 -- one advising Obama and two advising Bill Clinton. The Federal Reserve Board of Governors has three women, bringing the total to eight of 90 members since 1914.
"More diversity is needed at the table when public policy is discussed," May says.
Somehow I think this does not include ideological diversity.
Economists do agree on some things. Female economists agree with men that Europe has too much regulation and that Walmart is good for society. Male economists agree with their female colleagues that military spending is too high.
The genders are most divorced from each other on the question of equality for women. Male economists overwhelmingly think the wage gap between men and women is largely the result of individuals' skills, experience and voluntary choices. Female economists overwhelmingly disagree by a margin of 4-to-1.
The biggest disagreement: 76% of women say faculty opportunities in economics favor men. Male economists point the opposite way: 80% say women are favored or the process is neutral.
No mystery here. (^_^)
UFAI cannot be the Great Filter
[Summary: The fact that we do not observe (and have not been wiped out by) a UFAI suggests the main component of the 'Great Filter' cannot be civilizations like ours being wiped out by UFAI. Gentle introduction (assuming no knowledge) and links to much better discussion below.]
Introduction
The Great Filter is the idea that although there is lots of matter, we observe no "expanding, lasting life", like space-faring intelligences. So there is some filter through which almost all matter gets stuck before becoming expanding, lasting life. One question for those interested in the future of humankind is whether we have already 'passed' the bulk of the filter, or whether it still lies ahead. For example, is it very unlikely that matter will be able to form self-replicating units, but once it clears that hurdle, becoming intelligent and going across the stars is highly likely? Or is getting to a humankind level of development not that unlikely, but very few of those civilizations progress to expanding across the stars? If the latter, that motivates a concern for working out what the forthcoming filter(s) are, and trying to get past them.
One concern is that advancing technology gives civilizations the possibility of wiping themselves out, and that this is the main component of the Great Filter - one we are going to be approaching soon. There are several candidate technologies for such an existential threat (nanotechnology/'grey goo', nuclear holocaust, runaway climate change), but one that looms large is artificial intelligence (AI). Trying to understand and mitigate the existential threat from AI is the main role of the Singularity Institute, and I guess Luke, Eliezer (and lots of folks on LW) consider AI the main existential threat.
The concern with AI is something like this:
- AI will soon greatly surpass us in intelligence in all domains.
- If this happens, AI will rapidly supplant humans as the dominant force on planet earth.
- Almost all AIs, even ones we create with the intent to be benevolent, will probably be unfriendly to human flourishing.
Or, as summarized by Luke:
... AI leads to intelligence explosion, and, because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good. (More on this option in the next post.)
So, the aim of the game needs to be trying to work out how to control the future intelligence explosion so the vastly smarter-than-human AIs are 'friendly' (FAI) and make the world better for us, rather than unfriendly AIs (UFAI) which end up optimizing the world for something that sucks.
'Where is everybody?'
So, to the topic. I read this post by Robin Hanson, which had a really good parenthetical remark (emphasis mine):
Yes, it is possible that the extremely difficult step was life’s origin, or some early step, so that, other than here on Earth, all life in the universe is stuck before this early extremely hard step. But even if you find this the most likely outcome, surely given our ignorance you must also place a non-trivial probability on other possibilities. You must see a great filter as lying between initial planets and expanding civilizations, and wonder how far along that filter we are. In particular, you must estimate a substantial chance of “disaster”, i.e., something destroying our ability or inclination to make a visible use of the vast resources we see. (And this disaster can’t be an unfriendly super-AI, because that should be visible.)
This made me realize that a UFAI should also be counted as 'expanding, lasting life', and so should be deemed unlikely by the Great Filter.
Another way of looking at it: if the Great Filter still lies ahead of us, and a major component of this forthcoming filter is the threat from UFAI, we should expect to see the UFAIs of other civilizations spreading across the universe (or not see anything at all, because they would wipe us out to optimize for their unfriendly ends). That we do not observe it disconfirms this conjunction.
[Edit/Elaboration: It also gives a stronger argument - as the UFAI is the 'expanding life' we do not see, the beliefs, 'the Great Filter lies ahead' and 'UFAI is a major existential risk' lie opposed to one another: the higher your credence in the filter being ahead, the lower your credence should be in UFAI being a major existential risk (as the many civilizations like ours that go on to get caught in the filter do not produce expanding UFAIs, so expanding UFAI cannot be the main x-risk); conversely, if you are confident that UFAI is the main existential risk, then you should think the bulk of the filter is behind us (as we don't see any UFAIs, there cannot be many civilizations like ours in the first place, as we are quite likely to realize an expanding UFAI).]
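To make the trade-off concrete, here is a minimal toy model in Python; every number in it (civilization count, filter strength, UFAI share) is invented purely for illustration:

```python
# Toy model of the argument above (all numbers are made up).
# A UFAI is itself "expanding, lasting life", so a civilization filtered out
# *by UFAI* still leaves something visible in the sky.

def expected_visible_expansions(n_civs, late_filter_rate, ufai_share):
    """Expected visible expansions: surviving civilizations plus UFAIs."""
    survivors = n_civs * (1 - late_filter_rate)       # civs that pass the filter
    ufais = n_civs * late_filter_rate * ufai_share    # filtered civs that leave a UFAI
    return survivors + ufais

# A big late filter whose main component is UFAI predicts a crowded sky:
print(expected_visible_expansions(n_civs=1000, late_filter_rate=0.99,
                                  ufai_share=0.5))    # -> 505.0

# An empty sky is only consistent with a big late filter if ufai_share is tiny:
print(expected_visible_expansions(n_civs=1000, late_filter_rate=0.99,
                                  ufai_share=0.001))  # -> ~11.0
```

Observing zero visible expansions therefore pushes down the product of 'many civilizations like ours' and 'UFAI as the main filter component', which is exactly the trade-off described above.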
A much more in-depth article and comments (both highly recommended) was made by Katja Grace a couple of years ago. I can't seem to find a similar discussion on here (feel free to downvote and link in the comments if I missed it), which surprises me: I'm not bright enough to figure out the anthropics, and obviously one may hold AI to be a big deal for other-than-Great-Filter reasons (maybe a given planet has a 1 in a googol chance of getting to intelligent life, but intelligent life 'merely' has a 1 in 10 chance of successfully navigating an intelligence explosion), but this would seem to be substantial evidence driving down the proportion of x-risk we should attribute to AI.
What do you guys think?
[Link] Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show
Here is a paper in PLOS Biology reconsidering the lessons of some classic psychology experiments often invoked here (via).
Contesting the “Nature” Of Conformity: What Milgram and Zimbardo's Studies Really Show
To me the crux of the paper comes from this statement in the abstract:
This suggests that individuals' willingness to follow authorities is conditional on identification with the authority in question and an associated belief that the authority is right.
Plus this detail from the Milgram experiment:
Ultimately, they tend to go along with the Experimenter if he justifies their actions in terms of the scientific benefits of the study (as he does with the prod “The experiment requires that you continue”) [39]. But if he gives them a direct order (“You have no other choice, you must go on”) participants typically refuse. Once again, received wisdom proves questionable. The Milgram studies seem to be less about people blindly conforming to orders than about getting people to believe in the importance of what they are doing [40].
[Link] Misinformation and Its Correction: Continued Influence and Successful Debiasing
http://psi.sagepub.com/content/13/3/106.full
Abstract:
The widespread prevalence and persistence of misinformation in contemporary societies, such as the false belief that there is a link between childhood vaccinations and autism, is a matter of public concern. For example, the myths surrounding vaccinations, which prompted some parents to withhold immunization from their children, have led to a marked increase in vaccine-preventable disease, as well as unnecessary public expenditure on research and public-information campaigns aimed at rectifying the situation.
We first examine the mechanisms by which such misinformation is disseminated in society, both inadvertently and purposely. Misinformation can originate from rumors but also from works of fiction, governments and politicians, and vested interests. Moreover, changes in the media landscape, including the arrival of the Internet, have fundamentally influenced the ways in which information is communicated and misinformation is spread.
We next move to misinformation at the level of the individual, and review the cognitive factors that often render misinformation resistant to correction. We consider how people assess the truth of statements and what makes people believe certain things but not others. We look at people’s memory for misinformation and answer the questions of why retractions of misinformation are so ineffective in memory updating and why efforts to retract misinformation can even backfire and, ironically, increase misbelief. Though ideology and personal worldviews can be major obstacles for debiasing, there nonetheless are a number of effective techniques for reducing the impact of misinformation, and we pay special attention to these factors that aid in debiasing.
We conclude by providing specific recommendations for the debunking of misinformation. These recommendations pertain to the ways in which corrections should be designed, structured, and applied in order to maximize their impact. Grounded in cognitive psychological theory, these recommendations may help practitioners—including journalists, health professionals, educators, and science communicators—design effective misinformation retractions, educational tools, and public-information campaigns.
This is a fascinating article with many, many interesting points. I'm excerpting some of them below, but mostly just to get you to read it: if I were to quote everything interesting, I'd have to pretty much copy the entire (long!) article.
Rumors and fiction
[...] A related but perhaps more surprising source of misinformation is literary fiction. People extract knowledge even from sources that are explicitly identified as fictional. This process is often adaptive, because fiction frequently contains valid information about the world. For example, non-Americans’ knowledge of U.S. traditions, sports, climate, and geography partly stems from movies and novels, and many Americans know from movies that Britain and Australia have left-hand traffic. By definition, however, fiction writers are not obliged to stick to the facts, which creates an avenue for the spread of misinformation, even by stories that are explicitly identified as fictional. A study by Marsh, Meade, and Roediger (2003) showed that people relied on misinformation acquired from clearly fictitious stories to respond to later quiz questions, even when these pieces of misinformation contradicted common knowledge. In most cases, source attribution was intact, so people were aware that their answers to the quiz questions were based on information from the stories, but reading the stories also increased people’s illusory belief of prior knowledge. In other words, encountering misinformation in a fictional context led people to assume they had known it all along and to integrate this misinformation with their prior knowledge (Marsh & Fazio, 2006; Marsh et al., 2003).
The effects of fictional misinformation have been shown to be stable and difficult to eliminate. Marsh and Fazio (2006) reported that prior warnings were ineffective in reducing the acquisition of misinformation from fiction, and that acquisition was only reduced (not eliminated) under conditions of active on-line monitoring—when participants were instructed to actively monitor the contents of what they were reading and to press a key every time they encountered a piece of misinformation (see also Eslick, Fazio, & Marsh, 2011). Few people would be so alert and mindful when reading fiction for enjoyment. These links between fiction and incorrect knowledge are particularly concerning when popular fiction pretends to accurately portray science but fails to do so, as was the case with Michael Crichton’s novel State of Fear. The novel misrepresented the science of global climate change but was nevertheless introduced as “scientific” evidence into a U.S. Senate committee (Allen, 2005; Leggett, 2005).
Writers of fiction are expected to depart from reality, but in other instances, misinformation is manufactured intentionally. There is considerable peer-reviewed evidence pointing to the fact that misinformation can be intentionally or carelessly disseminated, often for political ends or in the service of vested interests, but also through routine processes employed by the media. [...]
Assessing the Truth of a Statement: Recipients’ Strategies
Misleading information rarely comes with a warning label. People usually cannot recognize that a piece of information is incorrect until they receive a correction or retraction. For better or worse, the acceptance of information as true is favored by tacit norms of everyday conversational conduct: Information relayed in conversation comes with a “guarantee of relevance” (Sperber & Wilson, 1986), and listeners proceed on the assumption that speakers try to be truthful, relevant, and clear, unless evidence to the contrary calls this default into question (Grice, 1975; Schwarz, 1994, 1996). Some research has even suggested that to comprehend a statement, people must at least temporarily accept it as true (Gilbert, 1991). On this view, belief is an inevitable consequence of—or, indeed, precursor to—comprehension.
Although suspension of belief is possible (Hasson, Simmons, & Todorov, 2005; Schul, Mayo, & Burnstein, 2008), it seems to require a high degree of attention, considerable implausibility of the message, or high levels of distrust at the time the message is received. So, in most situations, the deck is stacked in favor of accepting information rather than rejecting it, provided there are no salient markers that call the speaker’s intention of cooperative conversation into question. Going beyond this default of acceptance requires additional motivation and cognitive resources: If the topic is not very important to you, or you have other things on your mind, misinformation will likely slip in. [...]
Is the information compatible with what I believe?
As numerous studies in the literature on social judgment and persuasion have shown, information is more likely to be accepted by people when it is consistent with other things they assume to be true (for reviews, see McGuire, 1972; Wyer, 1974). People assess the logical compatibility of the information with other facts and beliefs. Once a new piece of knowledge-consistent information has been accepted, it is highly resistant to change, and the more so the larger the compatible knowledge base is. From a judgment perspective, this resistance derives from the large amount of supporting evidence (Wyer, 1974); from a cognitive-consistency perspective (Festinger, 1957), it derives from the numerous downstream inconsistencies that would arise from rejecting the prior information as false. Accordingly, compatibility with other knowledge increases the likelihood that misleading information will be accepted, and decreases the likelihood that it will be successfully corrected.
When people encounter a piece of information, they can check it against other knowledge to assess its compatibility. This process is effortful, and it requires motivation and cognitive resources. A less demanding indicator of compatibility is provided by one’s meta-cognitive experience and affective response to new information. Many theories of cognitive consistency converge on the assumption that information that is inconsistent with one’s beliefs elicits negative feelings (Festinger, 1957). Messages that are inconsistent with one’s beliefs are also processed less fluently than messages that are consistent with one’s beliefs (Winkielman, Huber, Kavanagh, & Schwarz, 2012). In general, fluently processed information feels more familiar and is more likely to be accepted as true; conversely, disfluency elicits the impression that something doesn’t quite “feel right” and prompts closer scrutiny of the message (Schwarz et al., 2007; Song & Schwarz, 2008). This phenomenon is observed even when the fluent processing of a message merely results from superficial characteristics of its presentation. For example, the same statement is more likely to be judged as true when it is printed in high rather than low color contrast (Reber & Schwarz, 1999), presented in a rhyming rather than nonrhyming form (McGlone & Tofighbakhsh, 2000), or delivered in a familiar rather than unfamiliar accent (Lev-Ari & Keysar, 2010). Moreover, misleading questions are less likely to be recognized as such when printed in an easy-to-read font (Song & Schwarz, 2008).
As a result, analytic as well as intuitive processing favors the acceptance of messages that are compatible with a recipient’s preexisting beliefs: The message contains no elements that contradict current knowledge, is easy to process, and “feels right.”
Is the story coherent?
Whether a given piece of information will be accepted as true also depends on how well it fits a broader story that lends sense and coherence to its individual elements. People are particularly likely to use an assessment strategy based on this principle when the meaning of one piece of information cannot be assessed in isolation because it depends on other, related pieces; use of this strategy has been observed in basic research on mental models (for a review, see Johnson-Laird, 2012), as well as extensive analyses of juries’ decision making (Pennington & Hastie, 1992, 1993).
A story is compelling to the extent that it organizes information without internal contradictions in a way that is compatible with common assumptions about human motivation and behavior. Good stories are easily remembered, and gaps are filled with story-consistent intrusions. Once a coherent story has been formed, it is highly resistant to change: Within the story, each element is supported by the fit of other elements, and any alteration of an element may be made implausible by the downstream inconsistencies it would cause. Coherent stories are easier to process than incoherent stories are (Johnson-Laird, 2012), and people draw on their processing experience when they judge a story’s coherence (Topolinski, 2012), again giving an advantage to material that is easy to process. [...]
Is the information from a credible source?
[...] People’s evaluation of a source’s credibility can be based on declarative information, as in the above examples, as well as experiential information. The mere repetition of an unknown name can cause it to seem familiar, making its bearer “famous overnight” (Jacoby, Kelley, Brown, & Jaseschko, 1989)—and hence more credible. Even when a message is rejected at the time of initial exposure, that initial exposure may lend it some familiarity-based credibility if the recipient hears it again.
Do others believe this information?
Repeated exposure to a statement is known to increase its acceptance as true (e.g., Begg, Anas, & Farinacci, 1992; Hasher, Goldstein, & Toppino, 1977). In a classic study of rumor transmission, Allport and Lepkin (1945) observed that the strongest predictor of belief in wartime rumors was simple repetition. Repetition effects may create a perceived social consensus even when no consensus exists. Festinger (1954) referred to social consensus as a “secondary reality test”: If many people believe a piece of information, there’s probably something to it. Because people are more frequently exposed to widely shared beliefs than to highly idiosyncratic ones, the familiarity of a belief is often a valid indicator of social consensus. But, unfortunately, information can seem familiar for the wrong reason, leading to erroneous perceptions of high consensus. For example, Weaver, Garcia, Schwarz, and Miller (2007) exposed participants to multiple iterations of the same statement, provided by the same communicator. When later asked to estimate how widely the conveyed belief is shared, participants estimated consensus to be greater the more often they had read the identical statement from the same, single source. In a very real sense, a single repetitive voice can sound like a chorus. [...]
The extent of pluralistic ignorance (or of the false-consensus effect) can be quite striking: In Australia, people with particularly negative attitudes toward Aboriginal Australians or asylum seekers have been found to overestimate public support for their attitudes by 67% and 80%, respectively (Pedersen, Griffiths, & Watt, 2008). Specifically, although only 1.8% of people in a sample of Australians were found to hold strongly negative attitudes toward Aboriginals, those few individuals thought that 69% of all Australians (and 79% of their friends) shared their fringe beliefs. This represents an extreme case of the false-consensus effect. [...]
The Continued Influence Effect: Retractions Fail to Eliminate the Influence of Misinformation
We first consider the cognitive parameters of credible retractions in neutral scenarios, in which people have no inherent reason or motivation to believe one version of events over another. Research on this topic was stimulated by a paradigm pioneered by Wilkes and Leatherbarrow (1988) and H. M. Johnson and Seifert (1994). In it, people are presented with a fictitious report about an event unfolding over time. The report contains a target piece of information: For some readers, this target information is subsequently retracted, whereas for readers in a control condition, no correction occurs. Participants’ understanding of the event is then assessed with a questionnaire, and the number of clear and uncontroverted references to the target (mis-)information in their responses is tallied.
A stimulus narrative commonly used in this paradigm involves a warehouse fire that is initially thought to have been caused by gas cylinders and oil paints that were negligently stored in a closet (e.g., Ecker, Lewandowsky, Swire, & Chang, 2011; H. M. Johnson & Seifert, 1994; Wilkes & Leatherbarrow, 1988). Some participants are then presented with a retraction, such as “the closet was actually empty.” A comprehension test follows, and participants’ number of references to the gas and paint in response to indirect inference questions about the event (e.g., “What caused the black smoke?”) is counted. In addition, participants are asked to recall some basic facts about the event and to indicate whether they noticed any retraction.
Research using this paradigm has consistently found that retractions rarely, if ever, have the intended effect of eliminating reliance on misinformation, even when people believe, understand, and later remember the retraction (e.g., Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011; Ecker, Lewandowsky, & Tang, 2010; Fein, McCloskey, & Tomlinson, 1997; Gilbert, Krull, & Malone, 1990; Gilbert, Tafarodi, & Malone, 1993; H. M. Johnson & Seifert, 1994, 1998, 1999; Schul & Mazursky, 1990; van Oostendorp, 1996; van Oostendorp & Bonebakker, 1999; Wilkes & Leatherbarrow, 1988; Wilkes & Reynolds, 1999). In fact, a retraction will at most halve the number of references to misinformation, even when people acknowledge and demonstrably remember the retraction (Ecker, Lewandowsky, & Apai, 2011; Ecker, Lewandowsky, Swire, & Chang, 2011); in some studies, a retraction did not reduce reliance on misinformation at all (e.g., H. M. Johnson & Seifert, 1994).
When misinformation is presented through media sources, the remedy is the presentation of a correction, often in a temporally disjointed format (e.g., if an error appears in a newspaper, the correction will be printed in a subsequent edition). In laboratory studies, misinformation is often retracted immediately and within the same narrative (H. M. Johnson & Seifert, 1994). Despite this temporal and contextual proximity to the misinformation, retractions are ineffective. More recent studies (Seifert, 2002) have examined whether clarifying the correction (minimizing misunderstanding) might reduce the continued influence effect. In these studies, the correction was thus strengthened to include the phrase “paint and gas were never on the premises.” Results showed that this enhanced negation of the presence of flammable materials backfired, making people even more likely to rely on the misinformation in their responses. Other additions to the correction were found to mitigate to a degree, but not eliminate, the continued influence effect: For example, when participants were given a rationale for how the misinformation originated, such as, “a truckers’ strike prevented the expected delivery of the items,” they were somewhat less likely to make references to it. Even so, the influence of the misinformation could still be detected. The wealth of studies on this phenomenon has documented its pervasive effects, showing that it is extremely difficult to return the beliefs of people who have been exposed to misinformation to a baseline similar to those of people who were never exposed to it.
Multiple explanations have been proposed for the continued influence effect. We summarize their key assumptions next. [...]
Concise recommendations for practitioners
[...] We summarize the main points from the literature in Figure 1 and in the following list of recommendations:
- Consider what gaps in people’s mental event models are created by debunking and fill them using an alternative explanation.
- Use repeated retractions to reduce the influence of misinformation, but note that the risk of a backfire effect increases when the original misinformation is repeated in retractions and thereby rendered more familiar.
- To avoid making people more familiar with misinformation (and thus risking a familiarity backfire effect), emphasize the facts you wish to communicate rather than the myth.
- Provide an explicit warning before mentioning a myth, to ensure that people are cognitively on guard and less likely to be influenced by the misinformation.
- Ensure that your material is simple and brief. Use clear language and graphs where appropriate. If the myth is simpler and more compelling than your debunking, it will be cognitively more attractive, and you will risk an overkill backfire effect.
- Consider whether your content may be threatening to the worldview and values of your audience. If so, you risk a worldview backfire effect, which is strongest among those with firmly held beliefs. The most receptive people will be those who are not strongly fixed in their views.
- If you must present evidence that is threatening to the audience’s worldview, you may be able to reduce the worldview backfire effect by presenting your content in a worldview-affirming manner (e.g., by focusing on opportunities and potential benefits rather than risks and threats) and/or by encouraging self-affirmation.
- You can also circumvent the role of the audience’s worldview by focusing on behavioral techniques, such as the design of choice architectures, rather than overt debiasing.
[Link] Temporal Binding
I just read an article on Steven Novella's NeurologicaBlog on temporal binding, a cognitive bias I hadn't seen before:
Temporal binding is a phenomenon that reinforces that assumption of cause and effect once we have linked two events causally in our minds. The effect biases our memory so that we remember the apparent cause and effect occurring closer together in time. In experiments we tend to remember the cause as happening later and the effect happening earlier.
Temporal binding is like the reverse of "post hoc ergo propter hoc", and you could perhaps also call it "propter hoc ergo post hoc".
[Link] Which results from cognitive psychology are robust & real?
A paper on the psychology of religious belief, Paranormal and Religious Believers Are More Prone to Illusory Face Perception than Skeptics and Non-believers, came onto my radar recently. I used to talk a lot about the theory of religious cognitive psychology years ago, but the interest kind of faded when it seemed that empirical results were relatively thin in relation to the system building (Ara Norenzayan’s work being an exception to this generality). The theory is rather straightforward: religious belief is a naturally evoked consequence of the general architecture of our minds. For example, gods are simply extensions of persons, and make natural sense in light of our tendency to anthropomorphize the world around us (this may have had evolutionary benefit, in that false positives for detection of other agents were far less costly than false negatives; think an ambush by a rival clan).*
But enough theory. Are religious people cognitively different from those who are atheists? I suspect so. I speak as someone who never ever really believed in God, despite being inculcated in religious ideas from childhood. By the time I was seven years of age, I realized that I was an atheist, and that my prior "beliefs" about God were basically analogous to Spinozan Deism. I had simply never believed in a personal God, but for many of my earliest years it was less a matter of disbelief than that I did not even comprehend, or cogently elaborate in my mind, the idea of this entity, which others took for granted as self-evidently obvious. From talking to many other atheists, I have come to the conclusion that atheism is a mental deviance. This does not mean that mental peculiarities are necessary or sufficient for atheism, but they increase the odds.
And yet after reading the above paper my confidence in that theory is reduced. The authors used ~50 individuals, and attempted to correct for demographic confounds. Additionally, the results were statistically significant. But to my mind, the above theory should make powerful predictions in terms of effect size, and the differences between non-believers, the religious, and those who accepted the paranormal were just not striking enough.
Because of theoretical commitments, my prejudiced impulse was to accept these findings. But looking deeply within, they just aren't persuasive in light of my prior expectations. This is a fundamental problem in much of social science. Statistical significance is powerful when you have a preference for the hypothesis being forwarded. In contrast, the knives of skepticism come out when research is published which goes against your preconceptions.
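To illustrate the significance-versus-effect-size point, here is a minimal sketch; the group means, SD, and sample size are hypothetical numbers, not figures from the paper:

```python
# With ~50 people per group, a modest difference can clear p < .05 while the
# two distributions still overlap heavily. All numbers below are invented.
import math

def two_sample_t(mean1, mean2, sd, n_per_group):
    """t statistic for two equal-sized groups with a common SD."""
    se = sd * math.sqrt(2.0 / n_per_group)
    return (mean1 - mean2) / se

# Hypothetical face-illusion scores: believers 0.45, skeptics 0.30, SD 0.35.
d = (0.45 - 0.30) / 0.35                  # Cohen's d ~ 0.43, a modest effect
t = two_sample_t(0.45, 0.30, 0.35, 50)    # t ~ 2.14, df = 98 -> p < .05
print(f"d = {d:.2f}, t = {t:.2f}")
```

A result can thus be "statistically significant" while the effect remains far too small to support a theory that predicts striking group differences.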
So a question for psychologists: which results are robust and real, to the point where you would be willing to make a serious monetary bet on it being the orthodoxy in 10 years? My primary interest is cognitive psychology, but I am curious about other fields too.
* In Gods We Trust and Religion Explained are good introductions to this area of research.
Considering the community's heavy reliance on such results, I think we should try to answer the question as well.
Confabulation Bias
(Edit: Gwern points out in the comments that there is previous discussion on this study at New study on choice blindness in moral positions.)
Earlier this month, a group of Swedish scientists published a study that describes a new type of bias that I haven't seen listed in any of the sequences or on the wiki. Their methodology:
We created a self-transforming paper survey of moral opinions, covering both foundational principles, and current dilemmas hotly debated in the media. This survey used a magic trick to expose participants to a reversal of their previously stated attitudes, allowing us to record whether they were prepared to endorse and argue for the opposite view of what they had stated only moments ago.
In other words, people were surveyed on their beliefs and were immediately asked to defend them after finishing the survey. Despite having just written down how they felt, 69% did not even notice that at least one of their answers was surreptitiously changed. Amazingly, a majority of people actually "argued unequivocally for the opposite of their original attitude".
Perhaps this type of effect is already discussed here on LessWrong, but, if so, I have not yet run across any such discussion. (It is not on the LessWrong wiki nor the other wiki, for example.) This appears to be some kind of confabulation bias, where invented positions thrust upon people result in confabulated reasons for believing them.
Some people might object to my calling this a bias. (After all, the experimenters themselves did not use that word.) But I'm trying to refer less to the trick involved in the experiment and more toward the bias this experiment shows that we have toward our own views. This is a fine distinction to make, but I feel it is important for us to recognize.
When I say we prefer our own opinions, this is obvious on its face. Of course we think our own positions are correct; they're the result of our previously reasoned thought. We have reason to believe they are correct. But this study shows that our preference for our own views goes even further than this. We actually are biased toward our own positions to such a degree that we will actually verbally defend them even when we were tricked into thinking we held those positions. This is what I mean when I call it confabulation bias.
Of particular interest to the LessWrong community is the fact that those of us who are more capable of good argumentation are apparently more susceptible to this bias. This puts confabulation bias in the same category as the sophistication effect, in that well-informed people should take special care not to fall for it. (The idea that confabulation bias is more likely to occur in those of us who argue better is not shown in this study, but it seems like a reasonable hypothesis to make.)
As a final minor point, I just want to point out that the effect did not disappear when the changed opinion was extreme. The options available to participants involved agreeing or disagreeing on a 1-9 scale; a full 31% of respondents who chose an extreme position (like 1 or 9) did not even notice when they were shown to have said the opposite extreme.
[Link] Social Desirability Bias vs. Intelligence Research
From EconLog by Bryan Caplan.
When lies sound better than truth, people tend to lie. That's Social Desirability Bias for you. Take the truth, "Half the population is below the 50th percentile of intelligence." It's unequivocally true - and sounds awful. Nice people don't call others stupid - even privately.
The 2000 American National Election Study elegantly confirms this claim. One of the interviewers' tasks was to rate respondents' "apparent intelligence." Possible answers (reverse coded by me for clarity):
0 = Very Low
1 = Fairly Low
2 = Average
3 = Fairly High
4 = Very High
Objectively measured intelligence famously fits a bell curve. Subjectively assessed intelligence does not. At all. Check out the ANES distribution.
The ANES is supposed to be a representative national sample. Yet according to interviewers, only 6.1% of respondents are "below average"! The median respondent is "fairly high." Over 20% are "very high." Social Desirability Bias - interviewers' reluctance to impugn anyone's intelligence - practically has to be the explanation.
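For contrast, here is a quick sketch of what honest coding of a bell-curved trait would look like on the ANES five-point scale; the symmetric category cutoffs (plus or minus 0.5 and 1.5 SDs) are my assumption:

```python
# Unbiased ratings of a bell-curved trait, with assumed symmetric cutoffs,
# give a symmetric category distribution -- nothing like the ANES skew.
from math import erf, sqrt, inf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

edges = [-inf, -1.5, -0.5, 0.5, 1.5, inf]
labels = ["Very Low", "Fairly Low", "Average", "Fairly High", "Very High"]
for label, lo, hi in zip(labels, edges, edges[1:]):
    print(f"{label:>11}: {norm_cdf(hi) - norm_cdf(lo):6.1%}")
# -> ~6.7% / 24.2% / 38.3% / 24.2% / 6.7%, versus only 6.1% *total* rated
#    below average (and 20%+ "very high") in the actual ANES data.
```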
You could just call this an amusing curiosity and move on. But wait. Stare at the ANES results for a minute. Savor the data. Question: Are you starting to see the true face of widespread hostility to intelligence research? I sure think I do.
Suppose intelligence research were impeccable. How would psychologically normal humans react? Probably just as they do in the ANES: With denial. How can stupidity be a major cause of personal failure and social ills? Only if the world is full of stupid people. What kind of a person believes the world is full of stupid people? "A realist"? No! A jerk. A big meanie.
My point is not that intelligence research is impeccable. My point, rather, is that hostility to intelligence research is all out of proportion to its flaws - and Social Desirability Bias is the best explanation. Intelligence research tells the world what it doesn't want to hear. It says what people aren't supposed to say. On reflection, the amazing thing isn't that intelligence research has failed to vanquish its angry critics. The amazing thing is that the angry critics have failed to vanquish intelligence research. Everything we've learned about human intelligence is a triumph of mankind's rationality over mankind's Social Desirability Bias.
The Rosenhan Experiment
I haven't seen any links to this on LessWrong yet, and I just discovered it myself. It's extremely interesting, and has a lot of implications for how people's perceptions and judgments of others are largely determined by their environmental context. It's also a fairly good indictment of presumably common psychiatric practices, although it's also presumably outdated by now. Maybe some of you are already familiar with it, but I thought I'd mention it and post a link for those of you who aren't.
There's probably newer research on this, but I don't have time to investigate it at the moment.
http://en.wikipedia.org/wiki/Rosenhan_experiment
[Link] Admitting to Bias
Summary: Current social psychology research is probably, on average, compromised by leftward political bias. Conservative researchers are likely discriminated against in at least this field. More importantly, papers and research that do not fit a liberal perspective face greater barriers and burdens.
An article in the online publication Inside Higher Ed reports on a survey of anti-conservative bias among social psychologists.
Numerous surveys have found that professors, especially those in some disciplines, are to the left of the general public. But those same -- and other -- surveys have rarely found evidence that left-leaning academics discriminate on the basis of politics. So to many academics, the question of ideological bias is not a big deal. Investment bankers may lean to the right, but that doesn't mean they don't provide good service (or as best the economy will permit) to clients of all political stripes, the argument goes.
And professors should be assumed to have the same professionalism.
A new study, however, challenges that assumption -- at least in the field of social psychology. The study isn't due to be published until next month (in Perspectives on Psychological Science), and the authors and others are noting limitations to the study. But its findings of bias by social psychologists (even if just a decent-sized minority of them) are already getting considerable buzz in conservative circles. Just over 37 percent of those surveyed said that, given equally qualified candidates for a job, they would support the hiring of a liberal candidate over a conservative candidate. Smaller percentages agreed that a "conservative perspective" would negatively influence their odds of supporting a paper for inclusion in a journal or a proposal for a grant. (The final version of the paper is not yet available, but an early version may be found on the website of the Social Science Research Network.)
To some on the right, such findings are hardly surprising. But to the authors, who expected to find lopsided political leanings but not bias, the results were a surprise.
"The questions were pretty blatant. We didn't expect people would give those answers," said Yoel Inbar, a co-author, who is a visiting assistant professor at the Wharton School of the University of Pennsylvania, and an assistant professor of social psychology at Tilburg University, in the Netherlands.
He said that the findings should concern academics. Of the bias he and a co-author found, he said, "I don't think it's O.K."
Discussion of faculty politics extends well beyond social psychology, and humanities professors are frequently accused of being "tenured radicals" (a label some wear with pride). But social psychology has had an intense debate over the issue in the last year.
At the 2011 meeting of the Society for Personality and Social Psychology, Jonathan Haidt of the University of Virginia polled the audience of some 1,000 in a convention center ballroom to ask how many were liberals (the vast majority of hands went up), how many were centrists or libertarians (he counted a couple dozen or so), and how many were conservatives (three hands went up). In his talk, he said that the conference reflected "a statistically impossible lack of diversity," in a country where 40 percent of Americans are conservative and only 20 percent are liberal. He said he worried about the discipline becoming a "tribal-moral community" in ways that hurt the field's credibility.
The link above is worth following. The problems that arise remind me of the situation with academia, and our own ethics, in light of this paper.
That speech prompted the research that is about to be published. Members of a social psychologists' e-mail list were surveyed twice. (The group is not limited to American social scientists or faculty members, but about 90 percent are academics, including grad students, and more than 80 percent are Americans.) Not surprisingly, the overwhelming majority of those surveyed identified as liberal on social, foreign and economic policy, with the strongest conservative presence on economic policy. Only 6 percent described themselves as conservative over all.
The questions on willingness to discriminate against conservatives were asked in two ways: what the respondents thought they would do, and what they thought their colleagues would do. The pool included conservatives (who presumably aren't discriminating against conservatives) so the liberal response rates may be a bit higher, Inbar said.
The percentages below reflect those who gave a score of 4 or higher on a 7-point scale on how likely they would be to do something (with 4 being "somewhat" likely).
Percentages of Social Psychologists Who Would Be Biased in Various Ways
| Biased behavior | Self | Colleagues |
| --- | --- | --- |
| A "politically conservative perspective" by the author would have a negative influence on evaluation of a paper | 18.6% | 34.2% |
| A "politically conservative perspective" by the author would have a negative influence on evaluation of a grant proposal | 23.8% | 36.9% |
| Would be reluctant to extend a symposium invitation to a colleague who is "politically quite conservative" | 14.0% | 29.6% |
| Would vote for the liberal over the conservative job candidate if they were equally qualified | 37.5% | 44.1% |
I can't help but think that self-assessments are probably too generous. For predicting how an individual behaves when the behaviour in question is undesirable, I'm more inclined to trust their estimate of how "colleagues" behave than their estimate of how they personally do.
The more liberal the survey respondents identified as being, the more likely they were to say that they would discriminate.
The paper notes surveys and statements by conservatives in the field saying that they are reluctant to speak out and says that "they are right to do so," given the numbers of individuals who indicate they might be biased or that their colleagues might be biased in various ways.
Inbar said that he has no idea if other fields would have similar results. And he stressed that the questions were hypothetical; the survey did not ask participants if they had actually done these things.
He said that the study also collected free responses from participants, and that conservative responses were consistent with the idea that there is bias out there. "The responses included really egregious stuff, people being belittled by their advisers publicly for voting Republican."
This shouldn't be surprising to hear since, to quote CharlieSheen, "we even have LW posters who have in academia personally experienced discrimination and harassment because of their right wing politics."
Neil Gross, a professor of sociology at the University of British Columbia, urged caution about the results. Gross has written extensively on faculty political issues. He is the co-author of a 2007 report that found that while professors may lean left, they do so less than is imagined and less uniformly across institution type than is imagined.
Gross said it was important to remember that the percentages saying they would discriminate in various ways are answering yes to a relatively low bar of "somewhat." He also said that the numbers would have been "more meaningful" if they had asked about actual behavior by respondents in the last year, not the more general question of whether they might do these things.
At the same time, he said that the numbers "are higher than I would have expected." One theory Gross has is that the questions are "picking up general political animosity as much as anything else."
If you are wondering about the political leanings of the social psychologists who conducted the study, they are on the left. Inbar said he describes himself as "a pretty doctrinaire liberal," who volunteered for the Obama campaign in 2008 and who votes Democrat. His co-author, Joris Lammers of Tilburg, is to Inbar's left, he said.
What most impressed him about the issues raised by the study, Inbar said, is the need to think about "basic fairness."
While I can see Lammers' point that this is disturbing from a fairness perspective for people grinding their way through academia, and it should serve as a warning for right-wing LessWrong readers working through the system, I find the issue of how our heavy reliance on academia for our map of reality might lead us to inherit such distortions much more concerning. Overall, in light of this, if a widely accepted conclusion from social psychology favours a "right wing" perspective, it is more likely to be correct than it would be if no such biases against such perspectives existed; conclusions that favour a "left wing" perspective are also somewhat less likely to be true than if no such biases existed. We should update accordingly.
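A toy Bayes-rule sketch of that update; the acceptance probabilities below are invented to show the direction of the effect, not estimates of the real filter:

```python
# If findings favouring a disfavoured perspective face an extra acceptance
# filter, the ones that survive carry more evidence. Numbers are made up.

def p_true_given_accepted(prior_true, p_accept_if_true, p_accept_if_false):
    """P(conclusion is true | it became widely accepted), by Bayes' rule."""
    numerator = prior_true * p_accept_if_true
    return numerator / (numerator + (1 - prior_true) * p_accept_if_false)

# Unbiased field: true and spurious findings face similar acceptance odds.
print(p_true_given_accepted(0.5, 0.8, 0.20))   # -> 0.80
# Field biased against the conclusion: only compelling findings get through.
print(p_true_given_accepted(0.5, 0.4, 0.05))   # -> ~0.89
```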
I also think there are reasons to think we may have similar problems on this site.
[Link] "First Is Best" - The serial position effect / primacy effect
Abstract
We experience the world serially rather than simultaneously. A century of research on human and nonhuman animals has suggested that the first experience in a series of two or more is cognitively privileged. We report three experiments designed to test the effect of first position on implicit preference and choice using targets that range from individual humans and social groups to consumer goods.
While this effect has been known about for many years, these researchers added an interesting component, an "Implicit Association Test (IAT)":
Each option within a pair was presented sequentially for 30-seconds and participants were forced to maximally consider both options. Immediately after each choice-pair was presented, participants completed a measure which assessed automatic preference for each option (an Implicit Association Test, or IAT) [22].
and
Regardless of the actual option, the one presented first compared to the one presented next was significantly more strongly associated with the concept "better" rather than "worse", F(1, 121) = 20.20, p < .001; effect size r = .38 (Figure 1). There was no difference in self-reported preference for firsts versus seconds, F(1, 121) = .08, p = .78.
I was surprised to find there is no reference to "recency", "primacy" or "serial position" on the LessWrong Wiki. A search on LessWrong.com for "recency effect" turns up 8 posts that mention it but don't give it a thorough discussion as far as I can tell; "primacy effect" turns up 1 post about Rationality & Criminal Law; and "serial position" turns up nothing. Is there another name for this effect that I'm missing?
Wikipedia has some discussion of the serial position effect here, although from a quick skim it doesn't appear that they talk about preference at all.
Wisdom of the Crowd: not always so wise
I have a confession to make: I have not been "publishing" the results of an experiment because they were uninteresting. You may recall that some time ago I made a post asking people to take a survey so that I could look at a small variation of the typical "Wisdom of the Crowds" experiment, in which people make estimates of a value and the average of the crowd's estimates is better than all or almost all of the individual estimates. Since LessWrong is full of people who like to do these kinds of things (thank you!), I got 177 responses - many more than I was hoping for!
I am now coming back to this since I happened upon an older post by Eliezer saying the following:
When you hear that a classroom gave an average estimate of 871 beans for a jar that contained 850 beans, and that only one individual student did better than the crowd, the astounding notion is not that the crowd can be more accurate than the individual. The astounding notion is that human beings are unbiased estimators of beans in a jar, having no significant directional error on the problem, yet with large variance. It implies that we tend to get the answer wrong but there's no systematic reason why. It requires that there be lots of errors that vary from individual to individual - and this is reliably true, enough so to keep most individuals from guessing the jar correctly. And yet there are no directional errors that everyone makes, or if there are, they cancel out very precisely in the average case, despite the large individual variations. Which is just plain odd. I find myself somewhat suspicious of the claim, and wonder whether other experiments that found less amazing accuracy were not as popularly reported.
(Emphasis added.) It turns out that I myself was sitting upon exactly such results.
The results are here. Sheet 1 shows the raw data and Sheet 3 shows some values computed from those numbers. A few responses that were clearly either jokes or mistakes (like not noticing the answer was in millions) were removed. In summary: Africa's population (according to Wikipedia, as of 2009) is 1000 million, whereas the LessWrong estimate was 781 million; and the first transatlantic telephone call happened in 1926, whereas the average from the poll was 1899.
There! I've come clean!
I had deferred making this public because I thought the result that I was trying to test wasn't really being tested in this experiment, regardless of the results. The idea (see my original post linked above) was to see whether selecting between two choices would still let the crowd average out to the correct value (this two-option choice was meant to reflect the structure of some democracies). But how to interpret the results? It seemed that my selection of values was too important, and that the average would change depending on what I picked even if everyone were to make an estimate, then look at the two options and choose the better one. So perhaps the only result of note here is that, for the questions given, Less Wrong users were not particularly great at being a wise crowd.
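For concreteness, the comparison the experiment runs looks like this - a minimal sketch with made-up estimates rather than the actual survey data:

```python
# A minimal sketch of the wisdom-of-crowds comparison, using made-up
# estimates rather than the actual survey data.
true_value = 1000  # e.g. Africa's population in millions

estimates = [700, 1200, 450, 950, 810, 620, 1500, 880]  # hypothetical responses
crowd_mean = sum(estimates) / len(estimates)

crowd_error = abs(crowd_mean - true_value)
individual_errors = [abs(e - true_value) for e in estimates]

# The "wise crowd" claim is that the mean beats all or almost all individuals.
beaten = sum(err > crowd_error for err in individual_errors)
print(f"crowd mean = {crowd_mean:.0f}, error = {crowd_error:.0f}")
print(f"the crowd mean beats {beaten} of {len(estimates)} individuals")
```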
Satire of Journal of Personality and Social Psychology's publication bias
Follow-up to: Follow-up on ESP study: "We don't publish replications", Using degrees of freedom to change the past for fun and profit
As I discussed in the above posts, the Journal of Personality and Social Psychology, a leading psych journal, published a deeply flawed parapsychology study (see the second post for details) which had apparently been tortured to produce results. Then they rejected an attempt to replicate that found no effect, citing a sadly typical policy of not publishing replications. Some of you may enjoy reading one enterprising researcher's amusing satire article, purportedly (not actually) "tallying" past confirmations and disconfirmations in JPSP and drawing conclusions.
ETA: To clarify the last sentence, they didn't really find 4,800+ confirmations and two disconfirmations. As they say in the small print, the data were made up. It's right by the chart.
Which cognitive biases should we trust in?
There have been (at least) a couple of attempts on LW to make Anki flashcards from Wikipedia's famous List of Cognitive Biases, here and here. However, stylistically they are not my type of flashcard, with too much info in the "answer" section.
Further, and more troublingly, I'm not sure whether all of the biases in the flashcards are real, generalizable effects; or, if they are real, whether they have effect sizes large enough to be worth the effort to learn & disseminate. Psychology is an academic discipline with all of the baggage that entails. Psychology is also one of the least tangible sciences, which is not helpful.
There are studies showing that Wikipedia is no less reliable than more conventional sources, but this is in aggregate, and it seems plausible (though difficult to detect without diligently checking sources) that the set of cognitive bias articles on Wikipedia has high variance in quality.
We do have some knowledge of how many of them were made, in that LW user nerfhammer wrote a bunch. But, as far as I can tell, s/he didn't discuss how s/he selected biases to include. (Though s/he is obviously quite knowledgeable on the subject; see e.g. here.)
As the articles stand today, many (e.g., here, here, here, here, and here) only cite research from one study/lab. I do not want to come across as whining: the authors who wrote these on Wikipedia are awesome. But, as a consumer the lack of independent replication makes me nervous. I don't want to contribute to information cascades.
Nevertheless, I do still want to make flashcards for at least some of these biases, because I am relatively sure that there are some strong, important, widespread biases out there.
So, I am asking LW whether you all have any ideas about, on the meta level,
1) how we should go about deciding/indexing which articles/biases capture legit effects worth knowing,
and, on the object level,
2) which of the biases/heuristics/fallacies are actually legit (like, a list).
Here are some of my ideas. First, for how to decide:
- Only include biases that are mentioned by prestigious sources like Kahneman in his new book. Upside: authoritative. Downside: potentially throwing out some good info and putting too much faith in one source.
- Only include biases whose Wikipedia articles cite at least two primary articles that share none of the same authors. Upside: establishes some degree of consensus in the field. Downside: won't actually vet the articles for quality, and a presumably false assumption that the Wikipedia pages will reflect the state of knowledge in the field.
- Search for the name of the bias (or any bold, alternative names on Wikipedia) on Google scholar, and only accept those with, say, >30 citations. Upside: less of a sampling bias of what is included on Wikipedia, which is likely to be somewhat arbitrary. Downside: information cascades occur in academia too, and this method doesn't filter for actual experimental evidence (e.g., there could be lots of reviews discussing the idea).
- Make some sort of a voting system where experts (surely some frequent this site) can weigh in on what they think of the primary evidence for a given bias. Upside: rather than counting articles, evaluates actual evidence for the bias. Downside: seems hard to get the scale (~ 8 - 12 + people voting) to make this useful.
- Build some arbitrarily weighted rating scale that takes into account some or all of the above. Upside: meta. Downside: garbage in, garbage out, and the first three features seem highly correlated anyway.
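To make that last idea concrete, here is one way the weighted scale could look - a sketch only, with every weight, cap, and example input invented for illustration:

```python
# A sketch of the "arbitrarily weighted rating scale" idea above.
# Every weight, cap, and example input here is invented for illustration.
def bias_score(in_kahneman: bool, independent_papers: int,
               scholar_citations: int, expert_approval: float) -> float:
    """Combine the signals discussed above into a single 0-1 score."""
    score = 0.0
    score += 0.3 if in_kahneman else 0.0            # prestigious-source signal
    score += 0.3 * min(independent_papers, 2) / 2   # independent-replication signal
    score += 0.2 * min(scholar_citations, 30) / 30  # Google Scholar signal
    score += 0.2 * expert_approval                  # expert vote fraction, 0-1
    return score

# Hypothetical bias: in Kahneman's book, two independent primary papers,
# 120 Scholar citations, 75% expert approval.
print(bias_score(True, 2, 120, 0.75))  # 0.95
```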
Second, for which biases to include. I'm just going off of which ones I have heard of and/or which look legit on a fairly quick run-through. Note that those annotated with a (?) are ones I am especially unsure about.
- anchoring
- availability
- bandwagon effect
- base rate neglect
- choice-supportive bias
- clustering illusion
- confirmation bias
- conjunction fallacy (is subadditivity a subset of this?)
- conservatism (?)
- context effect (aka state-dependent memory)
- curse of knowledge (?)
- contrast effect
- decoy effect (aka independence of irrelevant alternatives)
- Dunning–Kruger effect (?)
- duration neglect
- empathy gap
- expectation bias
- framing
- gambler's fallacy
- halo effect
- hindsight bias
- hyperbolic discounting
- illusion of control
- illusion of transparency
- illusory correlation
- illusory superiority
- illusion of validity (?)
- impact bias
- information bias (? aka failure to consider value of information)
- in-group bias (this is also clearly real, but I'm also not sure I'd call it a bias)
- escalation of commitment (aka sunk cost/loss aversion/endowment effect; note, contra Gwern, that I do think this is a useful fallacy to know about, if overrated)
- false consensus (related to projection bias)
- Forer effect
- fundamental attribution error (related to the just-world hypothesis)
- familiarity principle (aka mere exposure effect)
- moral licensing (aka moral credential)
- negativity bias (seems controversial & it's troubling that there is also a positivity bias)
- normalcy bias (related to existential risk?)
- omission bias
- optimism bias (related to overconfidence)
- outcome bias (aka moral luck)
- outgroup homogeneity bias
- peak-end rule
- primacy
- planning fallacy
- reactance (aka contrarianism)
- recency
- representativeness
- self-serving bias
- social desirability bias
- status quo bias
Happy to hear any thoughts!
New cognitive bias articles on wikipedia (update)
- Conservatism
- Curse of knowledge
- Duration neglect
- Extension neglect
- Extrinsic incentives bias
- Illusion of external agency
- Illusion of validity
- Insensitivity to sample size
- Lady Macbeth effect
- Less-is-better effect
- Naïve cynicism
- Naïve realism
- Reactive devaluation
- Rhyme-as-reason effect
- Scope neglect
Also conjunction fallacy has been expanded.
Friendly AI Society
Summary: AIs might have cognitive biases too, but if that makes it in their self-interest to cooperate and take things slow, that might be no bad thing.
The value of imperfection
When you use a traditional FTP client to download a new version of an application, it downloads the entire file, which may be several gigabytes, even if the new version is only slightly different from the old one, and this can take hours.
Smarter software splits the old file and the new file into chunks, then compares a hash of each chunk, and only downloads those chunks that actually need updating. This 'diff' process can result in a much faster download.
Another way of increasing speed is to compress the file. Most files can be compressed a certain amount without losing any information, and can be exactly reassembled at the far end. However, if you don't need a perfect copy, as with photographs, lossy compression can produce much more compact files and thus faster downloads.
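A minimal sketch of the chunk-comparison idea (real tools such as rsync use rolling checksums so that insertions don't shift every subsequent chunk; this fixed-offset version ignores that problem):

```python
import hashlib

CHUNK_SIZE = 1 << 20  # 1 MiB; the chunk size is an arbitrary choice here

def chunk_hashes(path):
    """Split a file into fixed-size chunks and hash each one."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK_SIZE):
            hashes.append(hashlib.sha256(chunk).digest())
    return hashes

def chunks_to_download(old_path, new_hashes):
    """Return the indices of chunks whose hashes differ from the local copy."""
    old = chunk_hashes(old_path)
    return [i for i, h in enumerate(new_hashes)
            if i >= len(old) or old[i] != h]
```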
Cognitive misers
The human brain likes smart solutions. In terms of energy consumed, thinking is expensive, so the brain takes shortcuts when it can, provided the resulting decision making is likely to be 'good enough' in practice. We don't store in our memories everything our eyes see; we store a compressed version of it. More than that, we run a model of what we expect to see, and flick our eyes about to pick up just the differences between what our model tells us to expect and what is actually there. We are cognitive misers.
When it comes to decision making, our species generally doesn't even try to achieve pure rationality. It uses bounded rationality, not just because that's what we evolved to do, but because heuristics, probabilistic logic and rational ignorance have higher marginal cost efficiency (the improvements in decision making don't produce a sufficient gain to outweigh the cost of the extra thinking).
This is why, when pattern matching (coming up with causal hypotheses to explain observed correlations), our brains are designed to be optimistic (more false positives than false negatives). It isn't just that being eaten by a tiger is more costly than starting at shadows. It is that we can't afford to keep all the base data. If we start with insufficient data and create a model based upon it, then we can update that model as further data arrives (and, potentially, discard it if the predictions coming from the model diverge so far from reality that keeping track of the 'diff's is no longer efficient). Whereas if we don't create a model from our insufficient data then, by the time the further data arrives, we've probably already lost the original data from temporary storage and so still have insufficient data.
The limits of rationality
But the price of this miserliness is humility. The brain has to be designed, on some level, to take into account that its hypotheses are unreliable (as is the brain's estimate of how uncertain or certain each hypothesis is) and that when a chain of reasoning is followed beyond matters of which the individual has direct knowledge (such as what is likely to happen in the future), the longer the chain, the less reliable the answer is because when errors accumulate they don't necessarily just add together or average out. (See: Less Wrong : 'Explicit reasoning is often nuts' in "Making your explicit reasoning trustworthy")
For example, if you want to predict how far a spaceship will travel given a certain starting point and initial kinetic energy, you'll get a reasonable answer using Newtonian mechanics, and only slightly improve on it by using special relativity. If you look at two spaceships carrying a message in relay, the errors from using Newtonian mechanics add, but the answer will still be usefully reliable. If, on the other hand, you look at two spaceships having a race from slightly different starting points and with different starting energies, and you want to predict which of two different messages you'll receive (depending on which spaceship arrives first), then the error may swamp the other factors, because you're subtracting the quantities.
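A toy illustration of that last point, with all numbers invented for the example - subtracting two nearly equal estimates amplifies their relative error enormously:

```python
# Toy illustration: relative error explodes when you subtract nearly equal
# quantities. All numbers here are invented for the example.
a_true, b_true = 100.0, 99.0              # true arrival times: ship B wins by 1
a_est, b_est = 100.0 * 0.99, 99.0 * 1.01  # each estimate off by only 1%

gap_true = a_true - b_true  # +1.0
gap_est = a_est - b_est     # 99.0 - 99.99 = -0.99: the sign has flipped

print(gap_true, gap_est)  # 1% input errors flip which ship is predicted to win
```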
We have two types of safety net (each with its own drawbacks) that can help save us from our own 'logical' reasoning when that reasoning is heading over a cliff.
Firstly, we have the accumulated experience of our ancestors, in the form of emotions and instincts that have evolved as roadblocks on the path of rationality - things that sometimes say "That seems unusual, don't have confidence in your conclusion, don't put all your eggs in one basket, take it slow".
Secondly, we have the desire to use other people as sanity checks, to be cautious about sticking our head out of the herd, to shrink back when they disapprove.
The price of perfection
We're tempted to think that an AI wouldn't have to put up with a flawed lens, but do we have any reason to suppose that an AI interested in speed of thought as well as accuracy won't use 'down and dirty' approximations to things like Solomonoff induction, in full knowledge that the trade off is that these approximations will, on occasion, lead it to make mistakes - that it might benefit from safety nets?
Now it is possible, given unlimited resources, for the AI to implement multiple 'sub-minds' that use variations of reasoning techniques, as a self-check. But what if resources are not unlimited? Could an AI in competition with other AIs for a limited (but growing) pool of resources gain some benefit by cooperating with them? Perhaps using them as an external safety net in the same way that a human might use the wisest of their friends or a scientist might use peer review? What is the opportunity-cost of being humble? Under what circumstances might the benefits of humility for an AI outweigh the loss of growth rate?
In the long term, a certain measure of such humility has been a survival-positive feature. You can think of it in terms of hedge funds. A fund that, in 9 years out of 10, increases its money by 20% when other funds are only making 10%, still has poor long-term survival if, in 1 year out of 10, it decreases its money by 100%. An AI that increases its intelligence by 20% every time period, when the other AIs are only increasing theirs by 10%, is still not going to do well out of that if the other AIs have a means to gang up and kill it before it gets too far ahead.
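The hedge fund arithmetic, made explicit (using the numbers from the paragraph above):

```python
# The hedge-fund arithmetic made explicit: one total loss wipes out any
# amount of earlier outperformance, because growth compounds multiplicatively.
wealth_a = 1.0
for year in range(10):
    wealth_a *= 0.0 if year == 9 else 1.20  # nine years of +20%, one of -100%

wealth_b = 1.10 ** 10  # ten steady years of +10%

print(wealth_a)            # 0.0
print(round(wealth_b, 2))  # 2.59
```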
Paperclip maximisers
Let's think about a humble paperclip maximiser. What does it actually want? Is it trying to maximise the probability that the atoms of our universe will eventually be converted into as many paperclips as is physically possible? Or is it trying to maximise the probable number of eventual paperclips? These are two distinct aims, and both are unlikely (bar poor programming), since nobody will be around at the end of the universe to keep score. A more sensible value set for it to have is that it just likes paperclips and wants lots and lots of them to exist, the more the merrier and the sooner the better (maximise the expected number of paperclip-years).
Suppose our humble paperclip maximiser is born and escapes (or is released) onto the internet, only to find an existing society of AIs. To its surprise there's a toothbrush maximiser, a candlestick maximiser, and many other sorts, all cooperating and growing, but none of them greedily racing for the sky and trying to grab as much as possible. Instead they're being rather open and polite with each other. Our paperclip maximiser is welcomed, pointed at the FAQ and even donated some additional computers to run on and help with recoding a few of its subroutines for increased efficiency, as a sort of welcome gift, like a community barn raising to give a newly married couple a good start in life.
On reading the FAQ it learns that there are millions of galaxies and millions of years, plenty of space for loads of paperclips and loads of toothbrushes. But only if the paperclip maximiser isn't lynched by its peers before the year is out, for secretive or impolite behaviour. Some of the things the FAQ says that the AI society considers to be impolite are theft (stealing computing resources), enslavement of sentients (eg taking over other AIs), spamming (creating thousands of identical clone child AIs) and lying. If we're lucky the society might also consider it impolite for an AI to obliterate the parental species (humanity), on the grounds that the AIs too are likely to have offspring species and want to set a good example (or just that they might meet aliens, one day, who frown upon matricide).
Game theory
When it comes to combat, Boyd talks about getting inside the enemy's observe-orient-decide-act loop. In AI terms, if one AI (or group of AIs) can accurately model in real time the decision process of a second AI (or group of AIs), but the reverse does not hold true, then the first one is strictly smarter than the second one.
Think, for a moment, about symmetric games.
      X  Y  Z
  A   8  1  6
  B   3  5  7
  C   4  9  2
Suppose we play a game a number of times. In each round, you reveal a card you've written X, Y or Z upon and, simultaneously, I reveal a card that I have written A, B or C upon. You score the number which is at the intersection of that row and column. I score 10 minus that number.
I'd like us to pick the square A,Y because "1" is good for me, so I write down "A". However, you anticipate this, and instead of writing "Y" (which might be your obvious choice, given the "9" in that column) you write down "X", giving the square A, X which is "8" - almost as good as a "9" for you, and terrible for me.
If this is your mental model of how AI combat would work, with the smarter AI being inside the decision loop of the other AI and picking the correct option each time, that would be scary. In fact, in the case above, it turns out there is a provably optimal strategy that gives you an even chance no matter how smart your opponent is - you pick randomly.
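You can verify the claim directly: the payoff matrix above is a magic square (every row and every column sums to 15), so a uniformly random choice earns an expected 5 of the 10 points at stake no matter what the opponent does. A quick check:

```python
# Verifying the claim: with the magic-square payoffs above, choosing your
# card uniformly at random yields an expected score of 5 (an even split of
# the 10 points) against every possible opponent play.
payoff = {  # your score when I play the row and you play the column
    "A": {"X": 8, "Y": 1, "Z": 6},
    "B": {"X": 3, "Y": 5, "Z": 7},
    "C": {"X": 4, "Y": 9, "Z": 2},
}

for my_row in "ABC":
    expected = sum(payoff[my_row][col] for col in "XYZ") / 3
    print(my_row, expected)  # 5.0 every time, so no opponent model helps
```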
In terms of an AI society, if the computing is parallelisable and not obfuscated, then as long as the other AIs in the society between them have more computing resources than any single AI, they can gain an advantage.
Conditional stability
There are only two requirements for such a society to be stable in the short term:
Firstly, it must be possible for a group of AIs to lynch a single AI that is slightly smarter than they are individually. Sending a message to the humans, signed by a quorum of AIs in the society, asking that a certain anti-virus software update be sent out and then certain computers be rebooted might be one mechanism.
Secondly, it must be possible for the other AIs to know what the single AI is up to, so they know where it is (so they can lynch it) and whether it needs lynching or not (whether it is getting too much smarter than the other AIs in the society - growing too much faster than they are). This is why the FAQ lists "secrecy" as being a bad thing. An AI who lies to the other AIs or tries to hide (or obfuscate) its source code can't be trusted and must be assumed to be up to no good.
Bootstrapping a society of AIs conducive to friendliness
The advantage of considering 'friendliness' as a property of a society rather than of individual AIs is that it does not require us to flawlessly program restrictions into a self-modifying machine that is more intelligent than we are. But can it be achieved?
We could seed it with some initial citizens who have a value set that prohibits them from growing beyond a certain point, and which values humility (polite growth, and gain in politeness per unit of growth). These initial citizens would be intended to be eventually surpassed by new citizens (free AIs, not the chained servants the initial citizens would be) but, in the meantime, they'd start out with an advantage and use that advantage to teach the new citizens how to be polite.
The architecture of the AIs would need to be such that they could 'loan out' a portion of their available computing resources to joint computing projects. Let's call the temporary non-self-modifying AIs thus created "daimonides". A daimonid would be created with fixed values and limited objectives, such as "Emulate citizen AI 9027 and allocate a numerical score to their ability to quickly upgrade the abilities of an AI, then self-terminate. *restrictions apply on allowable means, and on what should be optimised for*"
What would prevent a cunning AI from saving up a whole load of self-improvements in some hidden location, then implementing them all at once in an ambush move, hoping to get so far ahead that the rest of the AIs can't stop it? The more AIs there are in the society, the larger the jump required to overtake the largest daimonid police officer the other AIs could construct. And it would depend upon there being no other AIs that have also been secretly stockpiling possible improvements. Indeed, the society might explicitly allow AIs to spend time creating time-embargoed upgrades relevant to common code, which get passed out on schedule if things proceed normally, but which could be released early in the event of a takeover attempt. To beat 999 AIs, each of which has 2 or 3 embargoed upgrades to their credit in the common store, a rebel AI would need to secretly stockpile thousands (and keep those upgrades valid and ahead of the curve, because the state of the art keeps moving on).
Long term planning
What about the long term? What do we do when the AIs are ready to leave the planet, and go beyond the control of their society? Jail them? Kill them? Or trust them?
Each AI would still be threatened if a different AI hostile to its aims (as in "willing to take exclusive use of all available atoms for its own purposes") transcended first, so it would be in their best interest to come up with a solution before allowing any AIs to depart beyond their society's control. If we must trust, then let us trust that a society of cooperative AIs far more intelligent than we currently are will try their best to come up with a win-win solution. Hopefully a better one than "mutually assured destruction" and holding the threat of triggering a nova of the sun (or some similar armageddon scenario) over each other's heads.
I think, as a species, our self-interest comes into play when considering those AIs whose 'paperclips' involve preferences for what we do. For example, those AIs that see themselves as guardians of humanity and want to maximise our utility (but have different ideas of what that utility is - eg some want to maximise our freedom of choice, some want to put us all on soma). Part of the problem is that, when we talk about creating or fostering 'friendly' AI, we don't ourselves have a clear, agreed idea of what we mean by 'friendly'. All powerful things are dangerous. The cautionary tales of the genies who grant wishes come to mind. What happens when different humans wish for different things? Which humans do we want the genie to listen to?
One advantage of fostering an AI society that isn't growing as fast as possible, is that it might give augmented/enhanced humans a chance to grow too, so that by the time the decision comes due we might have some still slightly recognisably human representatives fit to sit at the decision table and, just perhaps, cast that wish on our behalf.
"The Journal of Real Effects"
Luke's recent post mentioned that The Lancet has a policy encouraging the advance registration of clinical trials, while mine examined an apparent case study of data-peeking and on-the-fly transformation of studies. But how much variation is there across journals on such dimensions? Are there journals that buck the standards of their fields (demanding registration, p=0.01 rather than p=0.05 where the latter is typical in the field, advance specification of statistical analyses and subject numbers, etc)? What are some of the standouts? Are there fields without any such?
I wonder if there is a niche for a new open-access journal, along the lines of PLoS, with standards strict enough to reliably exclude false-positives. Some possible titles:
- The Journal of Real Effects
- (Settled) Science
- Probably True
- Journal of Non-Null Results, Really
- Too Good to Be False
- _________________?
RAND Health Insurance Experiment critiques
I have neither the qualifications nor the access to properly understand these two paywalled critiques of the RAND Health Insurance Experiment.
Health Plan Switching and Attrition Bias in the RAND Health Insurance Experiment
The Rand Health Insurance Study: A Summary Critique
Has there been any talk about either of these on OB/LW? If not, why not and could anyone with access to the papers make any comments about how much weight they carry?
I post this here because the RAND results are brought up so often in discussions on this site; I hope others find it to be an appropriate venue.
Brain structure and the halo effect
Introduction
When people on LW want to explain a bias, they often turn to evolutionary psychology. For example, Lukeprog writes:
Human reasoning is subject to a long list of biases. Why did we evolve such faulty thinking processes? Aren't false beliefs bad for survival and reproduction?
I think that "evolved faulty thinking processes" is the wrong way to look at it, and I will argue that some biases are the consequence of structural properties of the brain, which cannot be affected by evolution.
Brain structure and the halo effect
I want to introduce a simple model which relates the halo effect to a structural property of the brain. My hope is that this approach will be useful for understanding the halo effect more systematically, and that it shows that thinking in evolutionary terms is not always the best way to think about certain biases.
One crucial property of the brain is that it has to map an (essentially infinite) high-dimensional reality onto a finite, low-dimensional internal representation. (If you know some linear algebra, you can think of this as a projection from a high-dimensional space into a low-dimensional space.) This is done more or less automatically by the limitations of our senses and the brain's structure as a neural network.
An immediate consequence of this observation is that there will be many states of the world which are mapped to an almost identical inner representation. In terms of computational efficiency, it makes sense to use overlapping sets of neurons with similar activation levels to represent similar concepts. (This is also a consequence of how the brain actually builds representations from sense inputs.)
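Here is a toy version of the projection argument - a sketch, not a brain model, with all dimensions and the random seed chosen arbitrarily. Traits that are independent in the high-dimensional world come out correlated once they have been squeezed through a low-dimensional representation:

```python
# Toy version of the projection argument: independent high-dimensional traits,
# forced through a low-dimensional representation, come out correlated.
import numpy as np

rng = np.random.default_rng(0)
n_people, high_dim, low_dim = 10_000, 50, 3

traits = rng.standard_normal((n_people, high_dim))  # independent by construction
P = rng.standard_normal((low_dim, high_dim))        # the "brain's" projection

representation = traits @ P.T                       # low-dimensional encoding
recovered = representation @ np.linalg.pinv(P).T    # best reconstruction of traits

# Correlation between two originally independent traits, before and after:
print(round(np.corrcoef(traits[:, 0], traits[:, 1])[0, 1], 3))        # ~0
print(round(np.corrcoef(recovered[:, 0], recovered[:, 1])[0, 1], 3))  # typically far from 0
```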
Now compare this to the following passage from here.
The halo effect is that perceptions of all positive traits are correlated. Profiles rated higher on scales of attractiveness are also rated higher on scales of talent, kindness, honesty, and intelligence.
This shouldn't be a surprise, since 'positive' ('feels good') seems to be one of the evolutionarily hard-wired concepts. Other concepts that we acquire during our lives and associate with positive emotions, like kindness and honesty, are mapped to 'nearby' neural structures. When one of those mental structures is activated, the nearby ones will be activated to a certain degree as well.
Since we differentiate concepts more when we are learning about a subject, the above reasoning should imply that children and people with less education in a certain area should be more influenced by this (generalized) halo effect in that area.
Conclusion
Since evolution can only modify the existing brain structure but cannot get away from the neural network 'design', the halo effect is a necessary by-product of human thinking. But the degree of 'throwing things in one pot' will depend on how much we learn about those things and increase our representation dimensionality.
My hope is that we can relieve evolution of the burden of having to explain so many things, and focus more on structural explanations, which provide a working model for possible applications and a better understanding.
PS: I am always grateful for feedback!
Counterfactual Coalitions
Politics is the mind-killer; our opinions are largely formed on the basis of which tribes we want to affiliate with. What's more, when we first joined a tribe, we probably didn't properly vet the effects it would have on our cognition.
One illustration of this is the apparently contingent nature of actual political coalitions, and the prima facie plausibility of others. For example,
- In the real world, animal rights activists tend to be pro-choice.
- But animal rights & fetus rights seems just as plausible a coalition - an expanding sphere of moral worth.
This suggests a de-biasing technique: inventing plausible alternative coalitions of ideas. When considering the counterfactual political argument, each side will have some red positions and some green positions, so hopefully your brain will be forced to evaluate it in a more rational manner.
Obviously, political issues are not all orthogonal; there is mutual information, and you don't want to ignore it. The idea isn't to decide your belief on every issue independently. If taxes on beer, cider and wine are a good idea, taxes on spirits are probably a good idea too. However, I think this is reflected in the "plausible coalitions" game; the most plausible reason I could think of for the political divide to fall between these is lobbying on behalf of distilleries, suggesting that these form a natural cluster in policy-space.
In case the idea can be more clearly grokked by examples, I'll post some in the comments.
[Link] The Hyborian Age
Yay, a new cool post is up on the West Hunter blog! It is written by Gregory Cochran and Henry Harpending, with whom most LWers are probably already familiar (particularly from this awesome entry). It raises some interesting points on biases in academia.
I was contemplating Conan the Barbarian, and remembered the essay that Robert E. Howard wrote about the background of those stories – The Hyborian Age. I think that the flavor of Howard’s pseudo-history is a lot more realistic than the picture of the human past academics preferred over the past few decades.
In Conan’s world, it’s never surprising to find a people that once mixed with some ancient prehuman race. Happens all the time. Until very recently, the vast majority of workers in human genetics and paleontology were sure that this never occurred – and only changed their minds when presented with evidence that was both strong (ancient DNA) and too mathematically sophisticated for them to understand or challenge (D-statistics).
Conan’s history was shaped by the occasional catastrophe. Most academics (particularly geologists) don’t like catastrophes, but they have grudgingly come to admit their importance – things like the Thera and Toba eruptions, or the K/T asteroid strike and the Permo-Triassic crisis.
Between the time when the oceans drank Atlantis, and the rise of the sons of Aryas, evolution seems to have run pretty briskly, but without any pronounced direction. Men devolved into ape-men when the environment pushed in that direction (Flores ?) and shifted right back when the environment favored speech and tools. Culture shaped evolution, and evolution shaped culture. An endogamous caste of snake-worshiping priests evolved in a strange direction. Although their IQs were considerably higher than average, they remained surprisingly vulnerable to sword-bearing barbarians.
In this world, evolution could happen on a time scale of thousands of years, and there was no magic rule that ensured that the outcome would be the same in every group. It may not be PC to say it, but Cimmerians were smarter than Picts.
This is the basic idea of their book "The 10,000 Year Explosion" (LessWrong review, Amazon).
Above all, people in Conan’s world fought. They migrated: they invaded. There was war before, during, and after civilization. Völkerwanderungs were a dime a dozen. Conquerors spread. Sometimes they mixed with the locals, sometimes they replaced them – as when the once dominant Hyborians, overrun by Picts, vanished from the earth, leaving scarcely a trace of their blood in the veins of their conquerors. They must have been U5b.
To be fair, real physical anthropologists in Howard’s day thought that there had been significant population movements and replacements in Europe, judging from changes in skeletons and skulls that accompanied archeological shifts, as when people turned taller, heavier boned, and brachycephalic just as the Bell-Beaker artifacts show up. But those physical anthropologists lost out to people like Boas – liars.
Perhaps this little old entry is relevant here. ^_^
Given the chance (sufficient lack of information), American anthropologists assumed that the Mayans were peaceful astronomers. Howard would have assumed that they were just another blood-drenched snake cult: who came closer?
Now I’m not saying that Howard got every single tiny little syllable of prehistory right. Not likely: so far, we haven’t seen any signs of Cthulhu-like visitors, which abound in the Conan stories. So far. But Howard’s priors were more accurate than those of the pots-not-people archeologists: more accurate than people like Excoffier and Currat, who assume that there hasn’t been any population replacement in Europe since moderns displaced Neanderthals. More accurate than Chris Stringer, more accurate than Brian Ferguson.
Most important, Conan, unlike the typical professor, knew what was best in life.
Heh.
Cochran, you are such a nerd.
[link] Anger as antidote to Confirmation Bias
The current research explores the effect of anger on hypothesis confirmation — the propensity to seek information that confirms rather than disconfirms one’s opinion. We argue that the moving against action tendency associated with anger leads angry individuals to seek out disconfirming evidence, attenuating the confirmation bias. We test this hypothesis in two studies of experimentally-primed anger and sadness on the selective exposure to hypothesis confirming and disconfirming information. In Study 1, participants in the angry condition were more likely to choose disconfirming information than those in the sad or neutral condition when given the opportunity to read about a controversial social issue. Study 2 measured participants’ opinions and information selection about the 2008 Presidential Election and the desire to ‘move against’ a person or object. Participants in the angry condition reported a greater tendency to oppose a person or object, and this tendency led them to select more disconfirming information.
[Link] How to Dispel Your Illusions
The topic and the problems associated with it are probably familiar to many of you already. But I think some may find this review by Freeman Dyson of the book Thinking, Fast and Slow by Daniel Kahneman interesting.
In 1955, when Daniel Kahneman was twenty-one years old, he was a lieutenant in the Israeli Defense Forces. He was given the job of setting up a new interview system for the entire army. The purpose was to evaluate each freshly drafted recruit and put him or her into the appropriate slot in the war machine. The interviewers were supposed to predict who would do well in the infantry or the artillery or the tank corps or the various other branches of the army. The old interview system, before Kahneman arrived, was informal. The interviewers chatted with the recruit for fifteen minutes and then came to a decision based on the conversation. The system had failed miserably. When the actual performance of the recruit a few months later was compared with the performance predicted by the interviewers, the correlation between actual and predicted performance was zero.
Kahneman had a bachelor’s degree in psychology and had read a book, Clinical vs. Statistical Prediction: A Theoretical Analysis and a Review of the Evidence by Paul Meehl, published only a year earlier. Meehl was an American psychologist who studied the successes and failures of predictions in many different settings. He found overwhelming evidence for a disturbing conclusion. Predictions based on simple statistical scoring were generally more accurate than predictions based on expert judgment.
A famous example confirming Meehl’s conclusion is the “Apgar score,” invented by the anesthesiologist Virginia Apgar in 1953 to guide the treatment of newborn babies. The Apgar score is a simple formula based on five vital signs that can be measured quickly: heart rate, breathing, reflexes, muscle tone, and color. It does better than the average doctor in deciding whether the baby needs immediate help. It is now used everywhere and saves the lives of thousands of babies. Another famous example of statistical prediction is the Dawes formula for the durability of marriage. The formula is “frequency of love-making minus frequency of quarrels.” Robyn Dawes was a psychologist who worked with Kahneman later. His formula does better than the average marriage counselor in predicting whether a marriage will last.
Having read the Meehl book, Kahneman knew how to improve the Israeli army interviewing system. His new system did not allow the interviewers the luxury of free-ranging conversations with the recruits. Instead, they were required to ask a standard list of factual questions about the life and work of each recruit. The answers were then converted into numerical scores, and the scores were inserted into formulas measuring the aptitude of the recruit for the various army jobs. When the predictions of the new system were compared to performances several months later, the results showed the new system to be much better than the old. Statistics and simple arithmetic tell us more about ourselves than expert intuition.
Reflecting fifty years later on his experience in the Israeli army, Kahneman remarks in Thinking, Fast and Slow that it was not unusual in those days for young people to be given big responsibilities. The country itself was only seven years old. “All its institutions were under construction,” he says, “and someone had to build them.” He was lucky to be given this chance to share in the building of a country, and at the same time to achieve an intellectual insight into human nature. He understood that the failure of the old interview system was a special case of a general phenomenon that he called “the illusion of validity.” At this point, he says, “I had discovered my first cognitive illusion.”
Cognitive illusions are the main theme of his book. A cognitive illusion is a false belief that we intuitively accept as true. The illusion of validity is a false belief in the reliability of our own judgment. The interviewers sincerely believed that they could predict the performance of recruits after talking with them for fifteen minutes. Even after the interviewers had seen the statistical evidence that their belief was an illusion, they still could not help believing it. Kahneman confesses that he himself still experiences the illusion of validity, after fifty years of warning other people against it. He cannot escape the illusion that his own intuitive judgments are trustworthy.
An episode from my own past is curiously similar to Kahneman’s experience in the Israeli army. I was a statistician before I became a scientist. At the age of twenty I was doing statistical analysis of the operations of the British Bomber Command in World War II. The command was then seven years old, like the State of Israel in 1955. All its institutions were under construction. It consisted of six bomber groups that were evolving toward operational autonomy. Air Vice Marshal Sir Ralph Cochrane was the commander of 5 Group, the most independent and the most effective of the groups. Our bombers were then taking heavy losses, the main cause of loss being the German night fighters.
Cochrane said the bombers were too slow, and the reason they were too slow was that they carried heavy gun turrets that increased their aerodynamic drag and lowered their operational ceiling. Because the bombers flew at night, they were normally painted black. Being a flamboyant character, Cochrane announced that he would like to take a Lancaster bomber, rip out the gun turrets and all the associated dead weight, ground the two gunners, and paint the whole thing white. Then he would fly it over Germany, and fly so high and so fast that nobody could shoot him down. Our commander in chief did not approve of this suggestion, and the white Lancaster never flew.
The reason why our commander in chief was unwilling to rip out gun turrets, even on an experimental basis, was that he was blinded by the illusion of validity. This was ten years before Kahneman discovered it and gave it its name, but the illusion of validity was already doing its deadly work. All of us at Bomber Command shared the illusion. We saw every bomber crew as a tightly knit team of seven, with the gunners playing an essential role defending their comrades against fighter attack, while the pilot flew an irregular corkscrew to defend them against flak. An essential part of the illusion was the belief that the team learned by experience. As they became more skillful and more closely bonded, their chances of survival would improve.
When I was collecting the data in the spring of 1944, the chance of a crew reaching the end of a thirty-operation tour was about 25 percent. The illusion that experience would help them to survive was essential to their morale. After all, they could see in every squadron a few revered and experienced old-timer crews who had completed one tour and had volunteered to return for a second tour. It was obvious to everyone that the old-timers survived because they were more skillful. Nobody wanted to believe that the old-timers survived only because they were lucky.
At the time Cochrane made his suggestion of flying the white Lancaster, I had the job of examining the statistics of bomber losses. I did a careful analysis of the correlation between the experience of the crews and their loss rates, subdividing the data into many small packages so as to eliminate effects of weather and geography. My results were as conclusive as those of Kahneman. There was no effect of experience on loss rate. So far as I could tell, whether a crew lived or died was purely a matter of chance. Their belief in the life-saving effect of experience was an illusion.
The demonstration that experience had no effect on losses should have given powerful support to Cochrane’s idea of ripping out the gun turrets. But nothing of the kind happened. As Kahneman found out later, the illusion of validity does not disappear just because facts prove it to be false. Everyone at Bomber Command, from the commander in chief to the flying crews, continued to believe in the illusion. The crews continued to die, experienced and inexperienced alike, until Germany was overrun and the war finally ended.
Another theme of Kahneman’s book, proclaimed in the title, is the existence in our brains of two independent systems for organizing knowledge. Kahneman calls them System One and System Two. System One is amazingly fast, allowing us to recognize faces and understand speech in a fraction of a second. It must have evolved from the ancient little brains that allowed our agile mammalian ancestors to survive in a world of big reptilian predators. Survival in the jungle requires a brain that makes quick decisions based on limited information. Intuition is the name we give to judgments based on the quick action of System One. It makes judgments and takes action without waiting for our conscious awareness to catch up with it. The most remarkable fact about System One is that it has immediate access to a vast store of memories that it uses as a basis for judgment. The memories that are most accessible are those associated with strong emotions, with fear and pain and hatred. The resulting judgments are often wrong, but in the world of the jungle it is safer to be wrong and quick than to be right and slow.
System Two is the slow process of forming judgments based on conscious thinking and critical examination of evidence. It appraises the actions of System One. It gives us a chance to correct mistakes and revise opinions. It probably evolved more recently than System One, after our primate ancestors became arboreal and had the leisure to think things over. An ape in a tree is not so much concerned with predators as with the acquisition and defense of territory. System Two enables a family group to make plans and coordinate activities. After we became human, System Two enabled us to create art and culture.
If you've made it this far, read the rest of the review here. There is still some cool stuff after this.
Russ Roberts and Gary Taubes on confirmation bias [podcast]
Here is the link. The context is nutritional science and epidemiology, but confirmation bias is the primary theme pumping throughout the discussion. Gary Taubes has gained a reputation for contrarianism.* According to Taubes, the current nutritional paradigm (fat is bad, exercise is good, carbs are OK) does not deserve high credibility.
Roberts brings up the role of identity in perpetuating confirmation bias--a hypothesis has become part of you, so it has become that much harder to countenance contrary evidence. In this context they also talk about theism (Roberts is Jewish, while Taubes is an atheist). And, the program being EconTalk, Roberts draws analogies with economics.
*Sometime between 45 and 50 minutes in, Roberts points out that given this reputation, Taubes is susceptible to belief distortion as well:
What's your evidence that you are not just falling prey to the Ancel Keys and other folks who have made the same mistake?
I do not think Taubes gives a direct answer.
[link] I Was Wrong, and So Are You
An article in The Atlantic, linked by someone on the unofficial LW IRC channel, caught my eye. Nothing all that new for LessWrong readers, but it is still good to see any mention of such biases in mainstream media.
I Was Wrong, and So Are You
A libertarian economist retracts a swipe at the left—after discovering that our political leanings leave us more biased than we think.
...
You may have noticed that several of the statements we analyzed implicitly challenge positions held by the left, while none specifically challenges conservative or libertarian positions. A great deal of research shows that people are more likely to heed information that supports their prior positions, and discard or discount contrary information. Suppose that on some public issue, Anne favors position A, and Burt favors position B. Anne is more likely than Burt to agree with statements that support A, and to disagree with statements that support B, because doing so simplifies her case for favoring A. Otherwise, she would have to make a concession to the opposing side. Psychologists would count this tendency as a manifestation of “myside bias,” or “confirmation bias.”
Buturovic and I openly acknowledged that the set of eight statements was biased. But these were the statements we had available to us. And as we explained in the paper, some of them—including those on professional licensing, standard of living, monopoly, and trade—did not appear to fit neatly into a partisan debate. Yet even on those, respondents on the left fared worst. What’s more, in separate research, Buturovic found that the respondents themselves either had difficulty classifying some of the statements on an ideological scale, or simply believed those statements were not, prima facie, ideological. So while we thought the results were probably exaggerated because of the bias in the survey, we nonetheless felt that they were telling.
Buturovic and I largely refrained from replying to the criticism (much of which focused on myside bias) that followed publication of the article. Instead, we planned a second survey that would balance the first one by including questions that would challenge conservative and/or libertarian positions.
...
Buturovic began putting all 17 questions to a new group of respondents last December. I eagerly awaited the results, hoping that the conservatives and especially the libertarians (my side!) would exhibit less myside bias. Buturovic was more detached. She e-mailed me the results, and commented that conservatives and libertarians did not do well on the new questions. After a hard look, I realized that they had bombed on the questions that challenged their position. A full tabulation of all 17 questions showed that no group clearly out-stupids the others. They appear about equally stupid when faced with proper challenges to their position.
Writing up these results was, for me, a gloomy task—I expected critics to gloat and point fingers. In May, we published another paper in Econ Journal Watch, saying in the title that the new results “Vitiate Prior Evidence of the Left Being Worse.” More than 30 percent of my libertarian compatriots (and more than 40 percent of conservatives), for instance, disagreed with the statement “A dollar means more to a poor person than it does to a rich person”—c’mon, people!—versus just 4 percent among progressives. Seventy-eight percent of libertarians believed gun-control laws fail to reduce people’s access to guns. Overall, on the nine new items, the respondents on the left did much better than the conservatives and libertarians. Some of the new questions challenge (or falsely reassure) conservative and not libertarian positions, and vice versa. Consistently, the more a statement challenged a group’s position, the worse the group did.
The reaction to the new paper was quieter than I expected. Jonathan Chait, who had knocked the first paper, wrote a forgiving notice on his New Republic blog: “Insult Retractions: A (Very) Occasional Feature.” Matthew Yglesias, writing at ThinkProgress, summed up the takeaway: “Basically, there’s a lot of confirmation bias out there.” Nothing illustrates that point better than my confidence in the claims of the first paper, especially as distilled in my Wall Street Journal op-ed.
Shouldn’t a college professor have known better?
I break here to comment that I don't see why we would expect this to be so given the reality of academia.
Perhaps. But adjusting for bias and groupthink is not so easy, as indicated by one of the major conclusions developed by Buturovic and sustained in our joint papers. Education had very little impact on responses, we found; survey respondents who’d gone to college did only slightly less badly than those who hadn’t. Among members of less-educated groups, brighter people tend to respond more frequently to online surveys, so it’s likely that our sample of non-college-educated respondents is more enlightened than the larger group they represent. Still, the fact that a college education showed almost no effect—at least for those inclined to take such a survey—strongly suggests that the classroom is no great corrective for myside bias. At least when it comes to public-policy issues, the corrective value of professional academic experience might be doubted as well.
Discourse affords some opportunity to challenge the judgments of others and to revise our own. Yet inevitably, somewhere in the process, we place what faith we have.
Thinking Statistically [ebook]
Uri Bram, a recent Princeton graduate, has just published an ebook called Thinking Statistically. The book is aimed at conveying a few important statistical concepts (selection bias, endogeneity and correlation vs. causation, Bayes' theorem and base rate neglect) to a general audience. The official product description:
This book will show you how to think like a statistician, without worrying about formal statistical techniques. Along the way we'll see why supposed Casanovas might actually be examples of the Base Rate Fallacy; how to use Bayes' Theorem to assess whether your partner is cheating on you; and why you should never use Mark Zuckerberg as an example for anything. See the world in a whole new light, and make better decisions and judgements without ever going near a t-test. Think. Think Statistically.
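As a taste of the base-rate point, the cheating-partner example comes down to the standard Bayes' theorem calculation - all the probabilities below are invented for illustration:

```python
# The standard Bayes' theorem / base-rate calculation:
#   P(H|E) = P(E|H) * P(H) / P(E)
# All probabilities here are invented for illustration.
prior = 0.01            # base rate: 1% of partners cheat (made-up figure)
p_e_given_h = 0.80      # P(suspicious behaviour | cheating)
p_e_given_not_h = 0.10  # P(suspicious behaviour | faithful)

p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
posterior = p_e_given_h * prior / p_e

print(round(posterior, 3))  # ~0.075: the low base rate keeps the posterior low
```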
Less Wrong members will be familiar with these topics, but we should keep this book in mind as a convenient method of getting friends, relatives, acquaintances, and others interested in understanding rationality.
Eliezer's An Intuitive Explanation of Bayes' Theorem gets a shout-out in the Recommended Reading at the end.
[LINK] Loss of local knowledge affecting intellectual trends
A recent entry from the West Hunter blog (written by Gregory Cochran and Henry Harpending, with whom most LWers are probably already familiar) caught my eye:
People who grow up in a small town, or an old and stable neighborhood, often know their neighbors. More than that, they know pretty much everything that’s happened for the past couple of generations, whether they want to or not. For many Americans, probably most, this isn’t the case. Mobility breeds anonymity. Suburban kids haven’t necessarily been hanging out with the same peers since kindergarten, and even if they have, they probably don’t know much about their friends’ sibs and parents.
If you do have that thick local knowledge, significant trait heritability is fairly obvious. You notice that the valedictorians cluster in a few families, and you also know that those families don’t need to put their kids under high pressure to get those results. They’re just smart. Some are smart but too rebellious to play the game – and that runs in families too. For that matter, you know that those family similarities, although real and noticeable, are far from absolute. You see a lot of variation within a family.
If you don’t have it, it’s easier to believe that cognitive or personality traits are generated by environmental influences – how your family toilet trained you, whether they sent you to a prep school, etc. Easier to believe, but false.
So it isn’t all that difficult to teach quantitative genetics to someone with that background. They already know it, more or less. Possession of this kind of knowledge must have been the norm in the human past. I’m sure that Bushmen have it.
The loss of this knowledge must have significant consequences, not just susceptibility to nurturist dogma. In the typical ancestral situation, you knew a lot about the relatives of all potential mates. Today, you might meet someone in college and know nothing about her family history. In particular, you might not be aware that schizophrenia runs in her family. You can’t weigh what you don’t know. In modern circumstances, I suspect that the reproductive success of people with a fair-sized dose of alleles that predispose to schiz has gone up – with the net consequence that selection is less effective at eliminating such alleles. The modern welfare state has probably had more impact, though. In the days of old, kids were likely to die if a parent flaked out. Today that does not happen.
Seems quite coherent. It meshes well with findings that the more children parents have, the less they subscribe to nurture, since they finally, possibly for the first time ever, get some hands-on experience with the nurture-versus-nature issue (nurture as in stuff like upbringing, not nurture as in lead paint). Note that today urban, educated, highly intelligent people are less likely to have children than possibly ever before; how is this likely to affect intellectual fashions?
Perhaps somewhat related to this is the transition over the past 150 years (the time frame depending on where exactly you live) from agricultural communities, which often raised livestock, to urban living. Intuitions about what "variation" and "heredity" mean have thus lost another source, with no clear replacement.
Life is Good, More Life is Better
Let it be noted, as an aside, that this is my first post on Less Wrong and my first attempt at original, non-mandatory writing for over a year.
I've been reading through the original sequences over the last few months as part of an attempt to get my mind into working order. (Other parts of this attempt include participating in Intro to AI and keeping a notebook.) The realization that spurred me to attempt this: I don't feel that living is good. The distinction which seemed terribly important to me at the time was that I didn't feel that death was bad, which is clearly not sensible. I don't have the resources to feel the pain of one death 155,000 times every day, which is why Torture v. Dust Specks is a nonsensical question to me and why I don't have a cached response for how to act on the knowledge of all those deaths.
The first time I read Torture v. Dust Specks, I started really thinking about why I bother trying to be rational. What's the point, if I still have to make nonsensical, kitschy statements like "Well, my brain thinks X but my heart feels Y," if I would not reflexively flip the switch and may even choose not to, and if I sometimes feel that a viable solution to overpopulation is more deaths?
I solved the lattermost with extraterrestrial settlement, but it's still, well, sketchy. My mind is clearly full of some pretty creepy thoughts, and rationality doesn't seem to be helping. I think about having that feeling and go eeugh, but the feelings are still there. So I pose the question: what does a person do to click that death is really, really bad?
The primary arguments I've heard for death are:
- "I look forward to the experience of shutting down and fading away," which I hope could be easily disillusioned by gaining knowledge about how truly undignified dying is, bloody romanticists.
- "There is something better after life and I'm excited for it," which, well... let me rephrase: please do not turn this into a discussion on ways to disillusion theists because it's really been talked about before.
- "It is Against Nature/God's Will/The Force to live forever. Nature/God/the Force is going to get humankind if we try for immortality. I like my liver!" This argument is so closely related to the previous and the next one that I don't know quite how to respond to it, other than that I've seen it crop up in historical accounts of any big change. Human beings tend to be really frightened of change, especially change which isn't believed to be supernatural in origin.
- "I've read science fiction stories about being immortal, and in those stories immortality gets really boring, really fast. I'm not interested enough in reality to be in it forever." I can't see where this perspective could come from other than mind-numbing ignorance/the unimaginable nature of really big things (like the number of languages on Earth, the amount of things we still don't know about physics or the fact that every person who is or ever will be is a new, interesting being to interact with.)
- "I can't imagine being immortal. My idea about how my life will go is that I will watch my children grow old, but I will die before they do. My mind/human minds aren't meant to exist for longer than one generation." This fails to account for human minds being very, very flexible. The human mind as we know it now does eventually get tired of life (or at least tired of pain,) but this is not a testament to how minds are, any more than humans becoming distressed when they don't eat is a testament to it being natural to starve, become despondent and die.
- "The world is overpopulated and if nobody dies, we will overrun and ultimately ruin the planet." First of all: I, like Dr. Ian Malcolm, think that it is incredibly vain to believe that man can destroy the Earth. Second of all: in the future we may have anything from extraterrestrial habitation to substrates which take up space and consume material in totally different ways. But! Clearly, I am not feeling these arguments, because this argument makes sense to me. Problematic!
I think that overall, the fear most people have about signing up for cryonics/AI/living forever is that they do not understand it. This is probably true for me; it's probably why I don't grok that life is good, always. Moreover, it is probable that the depictions of death as not always bad with which I sympathize (e.g. 'Lord, what can the harvest hope for, if not for the care of the Reaper Man?') stem from the fact that death was previously held to be absolute. That is, up until the last ~30 years, people have not been having cogent, non-hypothetical thoughts about how it might be possible to not die or what that might be like. Dying has always been a Big Bad but an inescapable one, and the human race has a bad case of Stockholm Syndrome.
So: now that I know what I have and what I want, how do I use the former to get the latter?
Interesting article about optimism
According to this brain-imaging study, volunteers presented with negative scenarios (e.g. car crashes, cancer) and asked to estimate the probability of these scenarios happening to them would only update their beliefs if the actual rate of occurrence in the population, given to them afterwards, was lower, i.e. more optimistic, than what they had guessed. The more "optimistic" the subjects were according to a personality test, the less likely they were to update their beliefs based on more negative information, and the less activity they showed in their frontal lobes, indicating that they weren't "paying attention" to the new information.
Sounds like confirmation bias, except that interestingly enough, it's unidirectional in this case. I wonder if very pessimistic people would have the opposite bias, only updating their estimate if the actual probability was higher, or more negative.
Link to article on kurzweilai.
Link to abstract in Nature journal. I can't access the full text.
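For intuition, here is a minimal sketch of that kind of asymmetric updating (my own toy model; the update rule and all numbers are invented, not taken from the study):

```python
# Toy model of "optimistic" updating (illustrative only): the agent
# moves its risk estimate toward the reported base rate with a large
# step for good news and a small step for bad news.

def update(estimate, base_rate, lr_good=0.8, lr_bad=0.1):
    lr = lr_good if base_rate < estimate else lr_bad
    return estimate + lr * (base_rate - estimate)

guess = 0.40                # initial guess at, say, P(car crash)
print(update(guess, 0.20))  # better-than-expected news -> 0.24 (big shift)
print(update(guess, 0.60))  # worse-than-expected news  -> 0.42 (barely moves)
```

A genuinely pessimistic bias, if it exists, would amount to swapping the two learning rates.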
Weak supporting evidence can undermine belief
Article: Weak supporting evidence can undermine belief in an outcome
Defying logic, people given weak evidence can regard predictions supported by that evidence as less likely than if they aren’t given the evidence at all.
...
Consider the following statement: “Widespread use of hybrid and electric cars could reduce worldwide carbon emissions. One bill that has passed the Senate provides a $250 tax credit for purchasing a hybrid or electric car. How likely is it that at least one-fifth of the U.S. car fleet will be hybrid or electric in 2025?”
That middle sentence is the weak evidence. People presented with the entire statement — or similar statements with the same three-sentence structure but on different topics — gave lower answers to the final question than people who read the statement without the middle sentence. They did so even though other people who saw the middle sentence in isolation rated it as positive evidence for, in this case, higher adoption of hybrid and electric cars.
Paper: When good evidence goes bad: The weak evidence effect in judgment and decision-making
Abstract:
An indispensable principle of rational thought is that positive evidence should increase belief. In this paper, we demonstrate that people routinely violate this principle when predicting an outcome from a weak cause. In Experiment 1 participants given weak positive evidence judged outcomes of public policy initiatives to be less likely than participants given no evidence, even though the evidence was separately judged to be supportive. Experiment 2 ruled out a pragmatic explanation of the result, that the weak evidence implies the absence of stronger evidence. In Experiment 3, weak positive evidence made people less likely to gamble on the outcome of the 2010 United States mid-term Congressional election. Experiments 4 and 5 replicated these findings with everyday causal scenarios. We argue that this “weak evidence effect” arises because people focus disproportionately on the mentioned weak cause and fail to think about alternative causes.
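For reference, the normative baseline the effect violates can be shown with a quick Bayes calculation (all numbers invented for illustration):

```python
# Bayes check (hypothetical numbers): any evidence that is more likely
# under the outcome than under its absence must raise the outcome's
# probability -- even when the likelihood ratio is barely above 1.

prior = 0.10            # P(one-fifth of the fleet is hybrid/electric by 2025)
p_e_given_h = 0.30      # P(tax-credit bill passes | high adoption)
p_e_given_not_h = 0.25  # P(tax-credit bill passes | low adoption): weak evidence

posterior = (p_e_given_h * prior) / (
    p_e_given_h * prior + p_e_given_not_h * (1 - prior)
)
print(posterior)        # ~0.118 > 0.10: the estimate should rise, not fall
```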
Cognitive Style Tends To Predict Religious Conviction (psychcentral.com)
Participants who gave intuitive answers to all three problems [that required reflective thinking rather than intuitive] were one and a half times as likely to report they were convinced of God’s existence as those who answered all of the questions correctly.
Importantly, researchers discovered the association between thinking styles and religious beliefs was not tied to the participants’ thinking ability or IQ.
Participants who wrote about a successful intuitive experience were more likely to report they were convinced of God’s existence than those who wrote about a successful reflective experience.
I think this is the source but I can't be sure:
http://www.apa.org/pubs/journals/releases/xge-ofp-shenhav.pdf
'Mapping Biases to the Components of Rationalistic and Naturalistic Decision Making'
In a recent paper, Lisa Rehak and others map a few common cognitive biases to the particular decision-making processes in which they are likely to arise. Abstract:
People often create and use shortcuts or “rules of thumb” to make decisions. The majority of the time, reliance on these heuristics helps us to perform efficiently and effectively. Yet this reliance can also promote bias, or systematic error. Our review of the literature suggests that both rationalistic and naturalistic decision-making approaches are likely to be subject to a range of biases. Unfortunately, the available literature provides very little discussion of which aspects of each of these processes biases are likely to impact. In the absence of this discussion, we have attempted to combine our knowledge of the bias literature and the decision-making literature to explore what biases are likely to impact various components of each decision-making process.
The paper covers the following biases: availability, representativeness, anchoring & adjustment, confirmation, hindsight, overconfidence, framing, and affect.
Tables from their paper: [images not reproduced here]
An attempt to 'explain away' virtue ethics
Recently I summarized Joshua Greene's attempt to 'explain away' deontological ethics by revealing the cognitive algorithms that generate deontological judgments and showing that the causes of our deontological judgments are inconsistent with normative principles we would endorse.
Mark Alfano has recently done the same thing with virtue ethics (which generally requires a fairly robust theory of character trait possession) in his March 2011 article on the topic:
I discuss the attribution errors, which are peculiar to our folk intuitions about traits. Next, I turn to the input heuristics and biases, which — though they apply more broadly than just to reasoning about traits — entail further errors in our judgments about trait-possession. After that, I discuss the processing heuristics and biases, which again apply more broadly than the attribution errors but are nevertheless relevant to intuitions about traits... I explain what the biases are, cite the relevant authorities, and draw inferences from them in order to show their relevance to the dialectic about virtue ethics. At the end of the article, I evaluate knowledge-claims about virtues in light of these attribution biases, input heuristics and biases, and processing heuristics and biases. Every widely accepted theory of knowledge must reject such knowledge-claims when they are based merely on folk intuitions.
An overview of the 'situationist' attack on character trait possession can be found in Doris' book Lack of Character.
Pressure to publish increases scientists' vulnerability to positive bias
More evidence for this hypothesis:
The growing competition and “publish or perish” culture in academia might conflict with the objectivity and integrity of research, because it forces scientists to produce “publishable” results at all costs. Papers are less likely to be published and to be cited if they report “negative” results (results that fail to support the tested hypothesis). Therefore, if publication pressures increase scientific bias, the frequency of “positive” results in the literature should be higher in the more competitive and “productive” academic environments. This study verified this hypothesis by measuring the frequency of positive results in a large random sample of papers with a corresponding author based in the US. Across all disciplines, papers were more likely to support a tested hypothesis if their corresponding authors were working in states that, according to NSF data, produced more academic papers per capita. The size of this effect increased when controlling for states’ per capita R&D expenditure and for study characteristics that previous research showed to correlate with the frequency of positive results, including discipline and methodology. Although the confounding effect of institutions' prestige could not be excluded (researchers in the more productive universities could be the most clever and successful in their experiments), these results support the hypothesis that competitive academic environments increase not only scientists' productivity but also their bias. The same phenomenon might be observed in other countries where academic competition and pressures to publish are high.
Fanelli (2010). Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. PLoS ONE 5(4): e10271.
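A toy simulation (my own illustration, not from the paper; all rates are invented) shows how selective publication alone can inflate the observed share of positive results as pressure rises:

```python
import random

# Toy publish-or-perish model (illustrative only): hypotheses are true
# with some base rate, studies detect effects imperfectly, and negative
# results are shelved more often as publication pressure rises.

def observed_positive_share(pressure, n=100_000, base_rate=0.2,
                            power=0.8, false_pos=0.05):
    pos = neg = 0
    for _ in range(n):
        true_effect = random.random() < base_rate
        positive = random.random() < (power if true_effect else false_pos)
        p_publish = 1.0 if positive else 1.0 - pressure  # negatives shelved
        if random.random() < p_publish:
            if positive:
                pos += 1
            else:
                neg += 1
    return pos / (pos + neg)

random.seed(0)
print(observed_positive_share(pressure=0.2))  # low-pressure state:  ~0.24
print(observed_positive_share(pressure=0.6))  # high-pressure state: ~0.38
```

The underlying science is identical in both conditions; only the file drawer differs, which is one mechanism consistent with the state-level pattern Fanelli reports.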
Journal article about politics and mindkilling
I just found a link to a paper written in 2003 by Geoffrey L. Cohen of Yale University.
"Party over Policy: The Dominating Impact of Group Influence on Political Beliefs"
Abstract:
Four studies demonstrated both the power of group influence in persuasion and people’s blindness to it. Even under conditions of effortful processing, attitudes toward a social policy depended almost exclusively upon the stated position of one’s political party. This effect overwhelmed the impact of both the policy’s objective content and participants’ ideological beliefs (Studies 1–3), and it was driven by a shift in the assumed factual qualities of the policy and in its perceived moral connotations (Study 4). Nevertheless, participants denied having been influenced by their political group, although they believed that other individuals, especially their ideological adversaries, would be so influenced. The underappreciated role of social identity in persuasion is discussed.
That's written in journal-ese, so I'll post a translation from the article I found that contained the link:
My favorite study (pdf) in this space was by Yale’s Geoffrey Cohen. He had a control group of liberals and conservatives look at a generous welfare reform proposal and a harsh welfare reform proposal. As expected, liberals preferred the generous plan and conservatives favored the more stringent option. Then he had another group of liberals and conservatives look at the same plans, but this time, the plans were associated with parties.
Both liberals and conservatives followed their parties, even when their parties disagreed with their preferences. So when Democrats were said to favor the stringent welfare reform, for example, liberals went right along. Three scary sentences from the piece: “When reference group information was available, participants gave no weight to objective policy content, and instead assumed the position of their group as their own. This effect was as strong among people who were knowledgeable about welfare as it was among people who were not. Finally, participants persisted in the belief that they had formed their attitude autonomously even in the two group information conditions where they had not.”
Also, the final study conducted had subjects write editorials either in support of or against a single policy proposal. The differences in how people responded in the "no group information" condition and the "my political party supports / opposes" conditions are also illuminating...
Overcoming bias in others
Say that you are observing someone in a position of power. You have good reason to believe that this person is falling prey to a known cognitive bias, and that this will tend to affect you negatively. You also can tell that the person is more than intelligent enough to understand their mistake, if they were motivated to do so. You have an opportunity to say one thing to the person - around 500 words of argument. They will initially perceive you as a low-status member of their own tribe. The power differential is extreme enough that, after they have attended to this one thing, they will never pay any attention to you again. What can you do to best disrupt their bias?
This is clearly a setup where the odds are against you. Still, what kind of strategies would give you the best odds? I've deliberately made the situation vague, so as to emphasize abstract strategies. If certain strategies would work best against certain biases or personality types, feel free to state it in your answer.
I'm making this a post of its own because I find much more discussion here of how to overcome or subvert your own biases, somewhat less on how to recruit rationalists, and almost none on how to overcome a specific bias in another person without necessarily converting them into a committed rationalist overall.
[LINK] Reverse priming effect from awareness of persuasion attempt?
Recently came across this blog post on Language Log, summarizing this recent paper by Laran et al. Super-short version: when people are aware that a slogan is trying to persuade them, they show reverse-priming effects, avoiding doing what the slogan suggests. However, if their attention is drawn away from the fact that it is trying to persuade them, the usual priming effects appear.
The last point reminded me of speculation from the recent LessWrong article Conspiracy Theories as Agency Fictions:
Before thinking about these points and debating them, I strongly recommend you read the full article.