
"Flinching away from truth” is often about *protecting* the epistemology

60 AnnaSalamon 20 December 2016 06:39PM

Related to: Leave a line of retreat; Categorizing has consequences.

There’s a story I like, about this little kid who wants to be a writer.  So she writes a story and shows it to her teacher.  

“You misspelt the word ‘ocean’”, says the teacher.  

“No I didn’t!”, says the kid.  

The teacher looks a bit apologetic, but persists:  “‘Ocean’ is spelt with a ‘c’ rather than an ‘sh’; this makes sense, because the ‘e’ after the ‘c’ changes its sound…”  

No I didn’t!” interrupts the kid.  

“Look,” says the teacher, “I get it that it hurts to notice mistakes.  But that which can be destroyed by the truth should be!  You did, in fact, misspell the word ‘ocean’.”  

“I did not!” says the kid, whereupon she bursts into tears, and runs away and hides in the closet, repeating again and again: “I did not misspell the word!  I can too be a writer!”.

continue reading »

Bayes for Schizophrenics: Reasoning in Delusional Disorders

88 Yvain 13 August 2012 07:22PM

Related to: The Apologist and the Revolutionary, Dreams with Damaged Priors

Several years ago, I posted about V.S. Ramachandran's 1996 theory explaining anosognosia through an "apologist" and a "revolutionary".

Anosognosia, a condition in which extremely sick patients mysteriously deny their sickness, occurs during right-sided brain injury but not left-sided brain injury. It can be extraordinarily strange: for example, in one case, a woman whose left arm was paralyzed insisted she could move her left arm just fine, and when her doctor pointed out her immobile arm, she claimed that was her daughter's arm even though it was obviously attached to her own shoulder. Anosognosia can be temporarily alleviated by squirting cold water into the patient's left ear canal, after which the patient suddenly realizes her condition but later loses awareness again and reverts back to the bizarre excuses and confabulations.

Ramachandran suggested that the left brain is an "apologist", trying to justify existing theories, and the right brain is a "revolutionary" which changes existing theories when conditions warrant. If the right brain is damaged, patients are unable to change their beliefs; so when a patient's arm works fine until a right-brain stroke, the patient cannot discard the hypothesis that their arm is functional, and can only use the left brain to try to fit the facts to their belief.

In the almost twenty years since Ramachandran's theory was published, new research has kept some of the general outline while changing many of the specifics in the hopes of explaining a wider range of delusions in neurological and psychiatric patients. The newer model acknowledges the left-brain/right-brain divide, but adds some new twists based on the Mind Projection Fallacy and the brain as a Bayesian reasoner.
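
A minimal sketch of the textbook Bayesian update that the post's framing alludes to (this is not the model from the newer research, and the numbers are made up): when the prior on a hypothesis is pathologically strong, even clear disconfirming evidence barely moves the posterior, loosely mirroring the anosognosia patient who cannot discard "my arm works".

    # A toy Bayes update, not the model from the post: illustrative numbers only.
    # H = "my arm works"; E = "I just watched my arm fail to move".

    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """P(H | E) by Bayes' rule."""
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1 - prior_h)
        return p_e_given_h * prior_h / p_e

    # A strong but revisable prior: one clear failure drags belief down a lot.
    print(posterior(prior_h=0.99, p_e_given_h=0.01, p_e_given_not_h=0.95))      # ~0.51

    # A pathologically certain prior (loosely, a disabled "revolutionary"):
    # the same evidence leaves the belief almost untouched.
    print(posterior(prior_h=0.999999, p_e_given_h=0.01, p_e_given_not_h=0.95))  # ~0.9999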

continue reading »

When None Dare Urge Restraint, pt. 2

56 Jay_Schweikert 30 May 2012 03:28PM

In the original When None Dare Urge Restraint post, Eliezer discusses the dangers of the "spiral of hate" that can develop when saying negative things about the Hated Enemy trumps saying accurate things. Specifically, he uses the example of how the 9/11 hijackers were widely criticized as "cowards," even though this vice in particular was surely not on their list. Over this past Memorial Day weekend, however, it seems like the exact mirror-image problem played out in nearly textbook form.

The trouble began when MSNBC host Chris Hayes noted* that he was uncomfortable with how people use the word "hero" to describe those who die in war -- in particular, because he thinks this sort of automatic valor attributed to the war dead makes it easier to justify future wars. And as you might expect, people went crazy in response, calling Hayes's comments "reprehensible and disgusting," something that "commie grad students would say," and that old chestnut, apparently offered without a hint of irony, "unAmerican." If you watch the video, you can tell that Hayes himself is really struggling to make the point, and by the end he definitely knew he was going to get in trouble, as he started backpedaling with a "but maybe I'm wrong about that." And of course, he apologized the very next day, basically stating that it was improper to have "opine[d] about the people who fight our wars, having never dodged a bullet or guarded a post or walked a mile in their boots."

This whole episode struck me as particularly frightening, mostly because Hayes wasn't even offering a criticism. Soldiers in the American military are, of course, an untouchable target, and I would hardly expect any attack on soldiers to be well received, no matter how grounded. But what genuinely surprised me in this case was that Hayes was merely saying "let's not automatically apply the single most valorizing word we have, because that might cause future wars, and thus future war deaths." But apparently anything less than maximum praise was not only incorrect, but offensive.

Of course, there's no shortage of rationality failures in political discourse, and I'm obviously not intending this post as a political statement about any particular war, policy, candidate, etc. But I think this example is worth mentioning, for two main reasons. First, it's just such a textbook example of the exact sort of problem discussed in Eliezer's original post, in a purer form than I can recall seeing since 9/11 itself. I don't imagine many LW members need convincing in this regard, but I do think there's value in being mindful of this sort of problem on the national stage, even if we're not going to start arguing politics ourselves.

But second, I think this episode says something not just about nationalism, but about how people approach death more generally. Of course, we're all familiar with afterlifism/"they're-in-a-better-place"-style rationalizations of death, but labeling a death as "heroic" can be a similar sort of rationalization. If a death is "heroic," then there's at least some kind of silver lining, some sense of justification, if only partial justification. The movie might not be happy, but it can still go on, and there's at least a chance to play inspiring music. So there's an obvious temptation to label death as "heroic" as much as possible -- I'm reminded of how people tried to call the 9/11 victims "heroes," apparently because they had the great courage to work in buildings that were targeted in a terrorist attack.

If a death is just a tragedy, however, you're left with a more painful situation. You have to acknowledge that yes, really, the world isn't fair, and yes, really, thousands of people -- even the Good Guy's soldiers! -- might be dying for no good reason at all. And even for those who don't really believe in an afterlife, facing death on such a large scale without the "heroic" modifier might just be too painful. The obvious problem, of course -- and Hayes's original point -- is that this sort of death-anesthetic makes it all too easy to numb yourself to more death. If you really care about the problem, you have to face the sheer tragedy of it. Sometimes, all you can say is "we shall have to work faster." And I think that lesson's as appropriate on Memorial Day as any other.

*I apologize that this clip is inserted into a rather low-brow attack video. At the time of posting, it was the only link on YouTube I could find, and I wanted something accessible.

SotW: Avoid Motivated Cognition

20 Eliezer_Yudkowsky 28 May 2012 03:57PM

(The Exercise Prize series of posts is the Center for Applied Rationality asking for help inventing exercises that can teach cognitive skills.  The difficulty is coming up with exercises interesting enough, with a high enough hedonic return, that people actually do them and remember them; this often involves standing up and performing actions, or interacting with other people, not just working alone with an exercise booklet and a pencil.  We offer prizes of $50 for any suggestion we decide to test, and $500 for any suggestion we decide to adopt.  This prize also extends to LW meetup activities and good ideas for verifying that a skill has been acquired.  See here for details.)


The following awards have been made:  $550 to Palladias, $550 to Stefie_K, $50 to lincolnquirk, and $50 to John_Maxwell_IV.  See the bottom for details.  If you've earned a prize, please PM StephenCole to claim it.  (If you strongly believe that one of your suggestions Really Would Have Worked, consider trying it at your local Less Wrong meetup.  If it works there, send us some participant comments; this may make us update enough to test it.)


Lucy and Marvin are walking down the street one day, when they pass a shop showing a large chocolate cake in the window.

"Hm," says Lucy, "I think I'll buy and eat that chocolate cake."

"What, the whole thing?" says Marvin.  "Now?"

"Yes," says Lucy, "I want to support the sugar industry."

There is a slight pause.

"I don't suppose that your liking chocolate cake has anything to do with your decision?" says Marvin.

"Well," says Lucy, "I suppose it could have played a role in suggesting that I eat a whole chocolate cake, but the reason why I decided to do it was to support the sugar industry.  Lots of people have jobs in the sugar industry, and they've been having some trouble lately."


Motivated cognition is the way (all? most?) brains generate false landscapes of justification in the presence of attachments and flinches.  It's not enough for the human brain to attach to the sunk cost of a PhD program, so that we are impelled in our actions to stay - no, that attachment can also go off and spin a justificational landscape to convince the other parts of ourselves, even the part that knows about consequentialism and the sunk cost fallacy, to stay in the PhD program.

We're almost certain that the subject matter of "motivated cognition" isn't a single unit, probably more like 3 or 8 units.  We're also highly uncertain of where to start teaching it.  Where we start will probably end up being determined by where we get the best suggestions for exercises that can teach it - i.e., end up being determined by what we (the community) can figure out how to teach well.

The cognitive patterns that we use to actually combat motivated cognition seem to break out along the following lines:

  1. Our conceptual understanding of 'motivated cognition', and why it's defective as a cognitive algorithm - the "Bottom Line" insight.
  2. Ways to reduce the strength of the rationalization impulse, or restore truth-seeking in the presence of motivation: e.g., Anna's "Become Curious" technique.
  3. Noticing the internal attachment or internal flinch, so that you can invoke the other skills; realizing when you're in a situation that makes you liable to rationalize.
  4. Realigning the internal parts that are trying to persuade each other: belief-alief or goal-urge reconciliation procedures.

And also:

  • Pattern recognition of the many styles of warped justification landscape that rationalization creates - being able to recognize "motivated skepticism" or "rehearsing the evidence" or "motivated uncertainty".
  • Specific counters to rationalization styles, like "Set betting odds" as a counter to motivated uncertainty.

Exercises to teach all of these are desired, but I'm setting apart the Rationalization Patterns into a separate SotW, since there are so many that I'm worried 1-4 won't get fair treatment otherwise.  This SotW will focus on items 1-3 above; #4 seems like more of a separate unit.

continue reading »

Are Deontological Moral Judgments Rationalizations?

37 lukeprog 16 August 2011 04:40PM

In 2007, Chris Matthews of Hardball interviewed David O'Steen, executive director of a pro-life organization. Matthews asked:

I have always wondered something about the pro-life movement. If you believe that killing [a fetus] is murder, why don't you bring murder charges or seek a murder penalty against a woman who has an abortion? Why do you let her off, if you really believe it's murder?1

O'Steen replied that "we have never sought criminal penalties against a woman," which isn't an answer so much as a restatement of the fact that prompted the question. When pressed, he added that we don't know "how she's been forced into this." When pressed again, O'Steen abandoned these responses and tried to give a consequentialist answer. He claimed that implementing "civil penalties" and taking away the "financial incentives" of abortion doctors would more successfully "protect unborn children."

But this still doesn't answer the question. If you believe that killing a fetus is murder, then a woman seeking an abortion pays a doctor to commit murder. Why don't abortion opponents want to change the laws so that abortion is considered murder and a woman who has an abortion can be charged with paying a doctor to commit murder? Psychologist Robert Kurzban cites this as a classic case of moral rationalization.2

Pro-life demonstrators in Illinois were asked a similar question: "If [abortion] was illegal, should there be a penalty for the women who get abortions illegally?" None of them (on the video) thought that women who had illegal abortions should be punished as murderers, an ample demonstration of moral rationalization. And I'm sure we can all think of examples where it looks like someone has settled on an intuitive moral judgment and then invented rationalizations later.3

More controversially, some have suggested that rule-based deontological moral judgments generally tend to be rationalizations. Perhaps we can even dissolve the debate between deontological intuitions and utilitarian intuitions if we can map the cognitive algorithms that produce them.

Long-time deontologists and utilitarians may already be up in arms to fight another war between Blues and Greens, but these are empirical questions. What do the scientific studies suggest?

continue reading »

How to enjoy being wrong

20 lincolnquirk 27 July 2011 05:48AM

Related to: Reasoning Isn't About Logic, It's About Arguing; It is OK to Publicly Make a Mistake and Change Your Mind.

Examples of being wrong

A year ago, in arguments or in thought, I would often:

  • avoid criticizing my own thought processes or decisions when discussing why my startup failed
  • overstate my expertise on a topic (how to design a program written in assembly language), then have to quickly justify a position and defend it based on limited knowledge and cached thoughts, rather than admitting "I don't know"
  • defend a position (whether doing an MBA is worthwhile) based on the "common wisdom" of a group I identify with, without any actual knowledge, or having thought through it at all
  • defend a position (whether a piece of artwork was good or bad) because of a desire for internal consistency (I argued it was good once, so felt I had to justify that position)
  • defend a political or philosophical position (libertarianism) which seemed attractive, based on cached thoughts or cached selves rather than actual reasoning
  • defend a position ("cashiers like it when I fish for coins to make a round amount of change"), hear a very convincing argument for its opposite ("it takes up their time, other customers are waiting, and they're better at making change than you"), but continue arguing for the original position. In this scenario, I actually updated -- thereafter, I didn't fish for coins in my wallet anymore -- but still didn't admit it in the original argument.
  • defend a policy ("I should avoid albacore tuna") even when the basis for that policy (mercury risk) has been countered by factual evidence (in this case, the amount of mercury per can is so small that you would need 10 cans per week to start reading on the scale).
  • provide evidence for a proposition ("I am getting better at poker") where I actually thought it was just luck, but wanted to believe the proposition
  • when someone asked "why did you [do a weird action]?", I would regularly attempt to justify the action in terms of reasons that "made logical sense", rather than admitting that I didn't know why I made a choice, or examining myself to find out why.

Now, I very rarely get into these sorts of situations. If I do, I state out loud: "Oh, I'm rationalizing," or perhaps "You're right," abort that line of thinking, and retreat to analyzing reasons why I emitted such a wrong statement.

We rationalize because we don't like admitting we're wrong. (Is this obvious? Do I need to cite it?) One possible evo-psych explanation: rationalization is an adaptation which improved fitness by making it easier for tribal humans to win others over to their point of view.

Over the last year, I've self-modified to mostly not mind being wrong, and in some cases even enjoy being wrong. I still often start to rationalize, and in some cases get partway through the thought, before noticing the opportunity to correct the error. But when I notice that opportunity, I take it, and get a flood of positive feedback and self-satisfaction as I update my models.

continue reading »

The Bias You Didn't Expect

92 Psychohistorian 14 April 2011 04:20PM

There are few places where society values rational, objective decision making as much as it values it in judges. While there is a rather cynical discipline called legal realism that says the law is really based on quirks of individual psychology, "what the judge had for breakfast," there's a broad social belief that the decisions of judges are unbiased. And where they aren't unbiased, they're biased for Big, Important, Bad reasons, like racism or classism or politics.

It turns out that legal realism is totally wrong. It's not what the judge had for breakfast. It's how recently the judge had breakfast. A new study (media coverage) on Israeli judges shows that, when making parole decisions, they grant about 65% of requests just after meal breaks, with the rate falling almost all the way to 0% right before breaks and at the end of the day (i.e., as far from the last break as possible). There's a relatively linear decline between the two points.

continue reading »

Epistemic Luck

74 Alicorn 08 February 2010 12:02AM

Who we learn from and with can profoundly influence our beliefs. There's no obvious way to compensate.  Is it time to panic?

During one of my epistemology classes, my professor admitted (I can't recall the context) that his opinions on the topic would probably be different had he attended a different graduate school.

What a peculiar thing for an epistemologist to admit!

Of course, on the one hand, he's almost certainly right.  Schools have their cultures, their traditional views, their favorite literature providers, their set of available teachers.  These have a decided enough effect that I've heard "X was a student of Y" used to mean "X holds views basically like Y's".  And everybody knows this.  And people still show a distinct trend of agreeing with their teachers' views, even the most controversial - not an unbroken trend, but still an obvious one.  So it's not at all unlikely that, yes, had the professor gone to a different graduate school, he'd believe something else about his subject, and he's not making a mistake in so acknowledging...

But on the other hand... but... but...

But how can he say that, and look so undubiously at the views he picked up this way?  Surely the truth about knowledge and justification isn't correlated with which school you went to - even a little bit!  Surely he knows that!

continue reading »

False Majorities

35 JamesAndrix 03 February 2010 06:43PM

If a majority of experts agree on an issue, a rationalist should be prepared to defer to their judgment. It is reasonable to expect that the experts have superior knowledge and have considered many more arguments than a lay person would be able to. However, if experts are split into camps that reject each other's arguments, then it is rational to take their expert rejections into account. This is the case even among experts that support the same conclusion.

If 2/3 of experts support proposition G (1/3 because of reason A while rejecting B, and 1/3 because of reason B while rejecting A) and the remaining 1/3 reject both A and B, then the majority rejects A and the majority rejects B. G should not be treated as a reasonable majority view.

This should be clear if A is the Koran and B is the Bible.
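
A quick tally makes the arithmetic concrete. The sketch below is only an illustration (the camp labels and field names are made up, and the three camps are weighted equally): it shows a proposition with two-thirds support whose every supporting argument is rejected by a two-thirds majority.

    # A minimal illustration (hypothetical camps, weighted equally): G enjoys
    # majority support, yet each reason offered for G is rejected by a majority.

    camps = [
        {"supports_G": True,  "accepts_A": True,  "accepts_B": False},  # 1/3 of experts
        {"supports_G": True,  "accepts_A": False, "accepts_B": True},   # 1/3 of experts
        {"supports_G": False, "accepts_A": False, "accepts_B": False},  # 1/3 of experts
    ]

    def share(claim):
        """Fraction of experts accepting the given claim."""
        return sum(camp[claim] for camp in camps) / len(camps)

    print(f"accept G: {share('supports_G'):.2f}")  # 0.67 -> a majority supports G
    print(f"accept A: {share('accepts_A'):.2f}")   # 0.33 -> a majority rejects A
    print(f"accept B: {share('accepts_B'):.2f}")   # 0.33 -> a majority rejects B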

continue reading »

Experiential Pica

80 Alicorn 16 August 2009 09:23PM

tl;dr version: Akrasia might be like an eating disorder!

When I was a teenager, I ate ice.  Lots of ice.  Cups and cups and cups of ice, constantly, all day long, when it was freely available.  This went on for years, during which time I ignored the fact that others found it peculiar. ("Oh," I would joke to curious people at the school cafeteria, ignoring the opportunity to detect the strangeness of my behavior, "it's for my pet penguin.")  I had my cache of excuses: it keeps my mouth occupied.  It's so nice and cool in the summer.  I don't drink enough water anyway, it keeps me hydrated.  Yay, zero-calorie snack!

Then I turned seventeen and attempted to donate blood, and was basically told, when they did the finger-stick test, "Either this machine is broken or you should be in a dead faint."  I got some more tests done, confirmed that extremely scary things were wrong with my blood, and started taking iron supplements.  I stopped eating ice.  I stopped having any interest in eating ice at all.

Pica is an impulse to eat things that are not actually food.  Compared to some of the things that people with pica eat, I got off very easy: ice did not do me any harm on its own, and was merely a symptom.  But here's the kicker: What I needed was iron.  If I'd been consciously aware of that need, I'd have responded to it with the supplements far earlier, or with steak1 and spinach and cereals fortified with 22 essential vitamins & minerals.  Ice does not contain iron.  And yet when what I needed was iron, what I wanted was ice.

What if akrasia is experiential pica?  What if, when you want to play Tetris or watch TV or tat doilies instead of doing your Serious Business, that means that you aren't going to art museums enough, or that you should get some exercise, or that what your brain really craves is the chance to write a symphony?

continue reading »
