It seems to me that there are two kinds of human irrationality. One could be called "bug" irrationality, not referring to insects but rather bugs in the design of our minds, ways in which our minds could be better designed. This category includes things like hyperbolic discounting (also called myopic discounting), as well as general failures to correctly apply laws of logic and probability. It's often worth making an effort to correct for this kind of irrationality, but I think some of the discussion of it is overly pessimistic. From an evolutionary point of view, the main reason that this kind of irrationality exists is probably just that flawed rules of thumb which usually work out okay can be more efficient than more rigorous methods.

As Yvain once wrote, "most people are rational enough for their own purposes." Because of that, I don't think this kind of irrationality is our biggest worry, and it's not what this post is about. But if you want to do more reading on the view of this kind of irrationality that I've just sketched, I recommend various papers by psychologist Richard Samuels and philosopher Stephen Stich, such as Ending the Rationality Wars, Rationality & Psychology, and Reason and Rationality.

The worst examples of human irrationality, in my view, are what could be called "feature" irrationality: irrationality that is a feature of our minds, something evolution designed into them. Why would evolution do this? Here is Steven Pinker's explanation from How the Mind Works:

[Psychologist Robert] Trivers, pursuing his theory of the emotions to its logical conclusion, notes that in a world of walking lie detectors the best strategy is to believe your own lies...

Everyone has heard of "reducing cognitive dissonance," in which people invent a new opinion to resolve a contradiction in their minds. For example, a person will recall enjoying a boring task if he had agreed to recommend it to others for paltry pay... As originally conceived of by the psychologist Leon Festinger, cognitive dissonance is an unsettled feeling that arises from an inconsistency in one's beliefs. But that's not right: there is no contradiction between the proposition "The task is boring" and the proposition "I was pressured into lying that the task was fun." Another social psychologist, Elliot Aronson, nailed it down: people doctor their beliefs only to eliminate contradiction with the proposition "I am nice and in control." Cognitive dissonance is always triggered by blatant evidence that you are not as beneficent and effective as you would like people to think. The urge to reduce it is the urge to get your self-serving story straight (pp. 421-423).

In other words, we have evolved to have an irrationally inflated view of ourselves, so as to better sell others on that view. I can think of one other important advantage of irrationality as a feature: coalition building. A coalition may be strengthened by a belief in its own righteousness and the wickedness of its enemies, and members can signal loyalty by adopting the ideological shibboleths that the group shares.

The tragedy of feature-irrationality is that (unsurprisingly, on Darwinian grounds) its costs tend to be borne by other people. Throughout history, many people have believed that martyrs receive rewards in paradise far exceeding any earthly rewards, but only a very few of those people actually become martyrs. Thus, the harm done in practice by that belief about martyrs is relatively small. Much greater harm has been done by the belief that unbelief leads to eternal damnation, and therefore unbelievers should be tortured and killed, both to punish and discourage unbelief.

But my purpose here isn't to rail against the evils of this kind of irrationality. Rather, my purpose is to suggest a relatively simple method for avoiding the worst of human irrationality. The core idea is one articulated by eugman:

Watch out for when you are sacrificing epistemology for instrumental gains. If there is ever a time where you want to have certain beliefs because it is more convenient and you are trying to talk yourself into them, that is a giant red flag.

Since feature-irrationality is all about sacrificing truth for the sake of instrumental gains, being aware of when you're doing that is the very essence of combating it. And since the instrumental gains are usually in terms of other people's view of us, we can be more specific: "When you're trying to figure out what to believe, you can't care what other people will think of you."

There are a couple of misunderstandings that need to be avoided here. First, the rule is not to care what other people will think of you; it is not a rule against caring what other people think in general. In fact, when you're trying to figure out what to think about X, it's generally important to take into account what other well-informed people think about X. If you know that 99% of the experts think that p, you'd be wise to be very, very cautious about concluding not-p. But it's important to do this for the right reason. The right reason is that other people might know or understand something you don't. It would still be a mistake to be swayed by fears that high-status people will think less of you if you disagree with them.

Furthermore, you have to really not care what other people will think of you. It does nothing--and can even be counterproductive--to merely make a show of indifference. When someone makes a show of indifference to others' opinions of them, it's often a sign they care intensely about what other people think. Pinker observes, "The physicist Richard Feynman wrote two books describing how brilliant, irreverent, and admired he was and called one of them What Do You Care What Other People Think?" (Pinker, p. 361) Going too far in not caring what other people think can be a kind of countersignaling or even, in extreme cases, what art historian Quentin Bell called "conspicuous outrage."

So in order to follow the rule "When you're trying to figure out what to believe, you can't care what other people will think of you," you can't worry that you'll lose status for advocating an unpopular idea, but you also can't get too excited about being an intellectual rebel or contrarian.

Had I followed this principle, I might have managed to avoid at least a couple of the more embarrassing mistakes I've made in my life. Here's one of them: I started off in college planning on going to medical school, which was not a good idea. Once I got sufficiently tired of my biology classes, I looked around to see what I could do with my life, and noticed I was doing really well in my philosophy classes. So an obvious choice (or so it seemed to me) was going to graduate school in philosophy.

However, there was a problem: my initial contact with philosophy had left me with a somewhat dim view of the field, or at least of academic philosophy. So I resisted the idea of going to graduate school in philosophy for a long time. But eventually, seeing no other options (according to my peculiar, upper-middle-class notions about what constituted an "option"), I gave in. Once I'd resigned myself to going to graduate school in philosophy, I began to worry about how I would justify my choice to others, and began thinking nicer thoughts about philosophy.

When I ultimately left my Ph.D. program after three semesters, it took a while to adjust to realizing how foolish I'd been. I could have saved myself a lot of time if I'd stopped and noticed, "No, you don't actually think this is a great idea; you're just trying to imagine how you'll justify this decision to other people."

Comments

This is somewhat tangential, but I am bothered by that Pinker quote:

Another social psychologist, Elliot Aronson, nailed it down: people doctor their beliefs only to eliminate contradiction with the proposition "I am nice and in control."

It seems too precise to me. It does not seem to fit Festinger's original example, or the example of this post, very well. Chris's example emphasized other people's opinions. Festinger likewise emphasized the differential response of the cultists who were together on doomsday versus those who were apart; and he emphasized not just confabulation, but the shift to evangelism. I am sure Aronson made an important contribution in identifying the strongest sources of dissonance. Probably he jumped from there to dismissing all other sources. Maybe he really has a full theory of dissonance, but I don't believe it's this one line.

With the cultists, admitting they were totally wrong would show they weren't in control (or effective). Finding some way to rationalize their failed prediction and show that they were really right all along preserves their outward picture of themselves as in control and effective.

The link between Aronson's line and other people's opinions is just that we're trying to sell other people on the belief that "I am nice and in control."

Hmmm... maybe I should edit the post to make that second part clearer.

Maybe we care not just about believing that we're "nice and in control," but also about others believing it. That isn't Aronson's theory, though (as summarized by Pinker). Also, that wasn't Festinger's interpretation of the need for publicity, which was a change from before doomsday.

Um, have you read Pinker? I may have cut out too much in my quote. Pinker is pretty clear that our reason for being biased towards thinking we're "nice and in control" is (from an evolutionary perspective, at least) so that we can better convince others of that.

But fair point about Festinger and the need for publicity.

If I am concerned with instrumental rationality, why should I try to avoid believing untrue things if believing those untrue things is to my benefit?

I can't give a simple answer to that question.

One reason might be if you, personally, happen to care a lot about the truth.

Another is that while there can be benefits to being irrational in this way, sometimes you pay a price, when you can't admit to yourself you're making a questionable decision.

Another is that your irrational beliefs can hurt other people even when they don't hurt you much.

But yeah, there are probably situations where letting yourself believe something untrue is actually the right thing to do.

I think this might be a case where double-think is useful: let your mind lightly touch on the probable outcomes of various beliefs, then settle heavily on the most useful one. This might be advantageous to society as well as to the individual. To quote Terry Pratchett in Hogfather:

"All right," said Susan, "I'm not stupid. You're saying humans need ... fantasies to make life bearable."
"NO. HUMANS NEED FANTASY TO BE HUMAN. TO BE THE PLACE WHERE THE FALLING ANGEL MEETS THE RISING APE."
"Tooth fairies? Hogfathers?"
"YES. AS PRACTICE. YOU HAVE TO START OUT LEARNING TO BELIEVE THE LITTLE LIES."
"So we can believe the big ones?"
"YES. JUSTICE. DUTY. MERCY. THAT SORT OF THING."

It doesn't get better if you keep in mind that you may well be foolish. Your worries just go up a level, to "Should you admit you've been foolish, or is this actually a great idea that you're refusing to stick with because you want to show you can admit you've been foolish?"

One could be called "bug" irrationality, not referring to insects but rather bugs in the design of our minds, ways in which our minds could be better designed. This category includes things like hyperbolic discounting (also called myopic discounting), as well as general failures to correctly apply laws of logic and probability.

I immediately thought this should be called feature irrationality, before I reached the other part. In fact, I would probably call both of them feature irrationality, and something like schizophrenia bug irrationality. Even cognitive dissonance could be seen as a heuristic optimizing for approximately true beliefs.