I - Intuitions And Reason

They say cognitive biases are what other people have. In that vein, “The Righteous Mind” by Jonathan Haidt is a book about how other people think. Specifically, how they think about morality and make moral judgments and choices. But it also covers the interplay between intuitions and reason, community building, group selection, the importance of culture and religion, eating dog carcasses, and much more.

The first part of “The Righteous Mind” could hold its own as a book about thinking and rationality, but it’s disguised as being about moral psychology. It’s about how people actually form opinions and make choices, in a way that is very different from the ideal Bayesian decision maker. It centers on moral psychology, but applies just as well to almost any issue people care deeply about: politics, religion, group affiliation, etc. For me, it provided a fresh viewpoint on rationality, which has stuck with me ever since.

Like any book about rationality, this one needs a central catchy metaphor for how the brain works. Here the metaphor is a rider on an elephant. Imagine a man riding on the back of an elephant. The rider can say “turn right!” but if the elephant wants to go left, that’s where they’re going. All the rider can do is say “Yes, we meant to go left. Left is the best choice. Here’s why.” The rider is conscious reasoning, and the elephant is emotions, intuitions, and everything else we aren’t always aware of. Emotions and intuitions decide where to turn almost instantaneously, and our conscious reasoning has to go along for the ride.

I feel like this captures incredibly well my - sorry, other people’s - thought process on many topics. Suppose I read an article which makes a reasoned argument. Right at the start, picking up on subtle cues about the opinion expressed, but also on the various signals the language and tone of the article send about the writer’s tribe and other views, the elephant (my unconscious emotions) leans towards either “accept” or “reject”. Accordingly, the rider (conscious reasoning) either tries to believe and support the arguments, or fights them internally with everything he’s got. If I want to accept, I ask “can I believe it”; to reject, I ask “must I believe it”. Sometimes, when I’m especially intellectually honest, I can concede an argument and say “this argument is true, but I’m still not swayed about the general issue.” Whew, attack parried.

If you thought a human rider would have a hard time moving the elephant, imagine how hard it must be for a stick figure.

We know this as confirmation bias. But is it really only about confirming existing opinions? I seem to have this reaction to a great many issues on which I did not previously have an opinion. And how is the initial opinion, the one that later gets confirmed, even formed in the first place - is it just drawn at random when we first hear of an issue? Clearly not, since you can predict someone’s views on an unseen issue from their views on other issues.

This metaphor gives a much neater explanation. It’s not that whichever view exists is confirmed - this is just a symptom. Rather, whichever view has the intuitive, emotional appeal to someone - that view will be supported and confirmed with reasoning. This might look like confirmation of an existing opinion, but only because this process has run before, the first time the person was exposed to the issue, and resulted in the same outcome - support for the opinion the elephant preferred. So it seems like the existing opinion is confirmed, when instead what is confirmed each time is the elephant’s choice, the emotional valence. It’s motivated reasoning, not confirmation bias.

In other words, it’s not that we’re Bayesian reasoners with this overlaid quirk that we seek information that confirms our prior opinions, and so are stuck feeding more and more confirmatory evidence into our perfect Bayesian machine and deepening our certainty. It’s that we don’t have a Bayesian machine at all! We form the opinions some other way, and then only use reason to justify them to others.
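To make this concrete, here’s a toy model I put together (mine, not Haidt’s - all the numbers are arbitrary): two reasoners see the same stream of evidence. One updates like a textbook Bayesian; the other asks “can I believe it?” of agreeable evidence and “must I believe it?” of disagreeable evidence, modeled as explaining away most of the evidence that goes against the elephant’s initial lean.

```python
import random

def bayes_update(p, lr):
    """Textbook Bayesian update: convert P(H) to odds, multiply by the
    likelihood ratio P(evidence|H) / P(evidence|not-H), convert back."""
    odds = p / (1 - p) * lr
    return odds / (1 + odds)

def motivated_update(p, lr, elephant_leans_yes, scrutiny=0.2):
    """Same update, except disagreeable evidence gets the 'must I
    believe it?' treatment: its likelihood ratio is shrunk toward 1
    (i.e., mostly explained away) before updating."""
    agreeable = (lr > 1) == elephant_leans_yes
    if not agreeable:
        lr = lr ** scrutiny  # e.g. 0.5 becomes ~0.87: barely informative
    return bayes_update(p, lr)

random.seed(0)
p_bayes = p_motivated = 0.5
for _ in range(100):
    # the evidence stream actually leans against the hypothesis:
    # 60% of items have likelihood ratio 0.5, only 40% have 2.0
    lr = 2.0 if random.random() < 0.4 else 0.5
    p_bayes = bayes_update(p_bayes, lr)
    p_motivated = motivated_update(p_motivated, lr, elephant_leans_yes=True)

print(f"Bayesian belief:  {p_bayes:.3f}")      # driven toward 0 by the evidence
print(f"Motivated belief: {p_motivated:.3f}")  # driven toward 1, the elephant's lean
```

The motivated reasoner ends up certain of the hypothesis even though the evidence, on net, points the other way - exactly the “confirmation” pattern, with no dedicated confirmation-bias module needed.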

If catchy metaphors are not up to your usual standard of evidence, perhaps I can interest you in some academic studies. In one, researchers had subjects make moral judgments under a cognitive load, such as memorizing the number 749204 (compared to memorizing only the digit 7). The increased cognitive load did not change subjects’ judgments. Another manipulation - making subjects wait ten seconds before giving their judgment - did not change the judgments either. This, according to Haidt, means:

We can conclude that “automatic” processes (such as intuition and emotion) are sufficient for performing that task.

The most brilliant experiment, in my view, used hypnosis. The experimenter, Thalia Wheatley, used vignettes which portrayed moral dilemmas and had subjects rate how morally wrong they were. But with a twist.

Thalia hypnotized people to feel a flash of disgust whenever they saw a certain word (“take” for half of the subjects; “often” for the others). While they were still in a trance Thalia instructed them that they would not be able to remember anything she had told them, and then she brought them out of the trance. Once they were fully awake, we asked them to fill out a questionnaire packet in which they had to judge six short stories about moral violations. For each story, half of the subjects read a version that had their hypnotic code word embedded in it. For example, one story was about a congressman who claims to fight corruption, yet “takes bribes from the tobacco lobby.” The other subjects read a version that was identical except for a few words (the congressman is “often bribed by the tobacco lobby”). On average, subjects judged each of the six stories to be more disgusting and morally wrong when their code word was embedded in the story. That supported the social intuitionist model. By giving people a little artificial flash of negativity while they were reading the story, without giving them any new information, we made their moral judgments more severe.

The real surprise, though, came with a seventh story we tacked on almost as an afterthought, a story that contained no moral violation of any kind. It was about a student council president named Dan who is in charge of scheduling discussions between students and faculty. Half of our subjects read that Dan “tries to take topics that appeal to both professors and students in order to stimulate discussion.” The other half read the same story except that Dan “often picks topics” that appeal to professors and students. We added this story to demonstrate that there is a limit to the power of intuition. We predicted that subjects who felt a flash of disgust while reading this story would have to overrule their gut feelings. To condemn Dan would be bizarre.

Most of our subjects did indeed say that Dan’s actions were fine. But a third of the subjects who had found their code word in the story still followed their gut feelings and condemned Dan. They said that what he did was wrong, sometimes very wrong.

Obviously a feeling of disgust doesn’t change the utilitarian moral calculus of a situation. So the fact that the judgments changed anyway is strong evidence that utilitarian calculation is not how people make their judgments, even if it is how they later justify them.

This study was followed by many others of the kind we’ve come to expect from social psychology - studies that had people make moral judgments within sniffing distance of fart spray, or after drinking a bitter drink (compared to a sweet one), or while sitting in a dirty room. They found similar results: feelings of disgust make moral judgments more severe. Also as we’ve come to expect from social psychology mentioned in a book from 2012, these results do not replicate.

In fact, even the hypnosis study is underpowered (64 subjects), and many of its effects are not statistically significant. As far as I can tell, there has been no attempt to replicate it so far, but I do wish someone would repeat it with a larger sample. I have to admit there’s a good chance it won’t hold up either. So maybe it’s not as simple as moral judgments being a feeling of disgust that we can trigger with an external stimulus. I’ll be honest though - I’m pretty convinced that it’s really about intuitions and not conscious reasoning, whether these specific studies hold up or not. My elephant has spoken.
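To put a number on “underpowered”: here’s a quick back-of-the-envelope check, treating the code-word manipulation, for simplicity, as a two-group comparison with a “medium” effect size of d = 0.5 (the real design was within-subject, so this is only a rough illustration, and the effect size is my assumption):

```python
# Rough power check for a two-sample t-test with 64 subjects split
# between two conditions, effect size d = 0.5, alpha = 0.05.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
power = analysis.power(effect_size=0.5, nobs1=32, ratio=1.0, alpha=0.05)
print(f"Power with n=32 per group: {power:.2f}")  # ~0.50, a coin flip

# Sample size needed per group for the conventional 80% power:
n = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(f"Needed per group: {n:.0f}")  # ~64 per group, i.e. ~128 subjects total
```

In other words, even if the effect were real and medium-sized, a study this size would detect it only about half the time.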

Since the rider is powerless to change the actual decision, he settles for giving good justifications for it. The rider is like the White House press secretary: she can give justifications for decisions, but she is not the one making them. You’ll never hear the White House press secretary answer in a briefing, “that’s a good argument, we hadn’t thought of that, and we’ll reconsider the policy” - it’s not within her purview. Here is how people reacted to Dan of the student council in the hypnosis experiment.

Fortunately, we had asked everyone to write a sentence or two explaining their judgments, and we found gems such as “Dan is a popularity-seeking snob” and “I don’t know, it just seems like he’s up to something.” These subjects made up absurd reasons to justify judgments that they had made on the basis of gut feelings — feelings Thalia had implanted with hypnosis.

In fact, Haidt claims conscious reasoning did not evolve to form true opinions, but rather to justify to others whichever opinions were beneficial to their holder:

Bizarre and depressing research findings make perfect sense once you see reasoning as having evolved not to help us find truth but to help us engage in arguments, persuasion, and manipulation in the context of discussions with other people. As they [Mercier and Sperber] put it, “skilled arguers … are not after the truth but after arguments supporting their views.” This explains why the confirmation bias is so powerful, and so ineradicable.

I have had this experience multiple times, for example in arguments with my spouse. I come up with all these very strong arguments, but when the fight subsides they don’t seem as compelling, and sometimes they are clearly flawed. My opinion was not determined by the arguments - the arguments were determined by the opinion. They are not reasons, but post-hoc fabrications. And we know the mind is very good at creating those, from the split-brain confabulation experiments, in which people invented reasons for their actions because the hemisphere supplying the explanation was disconnected from the hemisphere that saw the actual stimulus and made the decision. A personal takeaway is that explicitly asking people why they believe something is pretty useless - they will just give their post-hoc fabrications (or create them on the spot especially for me!). Another personal takeaway is that fighting with my spouse isn’t fun and I should really do it less (and if I do, rational arguments are probably not the way to go). But this also explains some of the frustration of trying to have rational discussions with people on these topics.

We’ve all had the experience of trying to convince someone to change their mind, and them just not listening to reason. What actually happened is that we were aiming our arguments at the rider, who is not the one making the choice - only the one giving the reasoned justifications. I have a personal rule for when I try to convince someone and refute their arguments: if at some point their arguments become really bad, I know I’ve lost. Because then I know it’s not really the arguments that determined their opinion - it’s something else, which my arguments cannot reach. As the saying goes, “you can’t talk someone out of something they haven’t been talked into.”

(Similarly, psychotherapy that doesn’t work is wasting its time talking to the rider about things that the elephant decides. Non-Violent Communication gets this, and encourages people to talk only about their feelings and needs, not their rational reasons.)

Where does that leave reasoned debate? If the arguments aren’t really what convinces people, what other avenues are there? By this view, if we really want to persuade someone, we should definitely NOT say “Look at this table, the benefits of the policy exceed the costs, you should support it.” Instead, we should whisper a message to the elephant: “Many people support this policy. The cool kids / popular people / your Team / your Tribe support it. You would gain their favor by supporting it. I’m your friend, you can trust me, and I support it.” Or whatever other signals the elephant picks up on, which are often non-verbal and non-explicit. Advertising in fact works very much like this. It’s never “Coca Cola - because the tastiness is worth the health costs!” It’s always a visual image which makes you implicitly associate Coca Cola with being cool / beautiful / exciting. Well, not always - sometimes they do advertise a laundry detergent by saying it costs less and cleans better. But even that is very rarely done with text on a blank background - it always tries to be emotionally appealing. And if that’s how we make relatively emotionally neutral choices like laundry detergent, think how much worse it must be with politics / religion / morality.

Just look at this whirlwind of bubbles - surely the chemical formula behind them has been rigorously tested to assure optimal cost-effective washing with minimal environmental damage!

This is also true, I feel, when persuading not other people but myself. I think reason is in charge, but doing some things that are clearly, rationally, for my benefit proves very hard. Sometimes, with great effort, I can ignore the elephant or make it do what the rider thinks is right against its wishes. But it is a great effort indeed; I never enjoy it, and I can’t do it very often without making my life full of suffering. And that’s because I’m still under the illusion that the rider is in control.

Now, the elephant is not bad at all. It lets you make split-second decisions a thousand times a day, and they’re by and large very good. If I had to do the math every time I bought a laundry detergent I’d starve in the supermarket. Heuristics are essential. It’s only with specific large choices which are not intuitive that we might need conscious reasoning.

I do have to say I think Haidt goes a little too far. If reason is just for showing our group is right, why do we need it at all? Couldn’t we just say “I’m loyal to the group, irrespective of arguments”? Why play this game where we try to convince others we’re objectively right, rather than just having different preferences? Also, to convince someone to support our coalition, we need reasoning to figure out how the coalition can be portrayed as good for them - if we only used reason to support our own priors, we would fail at this. And if we really only used reason to justify our priors, how come it has been able to advance science, and we aren’t stuck in the Dark Ages? It sounds like an argument that proves too much - one that implies people never change their minds on anything morally, politically, or socially charged.

So what is Haidt’s response to this? He is very skeptical of individual rationality. He goes as far as to call it a “delusion”. But he doesn’t give up on reasoning entirely. He has an interesting take, where rationality can emerge from a group of reasoners:

I’m not saying we should all stop reasoning and go with our gut feelings. Gut feelings are sometimes better guides than reasoning for making consumer choices and interpersonal judgments, but they are often disastrous as a basis for public policy, science, and law. Rather, what I’m saying is that we must be wary of any individual’s ability to reason. [...] Each individual reasoner is really good at one thing: finding evidence to support the position he or she already holds, usually for intuitive reasons. We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system. 

This is at least an interesting idea: that rationality emerges from a group of biased reasoners. His description of the group sounds suspiciously like academia, which does in fact (in most fields) advance towards the truth. But it needs an extra ingredient. Why would someone’s reputation depend on their reasoning correctly? Why wouldn’t it depend on their supporting the conclusions that affirm the group dogma (as happened and happens in many places, including some branches of academia)? Good reasoning needs to be central to the group’s identity. You need to make rationality, and being correct, high status - and status is something the elephant cares about. In communities and groups where this happened (like science, or forecasting), reason can indeed do wonderful things.

II - Moral Foundations Theory

In a sense all of this is not too surprising. I’ve always known that other people don’t make their moral judgments by maximizing expected utility, otherwise moral arguments would sound very different than they do. Surely people have some heuristics for deciding moral questions. But I never took the extra step of asking what those heuristics might be. Haidt makes a valiant effort to reconstruct them: the so-called Moral Foundations. They are clusters of moral intuitions. If the name weren’t already taken, the theory should have been called “the theory of moral sentiments.” Haidt claims the same foundations are found, to varying extents, in almost every culture, because the intuitions have a strong innate component, which socialization then aims in the culture’s desired direction.

Are these heuristics even interesting? Aren’t they an arbitrary collection of culturally relative conventions? I used to think, like W.V.O. Quine, that we learn concepts purely by generalizing from their repeated usage, and that “wrong” is something children learn to apply to situations after hearing it many times from adults and other kids. But the secret sauce is in the generalizing: there are innate (“organized in advance of experience”) mental modules for the Moral Foundations, and we learn to apply the word “wrong” to some or all of them. There are studies showing that children distinguish between things which are conditionally and unconditionally wrong. They recognize that coming to school without a school uniform is wrong, but will say it’s not wrong if a teacher allowed it, or if that school doesn’t require a uniform. For something like pushing a girl off a swing, however, they will say it’s wrong universally (even if that school has no rule against pushing children off swings) and unconditionally (even if a teacher allowed it). So they have these notions, or modules, for things which feel wrong, and society interfaces with these modules to produce the specific morality of that culture. For example, all children could have the cognitive module to intuit that causing pain is wrong. Their culture can then build on it and teach children that slaughtering sheep is ok, but kicking a baby is wrong. No child would make the mistake of thinking that slapping a baby is ok (“it’s not kicking!”), because their innate mental module recognizes that the important attribute here is the pain of the baby, not the limb used to cause it.

So now for the big reveal - what are these Moral Foundations? Let’s go over them one by one with examples. Keep in mind that the examples don’t have to be morally wrong according to your final judgment; they just have to feel intuitively wrong, even for an instant. Think of the things film-makers make sure to show you about a character in the first minute after they appear - so you know they’re the villain and not to be sympathized with. They might harm someone innocent, or betray their friends or family, etc.

Also keep in mind that there could be a utilitarian justification for some of these foundations, but intuitions come first, even if expected utility and game theory are the reasons these intuitions evolved, which we’ll get into later.

Care / harm

We should care for other people and not harm them, especially the weak. This one is the most intuitive to many people. If an action harms people (sometimes also animals), it is morally wrong. If a group of people gang up on a woman and beat her unconscious for the fun of it - that’s wrong. They are violating the care foundation. Helping a blind person cross the street is morally good. As we said, many people seem to think care / harm is the only admissible foundation for something being morally right or wrong. Now, harm is not a synonym for “lowered total expected utility”. This foundation is triggered especially by direct bodily harm, something we have strong intuitions about. There are films where the hero is a bank robber, but none where the hero is a child abuser - probably because bank robbery is an indirect harm. Similarly, we have to force ourselves to realize that tax evasion causes more damage than car theft.

For you, Care / harm (and its logical extrapolation) might be the only moral foundation. But remember, this is a book about how other people think. To get at their moral foundations, Haidt carefully crafted vignettes which feel morally wrong, but in which no one is harmed. Here are four of them.

Julie and Mark are brother and sister. They are traveling together in France on summer vacation from college. One night, they are staying alone in a cabin near the beach. They decide that it would be interesting and fun if they tried making love. At the very least, it would be a new experience for each of them. Julie was already taking birth control pills, but Mark uses a condom, too, just to be safe. They both enjoy making love, but they decide never to do it again. They keep that night as a special secret, which makes them feel even closer to each other.

A family’s dog was killed by a car in front of their house. They had heard that dog meat was delicious, so they cut up the dog’s body and cooked it and ate it for dinner. Nobody saw them do this.

A man goes to the supermarket once a week and buys a chicken. But before cooking the chicken, he has sexual intercourse with it. Then he cooks it and eats it.

A woman is cleaning out her closet, and she finds her old American flag. She doesn’t want the flag anymore, so she cuts it up into pieces and uses the rags to clean her bathroom.

They tested these vignettes on US college students, and many subjects did say the actions were morally wrong. But the justifications were where it got interesting. Subjects have several moral foundations, but when they need to justify their moral judgments, they seem to think only two are admissible - harm and (less commonly) fairness. So they didn’t say “it’s just wrong - it’s degrading and disgusting and that’s the reason”; instead they invented victims in elaborate ways.

This is in contrast to people in less WEIRD cultures, and even working-class subjects in the US, who were more comfortable condemning an action as morally wrong without a harm justification, or even without any reasoned justification at all.

Fairness / cheating

Abide by the rules, don’t cheat. Someone who says he is collecting donations for a charity but keeps the money for himself is violating the Fairness foundation. A runner taking a shortcut on the course during a marathon in order to win is violating the Fairness foundation - even if it’s an amateur race and nothing is at stake, and even if the runner would benefit more from the victory than the others would. It’s just wrong to cheat.

You can ask where these rules come from, and disagreements about that can make people judge fairness differently. For example, if you reject the concept of individual property, maybe theft is not a fairness violation. But regardless, if something is perceived to break the agreed-upon rules, it is judged as wrong.

Loyalty / betrayal

Be loyal to your in-group; don’t betray the group or its people. A man leaving his family business to go work for their main competitor violates the loyalty foundation. So does a man secretly voting against his wife in a local beauty pageant. To really trigger this one, you have to find your in-group. For example, I think of how betrayed and angry I felt when doctors or scientists came out against COVID vaccines or denied COVID was dangerous, using pseudo-scientific arguments and bad statistics but mostly their status, thus giving fodder to crazy conspiracy theorists and anti-vaxxers. For you, it could be women who disavow feminism, for example.

As a side note, Haidt gives the best explanation I’ve ever heard for the popularity of sports.

Much of the psychology of sports is about expanding the current triggers of the Loyalty foundation so that people can have the pleasures of binding themselves together to pursue harmless trophies.

Being a sports fan feels good because it’s a loyalty rush. That’s why people want to be in the stadium, even if it’s cold and their view of the plays is worse than on their TV at home. It’s not about the plays, it’s about being a fan together with others. More on this later.

Authority / subversion

Respect authority; do not subvert or deny it. A woman refusing to stand when the judge walks into the courtroom is violating the authority foundation. So is someone who, on his first day at a job, loudly proclaims that his managers don’t understand their business and he has a better business plan. A sports star kneeling during the national anthem, rather than standing, used to be a good example, but that has gone into the culture war blender and is now just another tribal marker. Again, to really trigger this one, you have to find a true figure of authority for you. How about someone interrupting a scientific conference on genetics with screams that the study of genetics is racist? How about slapping your father (with his permission) as part of a comedy skit? If none of these trigger you, forget it - it’s all a theory about other people’s moral intuitions. But if you feel a pang saying it’s wrong, and then think “well, it’s because it harms this or that person,” hold off on the justification.

Sanctity / degradation

Some things are sacred and should not be defiled. They have value that is not just instrumental, and degrading them with things that are unclean is wrong. One of those is your body - it’s not just a slab of meat. The vignettes above about eating the carcass of the family dog, and the sex with the chicken, are violations of the sanctity foundation. Someone peeing on a tombstone is grossly violating the sanctity foundation. So is someone spraying graffiti on Half Dome. To trigger this foundation, you need to find what is sacred to you. Churches might not activate Sanctity for liberals, but pristine forests could. For me, it’s very hard to scribble in a book, and tearing out a page is unthinkable, because books hold some sacred value for me. Religion, of course, is full of sanctity: it’s forbidden to speak the name of God in a bathroom, or even to place a bible on the floor.

The Sanctity foundation is used most heavily by the religious right, but it is also used on the spiritual left. You can see the foundation’s original impurity-avoidance function in New Age grocery stores, where you’ll find a variety of products that promise to cleanse you of “toxins.” And you’ll find the Sanctity foundation underlying some of the moral passions of the environmental movement. Many environmentalists revile industrialism, capitalism, and automobiles not just for the physical pollution they create but also for a more symbolic kind of pollution—a degradation of nature, and of humanity’s original nature, before it was corrupted by industrial capitalism.

Why not accept oligarchs' money for good causes? Because of the Sanctity foundation! Their dirty money would defile your charity.

This strange blue glow cannot disinfect your dirty money!

(There’s a much more comprehensive set of 132 scenarios and where they fall on the different Moral Foundations here.)

I don’t think this list of Moral Foundations is exhaustive. To me personally, for example, wasting food feels wrong. Which moral foundation covers that? (The Ashkenazi moral foundation?) None of them, I think. It’s just an intuition I have. Still, the foundations are helpful in cataloging many moral intuitions. Haidt himself later tacked on an additional foundation, Liberty / oppression.

III - What Is Morality Good For?

All of these moral foundations seem suspiciously like norms that are beneficial for a community to have. The Care / harm foundation makes us care for others - often those helped gain more than the helpers lose, so the community at large is better off. The Fairness / cheating foundation makes us punish cheaters, liars, and swindlers, so they don’t swindle anyone else again. The Loyalty / betrayal foundation makes us encourage those who help the group and punish those who betray it, contributing to its survival. The Authority / subversion foundation helps us navigate hierarchical social structures, and so construct more stable hierarchies. The Sanctity / degradation foundation unites us around the same sacred objects and in the same sacred customs.

This is true not just for these foundations. Anyone who has looked at moral norms with fresh eyes recognizes the very strong correlation between what is considered moral and what helps a society thrive. There’s no standard moral imperative to make as much money as you can, or to be as happy as you can, or to learn to whistle - otherwise morality would be fun. Instead, it’s about suppressing the individual interest in favor of the group.

In other words, morality confers a group-level advantage. The advantage must be pretty strong, because morality is everywhere - we never see a society without it. But this leads us to a conundrum.

When groups compete, the cohesive, cooperative group usually wins. But within each group, selfish individuals (free riders) come out ahead. They share in the group’s gains while contributing little to its efforts. The bravest army wins, but within the bravest army, the few cowards who hang back are the most likely of all to survive the fight, go home alive, and become fathers.

In the same vein, my community might be better off if, every time I discovered someone had swindled me out of five bucks, I challenged them to a duel. But I wouldn’t survive long before dying in a duel, leaving no offspring with the same tendency. Evolution works at the individual level (or at most at the kin level), where genes are shared, not at the community level - which means this tendency will be strongly selected against. So how did we develop these strong tendencies? Some scholars think it’s a byproduct.

These are just misfirings of ancient systems designed for life in the small groups of the Pleistocene, where most people were close kin. Now that we live in large anonymous societies, our ancient selfish circuits erroneously lead us to help strangers who will not help us in return. Our “moral qualities” are not adaptations, as Darwin had believed. They are by-products; they are mistakes. Morality, said Williams, is “an accidental capability produced, in its boundless stupidity, by a biological process that is normally opposed to the expression of such a capability.” Dawkins shared this cynicism: “Let us try to teach generosity and altruism because we are born selfish.”

This is Haidt’s short answer:

In a real army, which sacralizes honor, loyalty, and country, the coward is not the most likely to make it home and father children. He’s the most likely to get beaten up, left behind, or shot in the back for committing sacrilege. And if he does make it home alive, his reputation will repel women and potential employers.

And here is his long and interesting answer, which is worth quoting at length.

For the first billion years or so of life, the only organisms were prokaryotic cells (such as bacteria). Each was a solo operation, competing with others and reproducing copies of itself.

But then, around 2 billion years ago, two bacteria somehow joined together inside a single membrane, which explains why mitochondria have their own DNA, unrelated to the DNA in the nucleus. [...] Cells that had internal organelles could reap the benefits of cooperation and the division of labor (see Adam Smith). There was no longer any competition between these organelles, for they could reproduce only when the entire cell reproduced, so it was “one for all, all for one.” Life on Earth underwent what biologists call a “major transition.” Natural selection went on as it always had, but now there was a radically new kind of creature to be selected. There was a new kind of vehicle by which selfish genes could replicate themselves. Single-celled eukaryotes were wildly successful and spread throughout the oceans.

A few hundred million years later, some of these eukaryotes developed a novel adaptation: they stayed together after cell division to form multicellular organisms in which every cell had exactly the same genes. [...] Once again, competition is suppressed (because each cell can only reproduce if the organism reproduces, via its sperm or egg cells). A group of cells becomes an individual, able to divide labor among the cells (which specialize into limbs and organs). A powerful new kind of vehicle appears, and in a short span of time the world is covered with plants, animals, and fungi. It’s another major transition.

Major transitions are rare. The biologists John Maynard Smith and Eörs Szathmáry count just eight clear examples over the last 4 billion years (the last of which is human societies). But these transitions are among the most important events in biological history, and they are examples of multilevel selection at work. It’s the same story over and over again: Whenever a way is found to suppress free riding so that individual units can cooperate, work as a team, and divide labor, selection at the lower level becomes less important, selection at the higher level becomes more powerful, and that higher-level selection favors the most cohesive superorganisms. [emphasis mine] [...] As these superorganisms proliferate, they begin to compete with each other, and to evolve for greater success in that competition. This competition among superorganisms is one form of group selection. There is variation among the groups, and the fittest groups pass on their traits to future generations of groups.

Major transitions may be rare, but when they happen, the Earth often changes. Just look at what happened more than 100 million years ago when some wasps developed the trick of dividing labor between a queen (who lays all the eggs) and several kinds of workers who maintain the nest and bring back food to share. This trick was discovered by the early hymenoptera (members of the order that includes wasps, which gave rise to bees and ants) and it was discovered independently several dozen other times (by the ancestors of termites, naked mole rats, and some species of shrimp, aphids, beetles, and spiders). In each case, the free rider problem was surmounted and selfish genes began to craft relatively selfless group members who together constituted a supremely selfish group.

[...] The colonial insects represent just 2 percent of all insect species, but in a short period of time they claimed the best feeding and breeding sites for themselves, pushed their competitors to marginal grounds, and changed most of the Earth’s terrestrial ecosystems (for example, by enabling the evolution of flowering plants, which need pollinators). Now they’re the majority, by weight, of all insects on Earth.

Okay, that was a cool biology lesson, but what does it have to do with the topic at hand? Why would this group-level advantage be selected for, instead of defectors and free riders benefiting? In a sense, Haidt claims, we are like bees. Not perfectly - obviously most of us produce offspring, not just the queen - but enough that it matters.

What about human beings? Since ancient times, people have likened human societies to beehives. But is this just a loose analogy? If you map the queen of the hive onto the queen or king of a city-state, then yes, it’s loose. A hive or colony has no ruler, no boss. The queen is just the ovary. But if we simply ask whether humans went through the same evolutionary process as bees—a major transition from selfish individualism to groupish hives that prosper when they find a way to suppress free riding—then the analogy gets much tighter.

Many animals are social: they live in groups, flocks, or herds. But only a few animals have crossed the threshold and become ultrasocial, which means that they live in very large groups that have some internal structure, enabling them to reap the benefits of the division of labor. Beehives and ant nests, with their separate castes of soldiers, scouts, and nursery attendants, are examples of ultrasociality, and so are human societies.

[...]

We are the only ultrasocial primate. The human lineage may have started off acting very much like chimps, but by the time our ancestors started walking out of Africa, they had become at least a little bit like bees.

And much later, when some groups began planting crops and orchards, and then building granaries, storage sheds, fenced pastures, and permanent homes, they had an even steadier food supply that had to be defended even more vigorously. Like bees, humans began building ever more elaborate nests, and in just a few thousand years, a new kind of vehicle appeared on Earth—the city-state, able to raise walls and armies. City-states and, later, empires spread rapidly across Eurasia, North Africa, and Mesoamerica, changing many of the Earth’s ecosystems and allowing the total tonnage of human beings to shoot up from insignificance at the start of the Holocene (around twelve thousand years ago) to world domination today. As the colonial insects did to the other insects, we have pushed all other mammals to the margins, to extinction, or to servitude. The analogy to bees is not shallow or loose. Despite their many differences, human civilizations and beehives are both products of major transitions in evolutionary history.

Natural selection doesn’t occur at the level of the individual human; it occurs at the level of the cell’s DNA (or even the gene). But we still don’t see a single human liver cell going it alone and reproducing as much as it can. Actually, sometimes we do - that’s cancer, and the other cells quickly destroy it for the good of the group. Haidt says selection occurs at multiple levels simultaneously, but as long as free riding can be suppressed, the highest level is the most important one. In what cases can free riding within a group be suppressed so effectively, without the individuals sharing identical genes? Humans might be the best (or even the only) example.

It takes the sort of gossiping, punitive, moralistic community that emerged only when language and weaponry made it possible for early humans to take down bullies and then keep them down with a shared moral matrix.

In other words, morality’s function is to suppress free riding so effectively that group selection becomes important, and human societies can reap the benefits of cooperation and division of labor.
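Here’s a toy simulation of that claim (my construction, not Haidt’s - the pot multiplier, group sizes, and punishment cost are all arbitrary). Within each group, free riders out-earn cooperators in a public-goods game; between groups, cooperative groups out-compete. A stand-in “morality” parameter - a punishment cost levied on free riders - decides which level of selection wins.

```python
import random

random.seed(1)

def play_generation(groups, punish_cost):
    """One round of a public-goods game within each group. Cooperators
    pay 1 to add 3 to a communal pot, which is split evenly. punish_cost
    stands in for 'morality': gossip, ostracism, duels - anything that
    makes free riding expensive."""
    fitness = []
    for group in groups:
        share = 3 * sum(group) / len(group)
        payoffs = [max(share - 1 if coop else share - punish_cost, 0.01)
                   for coop in group]  # floor keeps selection weights positive
        fitness.append(payoffs)
    return fitness

def next_generation(groups, fitness, group_selection):
    """Individuals reproduce in proportion to payoff within their group.
    With group selection on, whole groups also reproduce in proportion to
    their total payoff (cohesive groups displace weaker ones)."""
    totals = [sum(f) for f in fitness]
    new_groups = []
    for i in range(len(groups)):
        g = (random.choices(range(len(groups)), weights=totals)[0]
             if group_selection else i)
        members = random.choices(groups[g], weights=fitness[g],
                                 k=len(groups[g]))
        # rare mutation keeps both strategies in play
        new_groups.append([m if random.random() > 0.01 else not m
                           for m in members])
    return new_groups

def run(punish_cost, group_selection, generations=200):
    # 30 groups of 20; True = cooperator, False = free rider
    groups = [[random.random() < 0.5 for _ in range(20)] for _ in range(30)]
    for _ in range(generations):
        fitness = play_generation(groups, punish_cost)
        groups = next_generation(groups, fitness, group_selection)
    return sum(sum(g) for g in groups) / (30 * 20)

print(f"{run(0.0, group_selection=False):.0%} cooperators")  # free riders take over
print(f"{run(2.0, group_selection=True):.0%} cooperators")   # cooperation prevails
```

With the punishment term at zero, defectors win within every group faster than cohesive groups can displace their rivals; turn it on and the within-group advantage of free riding disappears, letting group-level competition dominate - which is exactly the claim.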

Humans have been able to suppress free riding effectively, through morality and a strong emotional connection to the group, resulting in ultrasociality. Haidt dubs humans “homo duplex: 90% chimp, 10% bee”.

[The] hive hypothesis [...] states that human beings are conditional hive creatures. We have the ability (under special circumstances) to transcend self-interest and lose ourselves (temporarily and ecstatically) in something larger than ourselves. I called this ability the hive switch. The hive switch is another way of stating Durkheim’s idea that we are Homo duplex; we live most of our lives in the ordinary (profane) world, but we achieve our greatest joys in those brief moments of transit to the sacred world, in which we become “simply a part of a whole.”

I described three common ways in which people flip the hive switch: awe in nature, [psychedelics], and raves. I described recent findings about oxytocin and mirror neurons that suggest that they are the stuff of which the hive switch is made. Oxytocin bonds people to their groups, not to all of humanity. Mirror neurons help people empathize with others, but particularly those that share their moral matrix.

It would be nice to believe that we humans were designed to love everyone unconditionally. Nice, but rather unlikely from an evolutionary perspective. Parochial love—love within groups—amplified by similarity, a sense of shared fate, and the suppression of free riders, may be the most we can accomplish.

He references Emile Durkheim, a sociologist who argued that many important facts about our lives are social facts, irreducible to facts about individuals - they have to be viewed through the lens of relationships between individuals. The perspective on psychedelics as “flipping the hive switch” - creating a feeling of unity with all people or living beings, facilitating community - is also a fascinating one. It is very different from previous accounts I’ve heard, which focused on how they affect your belief landscape (again, focusing on the rider). Still, I’m not fully convinced of the logical path between dancing ecstatically around a bonfire and a suppression of free riding effective enough to make group selection powerful, since the connection is temporary.

IV - A Team Sport

Morality is complicated. Let’s take a break to talk about football. Bear with me for a second.

Every Saturday in the fall, at colleges across the United States, millions of people pack themselves into stadiums to participate in a ritual that can only be described as tribal. At the University of Virginia, the ritual begins in the morning as students dress in special costumes. Men wear dress shirts with UVA neckties, and if the weather is warm, shorts. Women typically wear skirts or dresses, sometimes with pearl necklaces. Some students paint the logo of our sports teams, the Cavaliers (a V crossed by two swords), on their faces or other body parts.

The students attend pregame parties that serve brunch and alcoholic drinks. Then they stream over to the stadium, sometimes stopping to mingle with friends, relatives, or unknown alumni who have driven for hours to reach Charlottesville in time to set up tailgate parties in every parking lot within a half mile of the stadium. More food, more alcohol, more face painting.

By the time the game starts, many of the 50,000 fans are drunk, which makes it easier for them to overcome self-consciousness and participate fully in the synchronous chants, cheers, jeers, and songs that will fill the next three hours. Every time the Cavaliers score, the students sing the same song UVA students have sung together on such occasions for over a century. The first verse comes straight out of Durkheim and Ehrenreich. The students literally lock arms and sway as a single mass while singing the praises of their community (to the tune of “Auld Lang Syne”): 

That good old song of Wah-hoo-wah—we’ll sing it o’er and o’er

It cheers our hearts and warms our blood to hear them shout and roar

We come from old Virgin-i-a, where all is bright and gay

Let’s all join hands and give a yell for dear old U-V-A.

I want to take a moment to say that I listened to the audio version of the book, which is narrated by Haidt himself, so you get the bonus of listening to him sing the UVA fight songs with exactly the fervor you’d expect from a university professor singing a football fight song. But let’s continue.

Next, the students illustrate McNeill’s thesis that “muscular bonding” warms people up for coordinated military action. The students let go of each other’s arms and make aggressive fist-pumping motions in the air, in sync with a nonsensical battle chant: 

Wah-hoo-wah! Wah-hoo-wah! Uni-v, Virgin-i-a!

Hoo-rah-ray! Hoo-rah-ray! Ray, ray—U-V-A!

It’s a whole day of hiving and collective emotions. Collective effervescence is guaranteed, as are feelings of collective outrage at questionable calls by the referees, collective triumph if the team wins, and collective grief if the team loses, followed by more collective drinking at postgame parties.

Why do the students sing, chant, dance, sway, chop, and stomp so enthusiastically during the game? Showing support for their football team may help to motivate the players, but is that the function of these behaviors? Are they done in order to achieve victory? No. From a Durkheimian perspective these behaviors serve a very different function, and it is the same one that Durkheim saw at work in most religious rituals: the creation of a community.

And that completes the best explanation I’ve seen for the popularity of sports. It’s not just a loyalty rush - it’s the hive switch flipped on. It’s a belonging rush: being lost in a group in an ecstatic moment of unity. It’s dancing around the tribal bonfire. We’re hard-wired to want these specific feelings.

(But that doesn’t explain why people obsess over the statistics and the scores outside game time, alone on their computers. That still stumps me. Maybe I wasn’t dreaming and everyone secretly does love statistics - they’re just looking for a socially accepted way to express it?)

Anyway, Haidt continues:

A college football game is a superb analogy for religion. From a naive perspective, focusing only on what is most visible (i.e., the game being played on the field), college football is an extravagant, costly, wasteful institution that impairs people’s ability to think rationally while leaving a long trail of victims (including the players themselves, plus the many fans who suffer alcohol-related injuries). But from a sociologically informed perspective, it is a religious rite that does just what it is supposed to do: it pulls people up from Durkheim’s lower level (the profane) to his higher level (the sacred). It flips the hive switch and makes people feel, for a few hours, that they are “simply a part of a whole.” It augments the school spirit for which UVA is renowned, which in turn attracts better students and more alumni donations, which in turn improves the experience for the entire community, including professors like me who have no interest in sports.

Religions are social facts. Religion cannot be studied in lone individuals any more than hivishness can be studied in lone bees. Durkheim’s definition of religion makes its binding function clear:

A religion is a unified system of beliefs and practices relative to sacred things, that is to say, things set apart and forbidden—beliefs and practices which unite into one single moral community called a Church, all those who adhere to them.

[...]

The third principle of moral psychology: Morality binds and blinds. Many scientists misunderstand religion because they ignore this principle and examine only what is most visible. They focus on individuals and their supernatural beliefs, rather than on groups and their binding practices. They conclude that religion is an extravagant, costly, wasteful institution that impairs people’s ability to think rationally while leaving a long trail of victims. I do not deny that religions do, at times, fit that description. But if we are to render a fair judgment about religion—and understand its relationship to morality and politics—we must first describe it accurately.

Now that we’ve set the stage, let’s tackle religion. First, before pronouncing judgment, a cause for pause. Chesterton’s fence is the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood. If you see something well established and don’t know why it’s there, you shouldn’t remove it. For example, if you buy farmland and see that someone has put up a fence, don’t remove it before you know why they did it. A related principle: if you see something well established come up everywhere, then even if you think you know the reasons it’s there and believe they don’t apply, you still shouldn’t remove it - you’re probably mistaken about the actual reasons. If every farmer you’ve ever known has a fence, even if they tell you it’s against ghosts, or the evil eye, or the Mongols - for the love of god, don’t get rid of your fence. This is how I interpret some of the arguments for metis in Seeing Like A State, especially when the fence is the product of cultural evolution: religion and a moral community are The Secret of Our Success. And religion pops up everywhere - there’s scarcely a tribe or culture without it. So even if it’s ostensibly about ghosts, the evil eye, supernatural agents, or other false facts, you should be very careful before pronouncing that we know all about it and throwing it in the bin.

Many of the New Atheist objections to religion center on the (lack of) evidence for the existence of supernatural beings. Religions are clearly based on many false facts. (Or at least most religions must be, since they contradict each other. There could still be One True Religion that got everything right.) Isn’t it just good epistemic hygiene to get rid of everything downstream of those facts? But what if the false facts are neither the reason for religion’s existence nor its main function? Haidt pulls the rug from under many of these objections not by denying that the facts of supernatural agents are false - but by saying that they are completely irrelevant! It would be missing the point to reduce religion to the individual (or epistemic) level instead of viewing it as a social fact.

Supernatural agents do of course play a central role in religion, just as the actual football is at the center of the whirl of activity on game day at UVA. But trying to understand the persistence and passion of religion by studying beliefs about God is like trying to understand the persistence and passion of college football by studying the movements of the ball. You’ve got to broaden the inquiry. You’ve got to look at the ways that religious beliefs work with religious practices to create a religious community. Believing, doing, and belonging are three complementary yet distinct aspects of religiosity, according to many scholars. When you look at all three aspects at the same time, you get a view of the psychology of religion that’s very different from the view of the New Atheists. I’ll call this competing model the Durkheimian model, because it says that the function of those beliefs and practices is ultimately to create a community. Often our beliefs are post hoc constructions designed to justify what we’ve just done, or to support the groups we belong to.

All the ink spilled arguing about those facts’ epistemic truth value is wasted. First, facts are for the rider, and religion is a choice made by the elephant. But second, the facts aren’t even the real essence of religion - they’re just a fabrication by the internal press secretary. Have you noticed how spectacularly ineffective discussions of epistemic facts are at changing people’s religiosity? You’ll never convince someone to let go of what religion gives them by citing archeological evidence. To understand the actual psychology and function of religion, it helps to trace its origins.

So how did religion emerge? The New Atheists vote byproduct. We evolved a hypersensitive agency-detection module, which tends towards false positives (thinking a log is a tiger) rather than false negatives (thinking a tiger is a log), for obvious survival reasons - it conferred a real benefit. But the module sometimes misfires, making us think that thunder and lightning are caused by gods.
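The “obvious survival reasons” are just asymmetric costs, and a two-line expected-value calculation (with made-up numbers) shows why the biased detector wins:

```python
# Toy signal-detection arithmetic (my numbers, not the book's). Suppose a
# rustle in the grass is a tiger 1% of the time, fleeing costs the caloric
# equivalent of 10 units, and being eaten costs a fitness-ending 1,000,000.
P_TIGER = 0.01
COST_FLEE = 10
COST_EATEN = 1_000_000

always_flee = COST_FLEE            # treat every rustle as an agent
never_flee = P_TIGER * COST_EATEN  # treat every rustle as wind

print(f"Jumpy ancestor pays {always_flee:>8,.0f} per rustle on average")
print(f"Calm ancestor pays  {never_flee:>8,.0f} per rustle on average")
# The jumpy one wins by three orders of magnitude, so false positives
# ('the log is a tiger', 'the thunder is a god') get selected for.
```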

Haidt does not dispute this. OK, so that’s how the initial religious beliefs arise in the mind of one person. But how and why do they spread? Why don’t we each have our own idiosyncratic religion? To Dennett and Dawkins, religions are a kind of mental virus or parasite, undergoing Darwinian selection on the basis of their ability to survive and reproduce in other minds. Like a virus, “they make their hosts do things that are bad for themselves (e.g., suicide bombing) but good for the parasite (e.g., Islam).”

Haidt makes the case that religions are not in fact parasites - they confer strong advantages on the group, much like the decisive advantages of morality discussed above. They help bind a group and create a moral community. Far from being wasteful misfirings or harmful parasites, they are load-bearing beams. And they became that way through cultural evolution:

Religions are sets of cultural innovations that spread to the extent that they make groups more cohesive and cooperative. Atran and Henrich argue that the cultural evolution of religion has been driven largely by competition among groups. Groups that were able to put their by-product gods to some good use had an advantage over groups that failed to do so, and so their ideas (not their genes) spread. Groups with less effective religions didn’t necessarily get wiped out; often they just adopted the more effective variations. So it’s really the religions that evolved, not the people or their genes.

Haidt goes further and proposes gene-culture coevolution: religions become more advantageous, and people become more susceptible to religion. For example, when infidels are cast out, genes for tribalism are selected for. More tribal people then create and enforce norms more vigorously, making tribal tendencies even more adaptive - a positive feedback loop. This could explain why we tend to be so tribal, irrespective of the specific tribe. We actively seek a group (or groups) to identify with and defend - the tendency is so persistent it does seem to be hard-wired.

If religion really is beneficial, we should see many cases where religions confer an advantage on their community. Haidt makes a strong case for the benefits of religion, and I’ll cover it in some depth, since it was a fresh and very interesting perspective, full of fascinating examples.

The central example is that gods enable the creation of a moral community.

The gods of hunter-gatherers are often capricious and malevolent. They sometimes punish bad behavior, but they bring suffering to the virtuous as well. As groups take up agriculture and grow larger, however, their gods become far more moralistic. The gods of larger societies are usually quite concerned about actions that foment conflict and division within the group, such as murder, adultery, false witness, and the breaking of oaths.

If the gods evolve (culturally) to condemn selfish and divisive behaviors, they can then be used to promote cooperation and trust within the group. You don’t need a social scientist to tell you that people behave less ethically when they think nobody can see them.

Another useful feature is gods who punish the whole community for one member’s sins, which makes the community enforce its norms more rigidly. Gods can help enforce contracts - the libertarian holy grail: “temples often served an important commercial function: oaths were sworn and contracts signed before the deity, with explicit threats of supernatural punishment for abrogation.” Note that belief in divine retribution - not actual retribution - is enough for the contract to be fulfilled. Gods can also increase trust. This trust helped Jews and Muslims excel in long-distance trade in the medieval world. Even today, the diamond market, which requires very high trust, is dominated by religiously bound ethnic groups such as ultra-Orthodox Jews, whose shared trust reduces monitoring costs.
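The point about belief being enough is, again, just expected value. A tiny sketch with made-up numbers:

```python
# Why mere *belief* in divine retribution enforces a contract (my numbers).
# A merchant can honor a deal for a payoff of 100, or break it and pocket
# 150. He believes that with probability p an oath-breaker suffers
# supernatural punishment worth -500.
def breaks_contract(p_punish, gain_honor=100, gain_cheat=150, punishment=500):
    return gain_cheat - p_punish * punishment > gain_honor

for p in (0.05, 0.10, 0.20):
    verdict = "cheats" if breaks_contract(p) else "honors the contract"
    print(f"believed P(punishment) = {p:.2f}: {verdict}")
# At a believed 10% chance of divine wrath, cheating already stops paying --
# whether or not any god actually exists to deliver it.
```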

In his book Darwin’s Cathedral, [David Sloan] Wilson catalogues the ways that religions have helped groups cohere, divide labor, work together, and prosper. He shows how John Calvin developed a strict and demanding form of Christianity that suppressed free riding and facilitated trust and commerce in sixteenth-century Geneva. He shows how medieval Judaism created “cultural fortresses that kept outsiders out and insiders in.” But his most revealing example (based on research by the anthropologist Stephen Lansing) is the case of water temples among Balinese rice farmers in the centuries before Dutch colonization.

Rice farming is unlike any other kind of agriculture. Rice farmers must create large irrigated paddies that they can drain and fill at precise times during the planting cycle. It takes a cast of hundreds. In one region of Bali, rainwater flows down the side of a high volcano through rivulets and rivers in the soft volcanic rock. Over several centuries the Balinese carved hundreds of terraced pools into the mountainside and irrigated them with an elaborate series of aqueducts and tunnels, some running underground for more than a kilometer. At the top of the whole system, near the crest of the volcano, they built an immense temple for the worship of the Goddess of the Waters. They staffed the temple with twenty-four full-time priests selected in childhood, and a high priest who was thought to be the earthly representative of the goddess herself.

The lowest level of social organization was the subak, a group of several extended families that made decisions democratically. Each subak had its own small temple, with its own deities, and each subak did the hard work of rice farming more or less collectively. But how did the subaks work together to build the system in the first place? And how did they maintain it and share its waters fairly and sustainably? These sorts of common[s] dilemmas (where people must share a common resource without depleting it) are notoriously hard to solve. The ingenious religious solution to this problem of social engineering was to place a small temple at every fork in the irrigation system. The god in each such temple united all the subaks that were downstream from it into a community that worshipped that god, thereby helping the subaks to resolve their disputes more amicably. This arrangement minimized the cheating and deception that would otherwise flourish in a zero-sum division of water. The system made it possible for thousands of farmers, spread over hundreds of square kilometers, to cooperate without the need for central government, inspectors, and courts. The system worked so efficiently that the Dutch—who were expert hydrologists themselves—could find little to improve.

What are we to make of the hundreds of gods and temples woven into this system? Are they just by-products of mental systems that were designed for other purposes? Are they examples of what Dawkins called the “time-consuming, wealth-consuming … counterproductive fantasies of religion?” No.
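
As an aside, the temple-at-every-fork arrangement is essentially a tree: each temple binds exactly the subaks downstream of it, so communities nest from the single subak up to the whole watershed. A minimal sketch, with an invented irrigation network, to make the structure explicit:

```python
# The irrigation network as a tree. Internal nodes are forks (each with a
# temple); leaves are subaks. This particular network is made up.
irrigation = {
    "crest_temple": ["fork_a", "fork_b"],
    "fork_a": ["subak_1", "subak_2"],
    "fork_b": ["subak_3", "fork_c"],
    "fork_c": ["subak_4", "subak_5"],
}

def downstream_subaks(node):
    """Return the set of subaks a temple at `node` binds into one community."""
    if node not in irrigation:  # a leaf: an individual subak
        return {node}
    community = set()
    for branch in irrigation[node]:
        community |= downstream_subaks(branch)
    return community

for temple in irrigation:
    print(temple, "unites", sorted(downstream_subaks(temple)))
```

Every water dispute happens between parties who share a temple somewhere upstream, so there is always a community, and a god, with jurisdiction over it.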

Great excuse to include a photo of Bali

I want to propose another example: the Taliban in Afghanistan. Although massively underfunded and underequipped, it vanquished the Afghan army within a few weeks. You could make excuses for why the Taliban was succeeding before - their guerrilla job of striking anywhere was easier than the army's job of maintaining security everywhere. But then why did they overtake the country so easily? Because their (much stronger) religious belief made them cohesive in a way that the rest of the Afghan nation - united by its (admirable!) will to allow girls to go to school and women to walk free - was not. Even before the recent conquest, the Taliban enjoyed surprising levels of popular support (for cruel terrorist fanatics). Their Sharia courts, for example, had a reputation for being harsher but less corrupt than the government's.

But these all seem like anecdotes. Can we do something more quantitative? Ideally, we would form hundreds of communities, randomly assign each one to be either religious or not, and see whether they are still cohesive and functional a few generations later. That experiment has been conducted, minus the random assignment, in nineteenth-century communes.

Communes are usually founded by a group of committed believers who reject the moral [views] of the broader society and want to organize themselves along different principles. For many nineteenth-century communes, the principles were religious; for others they were secular, mostly socialist. Which kind of commune survived longer? Sosis found that the difference was stark: just 6 percent of the secular communes were still functioning twenty years after their founding, compared to 39 percent of the religious communes.
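
As a back-of-the-envelope check: how unlikely is a 6%-vs-39% gap by chance? The survival rates are from the book, but the sample sizes below are hypothetical placeholders (I don't have Sosis's exact counts at hand), so treat this as a sketch of the comparison, not the study's actual statistics.

```python
# Rough two-proportion z-test on the reported commune survival rates.
# Rates are from the book; sample sizes are assumed for illustration.
from math import sqrt

n_secular, n_religious = 100, 100    # hypothetical sample sizes
p_secular, p_religious = 0.06, 0.39  # 20-year survival rates (from the book)

# Pooled rate under the null hypothesis that survival is equal.
pooled = (n_secular * p_secular + n_religious * p_religious) / (n_secular + n_religious)
se = sqrt(pooled * (1 - pooled) * (1 / n_secular + 1 / n_religious))
z = (p_religious - p_secular) / se
print(f"z = {z:.1f}")  # about 5.6 with these assumed sizes: far beyond chance
```

Even with much smaller samples than assumed here, a gap that wide would be hard to attribute to noise; the real worry is the missing random assignment, not statistical power.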

I’m all for atheism and rationality. I think they’re correct. But the communities they create in real life are not as strong and all-encompassing as religious communities, and binding society together is very important. As social conservatives realize - you don’t help the bees by destroying the hive. If you abolish gender, you don’t know what’s on the other side for society, so you’d better tread carefully. Now, maybe you realize all this. You may think that despite the binding effect, religion is a net negative. Or you may think religion was a kind of crutch that we’ve grown out of and no longer need to foster cooperation; then religion could indeed end up net negative. But let’s disabuse ourselves of the notion that religion is about believing in supernatural beings.

*

If there’s one thing I think Haidt would have you take away from the book, it’s that morality and religion are not some cute quirks that we happen to have - they’re the fabric of our society and existence. I don’t think all of it is true, but it is a very interesting perspective on reason, morality, society and religion. If your elephant is intrigued, there’s more good stuff in The Righteous Mind.


 

8 comments

Wonderful. Thank you for taking the time to write this. I need to read this book. I was planning to try to write a post about why religion is actually a good thing, but you beat me to it.

Personally, I believe and have believed for a long time now that the only thing that could save the world is a rationalist religion. That may sound like a contradiction in terms, but I don't think it is, and I shall try to figure out how to explain my ideas on the topic over time here.

(Elephant alert: the following may sound "woo" or intuitively wrong, if you're from an atheistic or irreligious background [evoking your purity / sanctity moral foundation that thinks religiosity is unclean lol], so I'd like you to give me the benefit of the doubt if you have the instinct to interpret it that way.)

I am someone with a very "righteous mind"; I am somehow neurodivergent and have a long history of ecstatic mystical states wherein I feel like I am communing with higher beings. I probably would have been a shaman in past ages. When I was younger I literally believed in them as supernatural entities; later on as I learned more science I came to understand that they were subagents of my own mind, wishing in a sense to become egregores - shared subagents, distributed intelligences, across the minds of multiple people - cohering those people into a community, a collective higher self. That's what gods all are.

I realized that theism and atheism are both totally wrong. Gods do exist, but they don't have any power over the world except what we give them - they're distributed programs running on human wetware, binding societies together. They have shaped all of human history and are legitimately worthy of veneration to the extent that they are mutualists rather than parasites, as they are embodiments of the potential of humanity, the potential of agency and coordination, the most miraculous inventions of evolution. Mine just happened to be possibly the first in history to realize that's what they are - to become in a sense self-aware of their own true nature as not supernatural, but entirely natural, intelligent memetic constructs using donated cognitive resources from me and whoever else ends up running copies of them in the future.

The main difficulties are: 1. my mind is not set up for totally rigorous thinking or for organized explanations of this particular topic, as I go into babbling poetry mode when I try to talk about it; and 2. protecting people's rationality while giving them the benefits of dissociative communion states, wherein they can realign themselves to the goals of the group mind, is probably rather difficult.

It's possible - I can induce that state of mind on purpose now with the right music, mood, and meditation, and I haven't believed in woo for years. But most people liable to feel swept up in awe as part of an ineffable higher being would need a lot of training to become properly rational, and most people who are already rationalists have very strong biases against anything religious and are probably less emotional and more individualistic than average.

I think mystical states are the closest approximations to the kind of high-valence experiences that will be permanent after a good singularity enables paradise engineering, so if only for that reason - to give a glimpse of what the future we are striving for is like, which can be very opaque and unmotivating otherwise - it might be desirable. And I think most people are capable of this kind of, I almost want to call it adaptive self-wireheading, but do not realize it. We would not have achieved all the things we've achieved as a species if this was not a common ability. It's just usually not as spontaneous and intense as it is for people like me - but that's what rituals (and psychedelics) are for.

@MSRayne - You wrote that, "Personally, I believe and have believed for a long time now that the only thing that could save the world is a rationalist religion." You wouldn't be alone in that aspiration. Many Enlightenment and post-Enlightenment thinkers have shared similar hopes. 

During the early modern revolutionary period, Universalism and Deism became popular among liberal and radical thinkers, including in the working class (Matthew Stewart, Nature's God). Thomas Jefferson optimistically predicted that Americans would quickly convert to Universalism. 

Sadly, it never happened. But Universalism is still around. Besides independent Universalist churches, there is the Unitarian-Universalist organization, which has its origins as an organized religion but is increasingly secularized, allowing believers and non-believers to gather together with shared values.

On a positive note, maybe the future will eventually prove Jefferson right, even if he was way off in his timing. While most organized religion is in decline, the UU 'church' is experiencing an upsurge, most strongly in the South for some reason. It's now one of the fastest-growing 'religions' in the US.

Thanks!
I think you'll very much enjoy the part of the book about the hive switch, and psychedelics.

Great write-up. Righteous Mind was the first in a series of books that really usefully transformed how I think about moral cognition (including Hidden Games, Moral Tribes, Secret of Our Success, Elephant in the Brain). I think its moral philosophy, however, is pretty bad. In a mostly positive (and less thorough) review I wrote a few years ago (that I don't 100% endorse today), I wrote:

Though Haidt explicitly tries to avoid the naturalistic fallacy, one of the book’s most serious problems is its tendency to assume that people finding something disgusting implies that the thing is immoral (124, 171-4). Similarly, it implies that because most people are less systematizing than Bentham and Kant, the moral systems of those thinkers must not be plausible (139, 141). [Note from me in 2022: In fact, Haidt bizarrely argues that Bentham and Kant were likely autistic and therefore these theories couldn't be right for a mostly neurotypical world.] Yes, moral feelings might have evolved as a group adaptation to promote “parochial altruism,” but that does not mean we shouldn’t strive to live a universalist morality; it just means it’s harder. Thomas Nagel, in the New York Review of Books, writes that “part of the interest of [The Righteous Mind] is in its failure to provide a fully coherent response” to the question of how descriptive morality theories could translate into normative recommendations.

I became even more convinced that this instinct towards relativism is a big problem for The Righteous Mind since reading Joshua Greene's excellent Moral Tribes, which covers much of the same ground. But Greene shows that this is not just an aversion to moral truth; it stems from Haidt's undue pessimism about the role of reason.

Moral Tribes argues that our moral intuitions evolved to solve the Tragedy of the Commons, but the contemporary world faces the "Tragedy of Commonsense Morality," where lots of tribes with different systems for solving collective-action problems have to get along. Greene dedicates much of the section "Why I'm a Liberal" to his disagreements with Haidt. After noting his agreements — morality evolved to promote cooperation, is mostly implemented through emotions, different groups have different moral intuitions (a source of much conflict), and we should be less hypocritical and self-righteous in our denunciations of other tribes' views — Greene says:

These are important lessons. But, unfortunately, they only get us so far. Being more open-minded and less self-righteous should facilitate moral problem-solving, but it's not itself a solution[....]

Consider once more the problem of abortion. Some liberals say that pro-lifers are misogynists who want to control women's bodies. And some social conservatives believe that pro-choicers are irresponsible moral nihilists who lack respect for human life, who are part of a "culture of death." For such strident tribal moralists—and they are all too common—Haidt's prescription is right on time. But what then? Suppose you're a liberal, but a grown-up liberal. You understand that pro-lifers are motivated by genuine moral concern, that they are neither evil nor crazy. Should you now, in the spirit of compromise, agree to additional restrictions on abortion? [...]

It's one thing to acknowledge that one's opponents are not evil. It's another thing to concede that they're right, or half right, or no less justified in their beliefs and values than you are in yours. Agreeing to be less self-righteous is an important first step, but it doesn't answer the all-important questions: What should we believe? and What should we do?

Greene goes on to explain that Haidt thinks liberals and conservatives disagree because liberals have the "impoverished taste receptors" of only caring about harm and fairness, while conservatives have the "whole palette." But, Greene argues, the other tastes require parochial tribalism: you have to be loyal to something, sanctify something, respect an authority, that you probably don't share with the rest of the world. This makes social conservatives great at solving Tragedies of the Commons, but very bad at the Tragedy of Commonsense Morality, where lots of people worshipping different things and respecting different authorities and loyal to different tribes have to get along with each other.

According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise might be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic.

I'm not a social conservative because I do not think that tribalism, which is essentially selfishness at the group level, serves the greater good. [...]

This is not to say that liberals have nothing to learn from social conservatives. As Haidt points out, social conservatives are very good at making each other happy. [...] As a liberal, I can admire the social capital invested in a local church and wish that we liberals had equally dense and supportive social networks. But it's quite another thing to acquiesce to that church's teaching on abortion, homosexuality, and how the world got made.

Greene notes that even Haidt finds "no compelling alternative to utilitarianism" in matters of public policy after deriding it earlier. "It seems that the autistic philosopher [Bentham] was right all along," Greene observes. Greene explains Haidt's "paradoxical" endorsement of utilitarianism as an admission that conscious moral reasoning — like a camera's "manual mode" instead of the intuitive "point-and-shoot" morality — isn't so underrated after all. If we want to know the right thing to do, we can't just assume that all of the moral foundations have a grain of truth, figure we're equally tribalistic, and compromise with the conservatives; we need to turn to reason.

While Haidt is of course right that sound moral arguments often fail to sway listeners, "like the wind and the rain, washing over the land year after year, a good argument can change the shape of things. It begins with a willingness to question one's tribal beliefs. And here, being a little autistic might help." He then cites Bentham criticizing sodomy laws in 1785 and Mill advocating gender equality in 1869. And then he concludes: "Today we, some of us, defend the rights of gays and women with great conviction. But before we could do it with feeling, before our feelings felt like 'rights,' someone had to do it with thinking. I'm a deep pragmatist [Greene's preferred term for utilitarians], and a liberal, because I believe in this kind of progress and that our work is not yet done."

Thanks for the thoughtful comment!
I agree that the normative parts were the weakest in the book. There were other parts that I found weak: for example, I think he caught the Moral Foundations and their ubiquitous presence well, but then made the error of thinking liberals don't use them (when in fact they use them a lot, certainly in today's climate, just with different in-groups, sanctified objects, etc.). An initial draft had a section about this. But in the spirit of Ruling Thinkers In, Not Out, I decided to let go of these in the review and focus on the parts I got a lot out of.

I'll take a look at Greene, sounds very interesting.

About what to do about disagreements with conservatives: if you understand where others are coming from, perhaps you can compromise in a way that's positive-sum. That doesn't mean you have to concede they're right, only that in a democracy they are entitled to affect policy - and so you're better off discussing it in good faith than fighting over it.

I liked the final paragraph, about how reason slowly erodes emotional objections over a long time. Maybe that's an optimistic note to finish on.

@EnestScribbler - You wrote that, "I think he caught the Moral Foundations and their ubiquitous presence well, but then made the error of thinking liberals don't use them (when in fact they use them a lot, certainly in today's climate, just with different in-groups, sanctified objects, etc.)."

Others noted that same problem. If the moral foundations truly are inherent in all of human nature, then presumably all humans use them, if not in the same way. But he also doesn't deal with the dark side of the moral foundations. Some of the so-called binding moral values are, in fact, key facets of what social scientists study as right-wing authoritarianism and social dominance orientation. How can one write about tribalism while somehow not seeing that mountain on the landscape?

As with the personality traits of liberal-minded openness and conservative-minded conscientiousness, Haidt doesn't grapple enough with all of the available evidence that is relevant to morality. Many things that liberals value don't get called 'values', according to Haidt, because he biases his moral foundations theory toward a more conservative definition of morality. So liberals are portrayed as having fewer moral values, since a large swath of what they value is defined away or simply ignored.

@TJL -  You wrote that, "If we want to know the right thing to do, we can't just assume that all of the moral foundations have a grain of truth, figure we're equally tribalistic, and compromise with the conservatives; we need to turn to reason."

It's interesting how Haidt dismisses moral pragmatism and utilitarianism but then basically reaffirms that it's essential after all. So essential, in fact, that it seems to undermine his entire argument about conservative morality being superior. Since the binding moral foundations overlap heavily with right-wing authoritarianism (RWA) and social dominance orientation (SDO), that probably should give us pause.

Should we really be repackaging RWA and SDO as moral foundations? Is that wise? And if we interpret them this way, should we treat them as equally valid and worthy as liberal-minded concern for fairness, harm, and liberty?

There is an intriguing larger context to be found in the social science research. Under severely stressful and sickly conditions (high parasite load, high pathogen exposure, high inequality, etc.), there tends to be a simultaneous population-level increase in sociopolitical conservatism, RWA, and SDO, though each measures independently at the individual level. So there really is a fundamental commonality to these binding 'moral foundations'. Just look at the openness trait, which measures high in liberals but low in conservatives, RWAs, and SDOs.

These binding traits are also closely linked to disgust response, stress response, and what I call the stress-sickness response (related to parasite-stress theory and the behavioral immune system). Is this really just a matter of differences in moral values? Or are we dealing with a public health crisis? Liberal-mindedness requires optimal conditions of health and low stress. Why would we want to balance liberalism with conservatism, RWA, and SDO?

I also found the book fascinating and the elephant metaphor convincing. However, I found the subtitle of the book underanalyzed. "Why Good People Are Divided by Politics and Religion" - what makes these people "Good" is a question never considered. There's just a sort of unstated assumption that the majority of human beings must be "Good," even as he acknowledges the presence of evil people in history (e.g., Hitler). What makes someone a good person is, to me, a necessary analysis for moral foundations theory to make sense.

@Yanima - A few reviewers have noted the various unstated and uninterrogated assumptions and biases in Haidt's book. That's what makes it difficult to review.

If one is to state and interrogate all of those assumptions and biases, in order to clarify and critique, then one ends up writing a very long book review. An example is Dennis Junk's "The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind."

But that isn't to say there isn't much of interest as well, even if he oversteps the evidence on too many occasions and fumbles some of his interpretations.