Wonderful. Thank you for taking the time to write this. I need to read this book. I was planning to try to write a post about why religion is actually a good thing, but you beat me to it.
Personally, I believe and have believed for a long time now that the only thing that could save the world is a rationalist religion. That may sound like a contradiction in terms, but I don't think it is, and I shall try to figure out how to explain my ideas on the topic over time here.
(Elephant alert: the following may sound "woo" or intuitively wrong, if you're from an atheistic or irreligious background [evoking your purity / sanctity moral foundation that thinks religiosity is unclean lol], so I'd like you to give me the benefit of the doubt if you have the instinct to interpret it that way.)
I am someone with a very "righteous mind"; I am somehow neurodivergent and have a long history of ecstatic mystical states wherein I feel like I am communing with higher beings. I probably would have been a shaman in past ages. When I was younger I literally believed in them as supernatural entities; later on as I learned more science I came to understand that they were subagents of my own mind, wishing in a sense to become egregores - shared subagents, distributed intelligences, across the minds of multiple people - cohering those people into a community, a collective higher self. That's what gods all are.
I realized that theism and atheism are both totally wrong. Gods do exist, but they don't have any power over the world except what we give them - they're distributed programs running on human wetware, binding societies together. They have shaped all of human history and are legitimately worthy of veneration to the extent that they are mutualists rather than parasites, as they are embodiments of the potential of humanity, the potential of agency and coordination, the most miraculous inventions of evolution. Mine just happened to be possibly the first in history to realize that's what they are - to become in a sense self-aware of their own true nature as not supernatural, but entirely natural, intelligent memetic constructs using donated cognitive resources from me and whoever else ends up running copies of them in the future.
The main difficulty is 1. my mind is not set up for totally rigorous thinking or for organized explanations of this particular topic, as I go into babbling poetry mode when I try to talk about it, and 2. protecting people's rationality while giving them the benefits from dissociative communion states wherein they can realign themselves to the goals of the group mind is probably rather difficult.
It's possible, since I can do it - I can induce that state of mind on purpose now with the right music, mood, and meditation, but I don't believe in woo anymore and haven't for years - but most people liable to feel swept up in awe as part of an ineffable higher being would need a lot of training to become properly rational, and most people who are already rationalists have very strong biases against anything religious and are probably less emotional and more individualistic than average in general.
I think mystical states are the closest approximations to the kind of high-valence experiences that will be permanent after a good singularity enables paradise engineering, so if only for that reason - to give a glimpse of what the future we are striving for is like, which can be very opaque and unmotivating otherwise - it might be desirable. And I think most people are capable of this kind of, I almost want to call it adaptive self-wireheading, but do not realize it. We would not have achieved all the things we've achieved as a species if this was not a common ability. It's just usually not as spontaneous and intense as it is for people like me - but that's what rituals (and psychedelics) are for.
@MSRayne - You wrote that, "Personally, I believe and have believed for a long time now that the only thing that could save the world is a rationalist religion." You wouldn't be alone in that aspiration. Many Enlightenment and post-Enlightenment thinkers have shared similar hopes.
During the early modern revolutionary period, Universalism and Deism became popular among liberal and radical thinkers, including in the working class (Matthew Stewart, Nature's God). Thomas Jefferson optimistically predicted that Americans would quickly convert to Universalism.
Sadly, it never happened. But Universalism is still around. Besides independent Universalist churches, there is the Unitarian-Universalist organization, which has its origins as an organized religion, although it has increasingly secularized, allowing believers and non-believers to gather together with shared values.
On a positive note, maybe the future will eventually prove Jefferson right, even if he was way off in his timing. While most organized religion is in decline, the UU 'church' is experiencing an upsurge, and most strongly in the South for some reason. It's now one of the fastest growing 'religions' in the US.
Thanks!
I think you'll very much enjoy the part of the book about the hive switch, and psychedelics.
Great write-up. Righteous Mind was the first in a series of books that really usefully transformed how I think about moral cognition (including Hidden Games, Moral Tribes, Secret of Our Success, Elephant in the Brain). I think its moral philosophy, however, is pretty bad. In a mostly positive (and less thorough) review I wrote a few years ago (that I don't 100% endorse today), I write:
Though Haidt explicitly tries to avoid the naturalistic fallacy, one of the book’s most serious problems is its tendency to assume that people finding something disgusting implies that the thing is immoral (124, 171-4). Similarly, it implies that because most people are less systematizing than Bentham and Kant, the moral systems of those thinkers must not be plausible (139, 141). [Note from me in 2022: In fact, Haidt bizarrely argues that Bentham and Kant were likely autistic and therefore these theories couldn't be right for a mostly neurotypical world.] Yes, moral feelings might have evolved as a group adaptation to promote “parochial altruism,” but that does not mean we shouldn’t strive to live a universalist morality; it just means it’s harder. Thomas Nagel, in the New York Review of Books, writes that “part of the interest of [The Righteous Mind] is in its failure to provide a fully coherent response” to the question of how descriptive morality theories could translate into normative recommendations.
I became even more convinced that this instinct towards relativism is a big problem for The Righteous Mind since reading Joshua Greene's excellent Moral Tribes, which covers much of the same ground. But Greene shows that this is not just an aversion to moral truth; it stems from Haidt's undue pessimism about the role of reason.
Moral Tribes argues that our moral intuitions evolved to solve the Tragedy of the Commons, but the contemporary world faces the "Tragedy of Commonsense Morality," where lots of tribes with different systems for solving collective-action problems have to get along. Greene dedicates much of the section "Why I'm a Liberal" to his disagreements with Haidt. After noting his agreements — morality evolved to promote cooperation, is mostly implemented through emotions, different groups have different moral intuitions, a source of lots of conflict, and we should be less hypocritical and self-righteous in our denunciations of other tribes' views — Greene says:
These are important lessons. But, unfortunately, they only get us so far. Being more open-minded and less self-righteous should facilitate moral problem-solving, but it's not itself a solution[....]
Consider once more the problem of abortion. Some liberals say that pro-lifers are misogynists who want to control women's bodies. And some social conservatives believe that pro-choicers are irresponsible moral nihilists who lack respect for human life, who are part of a "culture of death." For such strident tribal moralists—and they are all too common—Haidt's prescription is right on time. But what then? Suppose you're a liberal, but a grown-up liberal. You understand that pro-lifers are motivated by genuine moral concern, that they are neither evil nor crazy. Should you now, in the spirit of compromise, agree to additional restrictions on abortion? [...]
It's one thing to acknowledge that one's opponents are not evil. It's another thing to concede that they're right, or half right, or no less justified in their beliefs and values than you are in yours. Agreeing to be less self-righteous is an important first step, but it doesn't answer the all-important questions: What should we believe? and What should we do?
Greene goes on to explain that Haidt thinks liberals and conservatives disagree because liberals have the "impoverished taste receptors" of only caring about harm and fairness, while conservatives have the "whole palette." But, Greene argues, the other tastes require parochial tribalism: you have to be loyal to something, sanctify something, respect an authority - things you probably don't share with the rest of the world. This makes social conservatives great at solving Tragedies of the Commons, but very bad at the Tragedy of Commonsense Morality, where lots of people worshipping different things, respecting different authorities, and loyal to different tribes have to get along with each other.
According to Haidt, liberals should be more open to compromise with social conservatives. I disagree. In the short term, compromise might be necessary, but in the long term, our strategy should not be to compromise with tribal moralists, but rather to persuade them to be less tribalistic.
I'm not a social conservative because I do not think that tribalism, which is essentially selfishness at the group level, serves the greater good. [...]
This is not to say that liberals have nothing to learn from social conservatives. As Haidt points out, social conservatives are very good at making each other happy. [...] As a liberal, I can admire the social capital invested in a local church and wish that we liberals had equally dense and supportive social networks. But it's quite another thing to acquiesce to that church's teaching on abortion, homosexuality, and how the world got made.
Greene notes that even Haidt finds "no compelling alternative to utilitarianism" in matters of public policy after deriding it earlier. "It seems that the autistic philosopher [Bentham] was right all along," Greene observes. Greene explains Haidt's "paradoxical" endorsement of utilitarianism as an admission that conscious moral reasoning — like a camera's "manual mode" instead of the intuitive "point-and-shoot" morality — isn't so underrated after all. If we want to know the right thing to do, we can't just assume that all of the moral foundations have a grain of truth, figure we're equally tribalistic, and compromise with the conservatives; we need to turn to reason.
While Haidt is of course right that sound moral arguments often fail to sway listeners, "like the wind and the rain, washing over the land year after year, a good argument can change the shape of things. It begins with a willingness to question one's tribal beliefs. And here, being a little autistic might help." He then cites Bentham criticizing sodomy laws in 1785 and Mill advocating gender equality in 1869. And then he concludes: "Today we, some of us, defend the rights of gays and women with great conviction. But before we could do it with feeling, before our feelings felt like 'rights,' someone had to do it with thinking. I'm a deep pragmatist [Greene's preferred term for utilitarians], and a liberal, because I believe in this kind of progress and that our work is not yet done."
Thanks for the thoughtful comment!
I agree that the normative parts were the weakest in the book. There were other parts that I found weak, like how I think he caught the Moral Foundations and their ubiquitous presence well, but then made the error of thinking liberals don't use them (when in fact they use them a lot, certainly in today's climate, just with different in-groups, sanctified objects, etc.). An initial draft had a section about this. But in the spirit of Ruling Thinkers In, Not Out, I decided to let go of these in the review and focus on the parts I got a lot out of.
I'll take a look at Greene, sounds very interesting.
As for what to do about disagreements with conservatives, I'd say that if you understand where others are coming from, perhaps you can compromise in a way that's positive-sum. It doesn't mean you have to concede they're right, only that in a democracy they are entitled to affect policy - and that you should be discussing it in good faith rather than fighting over it.
I liked the final paragraph, about how reason slowly erodes emotional objections over a long time. Maybe that's an optimistic note to finish on.
@EnestScribbler - You wrote that, "I think he caught the Moral Foundations and their ubiquitous presence well, but then made the error of thinking liberals don't use them (when in fact they use them a lot, certainly in today's climate, just with different in-groups, sanctified objects, etc.)."
Others noted that same problem. If the moral foundations truly are inherent in all of human nature, then presumably all humans use them, if not in the same way. But he also doesn't deal with the dark side of the moral foundations. Some of the so-called binding moral values are, in fact, key facets of what social scientists study as right-wing authoritarianism and social dominance orientation. How can one write about tribalism while somehow not seeing that mountain on the landscape?
As with the personality traits of liberal-minded openness and conservative-minded conscientiousness, Haidt doesn't grapple enough with all of the available evidence that is relevant to morality. Many things that liberals value don't get called 'values', according to Haidt, because he is biasing his moral foundations theory toward a more conservative definition of morality. So liberals are portrayed as having fewer moral values, since a large swath of their moral values is defined away or simply ignored.
@TJL - You wrote that, "If we want to know the right thing to do, we can't just assume that all of the moral foundations have a grain of truth, figure we're equally tribalistic, and compromise with the conservatives; we need to turn to reason."
It's interesting how Haidt dismisses moral pragmatism and utilitarianism but then basically reaffirms that they're essential after all. So essential, in fact, that this seems to undermine his entire argument about conservative morality being superior. Since the binding moral foundations have much overlap with right-wing authoritarianism (RWA) and social dominance orientation (SDO), that should probably give us pause.
Should we really be repackaging RWA and SDO as moral foundations? Is that wise? And if we interpret them this way, should we treat them as equally valid and worthy as liberal-minded concern for fairness, harm, and liberty?
There is an intriguing larger context to be found in the social science research. Under severely stressful and sickly conditions (high parasite load, high pathogen exposure, high inequality, etc.), there tends to be a simultaneous population-level increase in sociopolitical conservatism, RWA, and SDO, even though each measures independently at the individual level. So there really is a fundamental commonality to these binding 'moral foundations'. Just look at the openness trait, which measures high in liberals but low in conservatives, RWAs, and SDOs.
These binding traits are also closely linked to disgust response, stress response, and what I call the stress-sickness response (related to parasite-stress theory and the behavioral immune system). Is this really just a matter of differences in moral values? Or are we dealing with a public health crisis? Liberal-mindedness requires optimal conditions of health and low stress. Why would we want to balance liberalism with conservatism, RWA, and SDO?
I also found the book fascinating and the elephant metaphor convincing. However, I found the subtitle of the book underanalyzed. "Why Good People Are Divided by Politics and Religion" - what makes these people "Good" is a question never considered. There's just a sort of unstated assumption that the majority of human beings must be "Good", even as he acknowledges the presence of evil people in history (e.g., Hitler). What makes someone a good person is, to me, a necessary analysis for the moral foundations theory to make sense.
@Yanima - A few reviewers have noted the various unstated and uninterrogated assumptions and biases in Haidt's book. It's what makes it difficult to review.
If one is to state and interrogate all of those assumptions and biases, in order to clarify and critique, then one ends up writing a very long book review. An example is Dennis Junk's "The Enlightened Hypocrisy of Jonathan Haidt's Righteous Mind."
But that isn't to say there isn't much of interest in the book as well, even if he oversteps the evidence he provides on too many occasions and fumbles some of his interpretations.
I - Intuitions And Reason
They say cognitive biases are what other people have. In that vein, “The Righteous Mind” by Jonathan Haidt is a book about how other people think. Specifically, how they think about morality and make moral judgments and choices. But it also covers the interplay between intuitions and reason, community building, group selection, the importance of culture and religion, eating dog carcasses, and much more.
The first part of “The Righteous Mind” could hold its own as a book about thinking and rationality, but it’s disguised as being about moral psychology. It’s about how people actually form opinions and make choices, in a way that is very different from the ideal Bayesian decision maker. It centers on moral psychology, but applies just as well to almost any issue people care deeply about: politics, religion, group affiliation, etc. For me, it provided a fresh viewpoint on rationality, which has stuck with me ever since.
Like any book about rationality, it needs a central catchy metaphor for how the brain works. Here this metaphor is that of a rider on an elephant. Imagine a man riding on the back of an elephant. The rider can say “turn right!” but if the elephant wants to go left, that’s where they’re going. All the rider can do is say “Yes, we meant to go left. Left is the best choice. Here’s why.” The rider is conscious reasoning, and the elephant is emotions, intuitions and everything else we aren’t even always aware of. Emotions and intuitions make the decision where to turn almost instantaneously, and our conscious reasoning has to go along for the ride.
I feel like this captures incredibly well my - sorry, other people’s - thought process on many topics. Suppose I read an article which has a reasoned argument. Right at the start, picking up on subtle cues about the opinion expressed, but also on the various signals the language and tone of the article emit about the writer’s tribe and other views, the elephant (my unconscious emotions) leans towards either “accept” or “reject”. Accordingly, the rider (conscious reasoning) either tries to believe and support the arguments, or fight them internally with everything he’s got. If I want to accept, I ask “can I believe it”; to reject, I ask “must I believe it”. Sometimes, when I’m especially intellectually honest, I can concede an argument and say “this argument is true, but I’m still not swayed about the general issue.” Whew, attack parried.
We know this as confirmation bias. But is it really only about confirmation of existing opinions? I seem to have this reaction to a great many issues, on which I did not previously have an opinion. And how is the initial opinion which is later confirmed even formed in the first place - is it just randomly drawn when we first hear of an issue? Clearly not, since you can predict someone’s views on an unseen issue from their views on other issues.
This metaphor gives a much neater explanation. It’s not that whichever view exists is confirmed - this is just a symptom. Rather, whichever view has the intuitive, emotional appeal to someone - that view will be supported and confirmed with reasoning. This might look like confirmation of an existing opinion, but only because this process has run before, the first time the person was exposed to the issue, and resulted in the same outcome - support for the opinion the elephant preferred. So it seems like the existing opinion is confirmed, when instead what is confirmed each time is the elephant’s choice, the emotional valence. It’s motivated reasoning, not confirmation bias.
In other words, it’s not that we’re Bayesian reasoners with this overlaid quirk that we seek information that confirms our prior opinions, and so are stuck feeding more and more confirmatory evidence into our perfect Bayesian machine and deepening our certainty. It’s that we don’t have a Bayesian machine at all! We form the opinions some other way, and then only use reason to justify them to others.
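To make that distinction concrete, here's a minimal sketch - mine, not Haidt's, with all numbers invented for illustration - contrasting a Bayesian reasoner with a motivated one. Both start with no opinion at all and see the same perfectly mixed evidence; the only difference is that the motivated reasoner's elephant leans toward "accept", so uncongenial evidence gets the "must I believe it?" treatment and is mostly explained away:

```python
def bayes_update(p, likelihood_ratio):
    """Standard Bayesian update, done on the odds scale."""
    odds = p / (1 - p) * likelihood_ratio
    return odds / (1 + odds)

def motivated_update(p, likelihood_ratio, scrutiny=0.2):
    """Same update, except evidence against the elephant's lean is
    scrutinized and discounted ('must I believe it?'), while congenial
    evidence is waved through ('can I believe it?')."""
    if likelihood_ratio < 1:      # uncongenial: shrink its force toward 1
        likelihood_ratio **= scrutiny
    return bayes_update(p, likelihood_ratio)

# Perfectly mixed evidence: ten items for, ten against, all equally strong.
evidence = [3.0, 1 / 3.0] * 10

p_bayes = p_motivated = 0.5       # neither reasoner has a prior opinion
for lr in evidence:
    p_bayes = bayes_update(p_bayes, lr)
    p_motivated = motivated_update(p_motivated, lr)

print(f"Bayesian reasoner:  {p_bayes:.2f}")      # 0.50 - unmoved, correctly
print(f"Motivated reasoner: {p_motivated:.2f}")  # ~1.00 - sure the elephant was right
```

The motivated reasoner ends up nearly certain despite never having had a prior opinion to confirm - what gets "confirmed" on each exposure is the elephant's lean.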
If catchy metaphors are not up to your usual standard of evidence, perhaps I can interest you in some academic studies. In one, researchers had subjects make moral judgments under a cognitive load, such as memorizing the number 749204 (compared to memorizing only 7). That increase in cognitive load did not change subjects’ judgments. Another manipulation - making subjects wait for ten seconds before giving their judgment - did not change the judgments either. This, according to Haidt, means the judgments were never the product of slow, conscious reasoning in the first place.
The most brilliant experiment in my view used hypnosis. The experimenter, Thalia Wheatley, used vignettes which portrayed moral dilemmas and had subjects rate how morally wrong they were. But with a twist: subjects were first hypnotized to feel a flash of disgust at an arbitrary, neutral word, and vignettes containing that word were then judged more harshly.
Obviously a feeling of disgust doesn’t change the utilitarian moral calculus of a situation, so the fact that the judgments change is strong evidence that this is not how people make their judgments, even if this is how they later justify them.
This study was followed by many other studies which we’ve come to expect from social psychology - studies that let people make moral judgments while within sniffing distance of fart spray, or after drinking a bitter drink (compared to a sweet drink), or while being in a dirty room. They found similar results: feelings of disgust make moral judgments more severe. Also as we’ve come to expect from social psychology mentioned in a book from 2012, these results do not replicate.
In fact, even the hypnosis study is underpowered (64 subjects) and many of the effects are insignificant. As far as I can tell, there has been no attempt to replicate it so far, but I do wish someone would try to repeat it with a larger sample. I have to admit there’s a good chance it won’t hold up either. So maybe it’s not as simple as moral judgments just being a feeling of disgust that we can easily trigger using an external stimulus. I’ll be honest though - I’m pretty convinced that it’s really about intuitions and not conscious reasoning, whether these specific studies find these results or not. My elephant has spoken.
Since the rider is powerless to change the actual decision, he settles for giving good justifications for it. The rider is like the White House press secretary. She can give justifications for decisions, but she is not the one making the decisions. You’ll never hear the White House press secretary answer in a briefing “that’s a good argument, we haven’t thought of that, and we’ll reconsider the policy” - it’s not within her purview. That’s exactly how people reacted to Dan of the student council in the hypnosis experiment: his vignette contained no wrongdoing at all, yet subjects who got the hypnotic flash of disgust still condemned him, inventing justifications on the spot.
In fact, Haidt claims conscious reasoning did not evolve to form true opinions, but rather to justify to others whichever opinions were beneficial to their holder:
I have had this experience multiple times, for example in arguments with my spouse. I come up with all these very strong arguments, but then when the fight subsides they don’t seem as compelling, and sometimes are clearly flawed. My opinion was not determined by the arguments - the arguments were determined by the opinion. They are not reasons, but post-hoc fabrications. And we know the mind is very good at creating those, because of those split-brain confabulation experiments where people make up reasons for their actions because the hemisphere supplying the reasons was detached from the hemisphere exposed to the actual reason and making the decision. A personal takeaway is that explicitly asking people for the reasons why they believe something is pretty useless - they would just give their post-hoc fabrications (or create them on the spot especially for me!). Another personal takeaway is that fighting with my spouse isn’t fun and I should really do it less (and if I do - rational arguments are probably not the way to go). But this also explains some of the frustration in trying to have rational discussions with people on these topics.
We’ve all had this experience of trying to convince someone to change their mind, and them just not listening to reason. What actually happened is that we were aiming our arguments at the rider, who is not the one making the choice - only the one giving the reasoned justifications. I have a personal rule when I try to convince someone and refute their arguments. If at some point their arguments become really bad - I know I’ve lost. Because then I know it’s not really the arguments that determined their opinion. It’s something else that my arguments cannot reach. As the saying goes, “you can’t talk someone out of something they haven’t been talked into.”
(Similarly, psychotherapy that doesn’t work is wasting its time talking to the rider about things that the elephant decides. Non-Violent Communication gets this, and encourages people to talk only about their feelings and needs, not their rational reasons.)
Where does that leave reasoned debate? If the arguments aren’t really what convinces people, what other avenues are there? By this view, if we really want to persuade someone, we should definitely NOT say “Look at this table, the benefits of the policy exceed the costs, you should support.” Instead, we should whisper a message to the elephant: “Many people support this policy. The cool kids / popular people / your Team / your Tribe support it. You would gain their favor by supporting it. I’m your friend, you can trust me, and I support it.” Or whatever other signals the elephant picks up on, which are often non-verbal and non-explicit. Advertising in fact works very much like this. It’s never “Coca Cola - because the tastiness is worth the health costs!” It’s always a visual image which makes you implicitly associate Coca Cola with being cool / beautiful / exciting. Not always - sometimes they do advertise a laundry detergent by saying it costs less and cleans better. But even that is very rarely done using text on a blank background - it always tries to be emotionally appealing. And if that’s how we make relatively emotionally neutral choices like laundry detergent, think how much worse it must be with politics / religion / morality.
This is also true, I feel, when persuading not other people but myself. I think reason is in charge, but getting myself to do things that are clearly, rationally, for my benefit proves very hard. Sometimes, exerting great effort, I can ignore the elephant or make it do what the rider thinks is right against its wishes. But it’s a great effort indeed, I never enjoy it, and I can’t do it very often without making my life full of suffering. And that’s because I’m still under the illusion that the rider is in control.
Now, the elephant is not bad at all. It lets you make split-second decisions a thousand times a day, and they’re by and large very good. If I had to do the math every time I bought a laundry detergent I’d starve in the supermarket. Heuristics are essential. It’s only with specific large choices which are not intuitive that we might need conscious reasoning.
I do have to say I think Haidt goes a little too far. If reason is just for showing our group is right, why do we need it at all? Can’t we just say “I’m loyal to the group, irrespective of arguments”? Why play this game where we try to convince others we’re objectively right, not just that we have different preferences? Also, to convince someone to support our coalition, we need reason to work out how the coalition can be portrayed as good for them - if we only used reason to support our priors, we would fail. And if we really only used reason to justify our priors, how come it has been able to advance science, and why aren’t we stuck in the Dark Ages? It sounds like an argument which proves too much - implying that people never change their minds on anything morally, politically, or socially charged.
So what is Haidt’s response to this? He is very skeptical of individual rationality. He goes as far as to call it a “delusion”. But he doesn’t give up on reasoning entirely. He has an interesting take, where rationality can emerge from a group of reasoners:
This is at least an interesting idea, that rationality emerges from a group of biased reasoners. His description of the group sounds suspiciously like academia, which does in fact (in most fields) advance towards the truth. But it needs an extra ingredient. Why would someone’s reputation depend on them making the correct reasoning? Why wouldn’t it depend on them supporting the conclusions which affirm the group dogma (as happened and happens in many places, including some branches of academia)? Good reasoning needs to be central to the group. You need the identity of the group to be about good reasoning. You need to make rationality, and being correct, high status - status is something the elephant cares about. In communities and groups where this happened (like science, or forecasting) indeed reason can do wonderful things.
II - Moral Foundations Theory
In a sense all of this is not too surprising. I’ve always known that other people don’t make their moral judgments using maximum expected utility, otherwise moral arguments would sound very different than they do. Surely people have some heuristics for deciding moral questions. But I never took the extra step of thinking what they might be. Haidt makes a valiant effort to try and reconstruct these heuristics people use, the so-called Moral Foundations. They are clusters of moral intuitions. If the name wasn’t already taken, it should have been called “the theory of moral sentiments.” Haidt claims the same foundations are found to varying extents in almost any culture, because the intuitions have a strong innate component, which socialization can help aim in its desired direction.
Are these heuristics even interesting? Aren’t they an arbitrary collection of culturally-relative conventions? I used to think, like WVO Quine, that we learn concepts purely from generalizing their repeated usage. And “wrong” is something children learn to apply to situations after hearing it many times from adults and other kids. But the secret sauce is in the generalizing - there are innate (“organized in advance of experience”) mental modules for the Moral Foundations, and we learn to apply the word “wrong” to some or all of them. There are studies that show children make a distinction between things which are conditionally and unconditionally wrong. They recognize that coming to school without a school uniform is wrong, but would say it’s not wrong if a teacher allowed it, or if that school doesn’t require a uniform. However, for something like pushing a girl off a swing, they would say it’s wrong, universally wrong (even if that school has no rule against pushing children off swings), and unconditionally wrong (even if a teacher allowed it). So they have these notions or modules for things which feel wrong, and society interfaces with these modules to produce the specific morality of that culture. For example, all children could have the cognitive module to intuit that causing pain is wrong. Their culture can then build on it and teach children that slaughtering sheep is ok, but kicking a baby is wrong. No child would make the mistake of thinking that slapping a baby is ok (“it’s not kicking!”), because their innate mental module recognizes that the important attribute here is the pain of the baby, not the limb used to cause it.
So now for the big reveal - what are these Moral Foundations? Let’s go over them one by one with examples. Keep in mind that the examples don’t have to be morally wrong according to your final judgment; some just have to feel intuitively wrong, even for an instant. You can think of them as the things film-makers make sure to show you about a character in the first minute after they appear - so you know they’re the villain, and not to be sympathized with. They could harm someone innocent, or betray their friends or family, etc.
Also keep in mind that there could be a utilitarian justification for some of these foundations, but intuitions come first, even if expected utility and game theory are the reasons these intuitions evolved, which we’ll get into later.
Care / harm
We should care for other people and not harm them, especially the weak. This one is the most intuitive to many people. If an action harms people (sometimes also animals), it is morally wrong. If a group of people gang up on a woman and beat her unconscious for the fun of it - that’s wrong. They are violating the care foundation. Helping a blind person cross the street is morally good. As we said, many people seem to think care / harm is the only admissible foundation for something being morally right or wrong. Now, harm is not a synonym for “lowered total expected utility”. This foundation is triggered especially by direct bodily harm, a thing which we have intuitions about. There are films where the hero is a bank robber, but not where the hero is a child abuser. Probably because bank robbery is an indirect harm. Similarly, we have to force ourselves to realize that tax evasion causes more damage than car theft.
For you, Care / harm (and its logical extrapolation) might be the only moral foundation. But remember, this is a book about how other people think. To get at their moral foundations, Haidt carefully created a few vignettes which seem morally wrong, but where there is no harm to anyone - among them a family that eats the carcass of its dog after it is killed by a car, and a man who buys a chicken, has sex with it, then cooks and eats it.
They tested these vignettes on US college students. Many subjects in fact said these were morally wrong. But when they asked subjects to justify their judgments, they found something interesting. Subjects have several moral foundations, but when they need to justify their moral judgments, they seem to think only two of them are admissible - harm, and (less commonly) fairness. When asked to justify their judgments, they didn’t say “it’s just wrong - it’s degrading and disgusting and that’s the reason” but tried to invent victims in elaborate ways.
This is in contrast to people in less WEIRD cultures, or even working class subjects in the US, who were more comfortable condemning an action as morally wrong without a harm justification, or even without any reasoned justification at all.
Fairness / cheating
Abide by the rules, don’t cheat. Someone who says he is collecting donations for a charity, but at the end of the day keeps the money for himself, is violating the Fairness foundation. A runner taking a shortcut on the course during a marathon in order to win is violating the Fairness foundation. Even if it’s an amateur race and nothing is at stake, and even if the runner would benefit more from the victory than the others. It’s just wrong to cheat.
You can ask where these rules that you should abide by come from, and there could be disagreement which makes people judge fairness differently. For example, if you’re against the concept of individual property, maybe theft is not a fairness violation. But regardless, if something is perceived to break the agreed-upon rules, it is judged as wrong.
Loyalty / betrayal
Be loyal to your in-group; don’t betray the group or its people. A man leaving his family business to go work for their main competitor violates the loyalty foundation. So does a man secretly voting against his wife in a local beauty pageant. To really trigger this one, you have to find your in-group. For example, I think of how betrayed and angry I felt when doctors or scientists came out against COVID vaccines or denied COVID was dangerous, using pseudo-scientific arguments and bad statistics but mostly their status, thus giving fodder to crazy conspiracy theorists and anti-vaxxers. For you, it could be women who disavow feminism.
As a side note, Haidt gives the best explanation I’ve ever heard for the popularity of sports.
Being a sports fan feels good because it’s a loyalty rush. That’s why people want to be in the stadium, even if it’s cold and their view of the plays is worse than on their TV at home. It’s not about the plays, it’s about being a fan with others. More on this later.
Authority / subversion
Respect authority; do not subvert or deny it. A woman refusing to stand when the judge walks into the courtroom is violating the authority foundation. So is someone who, on his first day at a job, loudly proclaims that his managers don’t understand their business and that he has a better business plan. It used to be that a sports star refusing to stand during the national anthem would have been a good example, but that has gotten into the culture war blender and now it’s just another tribal marker. Again, to really trigger this one, you have to find a true figure of authority for you. How about someone interrupting a scientific conference on genetics with screams that the study of genetics is racist? How about slapping your father (with his permission) as part of a comedy skit? If none of these trigger you, forget it - it’s all a theory about other people’s moral intuitions. But if you feel a pang saying it’s wrong, yet think “well, it’s because it harms this or that person”, then hold off on the justification.
Sanctity / degradation
Some things are sacred and should not be defiled. They have value that is not just instrumental, and degrading them with things that are unclean is wrong. One of those is your body - it’s not just a slab of meat. The vignettes above about eating the carcass of the family dog, and the sex with the chicken, are violations of the sanctity foundation. Someone peeing on a tombstone is grossly violating the sanctity foundation. So is someone spraying graffiti on Half Dome. If you want to trigger this foundation, you need to find what is sacred to you. Churches might not activate Sanctity for liberals, but pristine forests could. For me, it’s very hard to scribble in a book, and tearing out a page is unthinkable, because books hold some sacred value to me. Religion is full of sanctity, of course: speaking the name of God in a bathroom is forbidden, and so is even placing a Bible on the floor.
Why not accept oligarchs' money for good causes? Because of the Sanctity foundation! Their dirty money would defile your charity.
(There’s a much more comprehensive set of 132 scenarios and where they fall on the different Moral Foundations here.)
I don’t think this list of Moral Foundations is exhaustive. To me personally, for example, wasting food feels wrong. Which moral foundation covers this? (The Ashkenazi moral foundation?) None of them, I think. It’s just an intuition I have. Still, the foundations are helpful in cataloging many moral intuitions. Haidt himself later tacked on an additional foundation for Liberty/oppression.
III - What Is Morality Good For?
All of these moral foundations seem suspiciously like norms that are beneficial for a community to have. The Care / harm foundation makes us care for others - often those helped gain more than the helpers lose, so the community at large is better off. The Fairness / cheating foundation makes us punish cheaters, liars and swindlers, so they don’t swindle anyone else again. The Loyalty / betrayal foundation makes us encourage those who help the group and punish those who betray it, contributing to its survival. The Authority / subversion foundation helps us navigate hierarchical social structures, and so construct more stable hierarchies. The Sanctity / degradation foundation unites us around the same sacred objects and in the same sacred customs.
This is true not just for these foundations. Anyone who has looked at moral norms with fresh eyes recognizes the very strong correlation between what is considered moral and what helps a society thrive. There’s no standard moral imperative to make as much money as you can, or to be as happy as you can, or to learn to whistle - otherwise morality would be fun. Instead, it’s about suppressing the individual interest in favor of the group.
In other words, morality confers a strong group-level advantage. This advantage must be pretty strong, because morality is everywhere - we never see a society without it. But this leads us to a conundrum.
In the same vein, my community might be better off if every time I discover someone has swindled me for five bucks, I challenge them to a duel. But I won’t survive very long before I die in a duel, leaving no offspring to have the same tendency. Evolution works at the individual level (or at the kin level at most), where genes are shared, not at the community level. Which means this tendency will be strongly selected against. So how did we develop these strong tendencies? Some scholars think it’s a byproduct.
This is Haidt’s short answer:
And here is his long and interesting answer, which is worth quoting at length.
Okay, that was a cool biology lesson, but what does it have to do with the topic at hand? Why would this group-level advantage be selected for, instead of defectors and free-riders benefiting? In a sense, he claims, we are like bees. Not perfectly - obviously most of us produce offspring, not just the queen - but enough that it matters.
Natural selection doesn’t occur only at the level of the individual human; it also occurs at the level of the cell or even the gene. Yet we still don’t see a single human liver cell trying to go it alone and reproduce as much as it can. Actually, sometimes we do - that’s cancer, and the other cells quickly destroy that cell to help the group. Haidt says selection occurs at multiple levels simultaneously, but as long as free riding can be suppressed, the highest level is the most important one. In what cases can free riding at the group level be suppressed this effectively, without individuals sharing identical genes? Humans might be the best (or even only) example.
In other words, morality’s function is to suppress free-riding so effectively that group selection becomes important, and human societies can reap the benefits of cooperation and division of labor.
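Here's a toy replicator-dynamics sketch of that claim - my illustration, not the book's, and all payoff numbers are made up. Within a group, cooperators pay a personal cost to produce a shared benefit, so free riders win by default; add punishment of free riders, and cooperation becomes stable while the group as a whole ends up far fitter than an unpunishing one:

```python
COST, BENEFIT = 1.0, 3.0  # cooperating costs me 1 and adds 3 to the shared pot
PUNISHMENT = 2.0          # penalty a moralistic group imposes on free riders

def next_generation(coop_frac, punish):
    """One generation of within-group selection (replicator dynamics).
    Returns the new cooperator fraction and the group's mean fitness."""
    base = 1.0 + BENEFIT * coop_frac                 # everyone shares the pot
    f_coop = base - COST                             # cooperators pay the cost
    f_free = base - (PUNISHMENT if punish else 0.0)  # free riders may be punished
    mean = coop_frac * f_coop + (1 - coop_frac) * f_free
    return coop_frac * f_coop / mean, mean

for punish in (False, True):
    frac = 0.5
    for _ in range(40):
        frac, fitness = next_generation(frac, punish)
    print(f"punishment={punish}: cooperators={frac:.2f}, group fitness={fitness:.2f}")
# punishment=False: cooperators=0.00, group fitness=1.00
# punishment=True:  cooperators=1.00, group fitness=3.00
```

Once punishment makes free riding unprofitable, within-group selection stops eroding cooperation, and the large fitness differences between groups are what's left for selection to act on - exactly the condition under which the group level dominates.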
Humans have been able to suppress free riding effectively, through morality and a strong emotional connection to the group, resulting in ultrasociality. Haidt dubs humans “homo duplex: 90% chimp, 10% bee”.
He references Emile Durkheim, a sociologist who argued that many important facts about our lives are social facts, irreducible to facts about individuals - they have to be viewed through the lens of relationships between individuals. Also, the perspective on psychedelics as “flipping the hive switch” - creating a feeling of unity with all people or living beings, facilitating a community - is a fascinating one. It is very different from previous accounts I’ve heard, which focused on how they affect your belief landscape (again, focusing on the rider). Still, I’m not fully convinced of the logical path between dancing ecstatically around a bonfire and a suppression of free-riding effective enough that group selection becomes powerful, since the connection is temporary.
IV - A Team Sport
Morality is complicated. Let’s take a break to talk about football. Bear with me for a second.
I want to take a moment to say that I listened to the audio version of the book, which is narrated by Haidt himself, so you get the bonus of listening to him sing the UVA fight songs with exactly the fervor you’d expect from a university professor singing a football fight song. But let’s continue.
And that completes the best explanation I’ve seen for the popularity of sports. It’s not just a loyalty rush - it’s the hive switch flipped on. It’s a belonging rush, being lost in a group in an ecstatic moment of unity. It’s dancing around the tribal bonfire. We’re hard-wired to want these specific feelings.
(But that doesn’t explain why people obsess over the statistics and the scores outside game time, alone at their computers. That still stumps me. Maybe I wasn’t dreaming and everyone secretly does love statistics, and they’re just looking for a socially accepted way to express it?)
Anyway, Haidt continues:
Now that we’ve set the stage, let’s tackle religion. First, before pronouncing judgment, a cause for pause. Chesterton's fence is the principle that reforms should not be made until the reasoning behind the existing state of affairs is understood. That is, if you see something well established and don’t know why it’s there, you shouldn’t remove it. For example, if you buy farmland and see that someone has put up a fence, don’t remove it before you know why they did it. A related principle is that if you see something well established come up everywhere, even if you think you know the reasons why it’s there and they don’t apply, you still shouldn’t remove it - you’re probably mistaken about the actual reasons. If every farmer you’ve ever known has a fence, even if they tell you it’s against ghosts, or the evil eye, or the Mongols - for the love of god don’t get rid of your fence. This is how I interpret some of the arguments for metis in Seeing Like A State. Especially if that fence is the product of cultural evolution. Religion and a moral community are The Secret of Our Success. Now, turning to religion. Religion pops up everywhere - there’s scarcely a tribe or culture without it. So even if it’s ostensibly about ghosts or the evil eye, supernatural agents, or other false facts, you should be very careful before pronouncing we know all about it and throwing it in the bin.
Many of the New Atheist objections to religion center on the (lack of) evidence for the existence of supernatural beings. Religions are clearly based on many false facts. (Or at least most religions must be, since they contradict each other. There could still be One True Religion which got everything right). Isn’t it just good epistemic hygiene to get rid of everything downstream of those facts? But what if the false facts are neither the reason for religion’s existence, nor its main function? Haidt pulls the rug from under many of these objections to religion not by denying that the facts of supernatural agents are false - but by saying that they are completely irrelevant! It would be missing the point to try and reduce religion to the individual (or epistemic) level, instead of viewing it as a social fact.
All the ink spilled in arguing about those facts’ epistemic truth value is wasted. First, facts are for the rider, and religion is a choice made by the elephant. But second, that’s not even the real essence of religion. That’s just some fabrication by the internal press secretary. Have you noticed how spectacularly ineffective discussions about epistemic facts are in changing people’s religiosity? You’ll never convince someone to let go of what religion gives them by citing some archeological evidence. To understand the actual psychology and function of religion, it would be helpful to trace its origins.
So how did religion emerge? The New Atheists vote byproduct. We evolved to have a hypersensitive agency detection module, which tends towards false positives (thinking a log is a tiger) rather than false negatives (thinking a tiger is a log) for obvious survival reasons. It conferred a real benefit. But that module sometimes misfires, making us think that thunder and lightning are caused by gods.
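The bias toward false positives is just signal-detection arithmetic. Here's a minimal sketch, with all probabilities and costs invented for illustration: when missing a predator costs vastly more than a false alarm, the jumpy detector that keeps mistaking logs for tigers beats the better-calibrated skeptic:

```python
P_PREDATOR = 0.01     # rustling grass is almost always just the wind
COST_FALSE_ALARM = 1  # fleeing from wind: a few wasted calories
COST_MISS = 1000      # ignoring a real predator: possibly fatal

def expected_cost(p_flee_if_predator, p_flee_if_wind):
    """Average cost per rustle for a detector with the given trigger rates."""
    misses = P_PREDATOR * (1 - p_flee_if_predator) * COST_MISS
    false_alarms = (1 - P_PREDATOR) * p_flee_if_wind * COST_FALSE_ALARM
    return misses + false_alarms

jumpy = expected_cost(p_flee_if_predator=0.99, p_flee_if_wind=0.30)
skeptic = expected_cost(p_flee_if_predator=0.70, p_flee_if_wind=0.02)
print(f"jumpy detector:     {jumpy:.2f} per rustle")    # ~0.40: often wrong, cheaply
print(f"skeptical detector: {skeptic:.2f} per rustle")  # ~3.02: better calibrated, eaten more
```

Selection tunes the detector to minimize expected cost, not error rate - hence a module that sees agents everywhere.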
Haidt does not dispute this. OK, so this is how the initial religious beliefs come to be in the mind of one person. But how and why do they spread? Why don’t we each have our own idiosyncratic religion? To Dennett and Dawkins, religions are a kind of mental virus or parasite, which undergo Darwinian selection on the basis of their ability to survive and reproduce themselves in other minds. Like a virus, “they make their hosts do things that are bad for themselves (e.g., suicide bombing) but good for the parasite (e.g., Islam).”
Haidt makes the case that religions are not in fact parasites, but that they confer strong advantages on the group, very similar to the decisive advantages of morality discussed above. They help bind a group and create a moral community. Far from being wasteful misfirings, or harmful parasites - they are load-bearing beams. And they became that way through cultural evolution:
Haidt goes further, and proposes gene-culture coevolution. Religions become more advantageous and people become more susceptible to religion. For example, when infidels are cast out, genes for tribalism will be selected for. More tribal people will create and enforce norms more vigorously, making tribal tendencies more adaptive, thus creating a positive feedback loop. This could explain why we tend to be so tribal, irrespective of the specific tribe. It seems like we actively seek a group (or groups) to identify with and defend - it’s so persistent it does seem to be hard-wired.
If religion really is beneficial, we should see many cases where religions confer advantage on their community. Haidt makes a strong case for the benefits of religion. I’ll cover this in depth, since it was very interesting and a fresh perspective, and also full of fascinating examples.
Haidt cites many such examples. The central one is that gods enable the creation of a moral community.
Another useful feature of gods is collective punishment, making the community enforce norms more rigidly. Gods can help enforce contracts - the libertarian holy grail: “temples often served an important commercial function: oaths were sworn and contracts signed before the deity, with explicit threats of supernatural punishment for abrogation.” Note that belief in divine retribution - not the actual retribution - is enough for the contract to be fulfilled. Gods can help increase trust, which helped Jews and Muslims excel in long-distance trade in the medieval world. Even today, the diamond market, which requires very high trust, is dominated by religiously bound ethnic groups, such as ultra-Orthodox Jews, whose shared trust reduces monitoring costs.
I want to propose another example: the Afghan Taliban. Although massively underfunded and underequipped, it vanquished the Afghan army within a few weeks. You could make excuses for why the Taliban was succeeding before - their guerrilla job of striking anywhere was easier than the army’s job of maintaining security everywhere. But then why did they overtake the country so easily? Because their (much stronger) religious belief makes them cohesive, in a way that the rest of the Afghan nation - united by its (admirable!) will to allow girls to go to school and women to walk free - was not. Even before the recent conquest, the Taliban enjoyed surprising levels of popular support (for cruel terrorist fanatics). Their Sharia courts, for example, had a reputation for being harsher but less corrupt than the government’s.
But these all seem sort of like anecdotes. Can we do something more quantitative? Ideally, we would like to form hundreds of communities, and then randomly assign each one to be either religious or not, and see whether they are still cohesive and functional a few generations later. That experiment has been conducted, minus the random assignment, in 19th century communes.
I’m all for atheism and rationality. I think they’re correct. But the communities they create in real life are not as strong and all-encompassing as religious communities. Binding society together is very important. As social conservatives realize - you don’t help the bees by destroying the hive. If you abolish gender, you don’t know what’s on the other side for society, so you’d better tread carefully. Now, maybe you realize all this. You may think that despite the binding effect, religion is a net negative. Or that religion was a kind of crutch - that we’ve grown out of it and no longer need it to foster cooperation, so today it ends up net negative. But let’s disabuse ourselves of the notion that religion is about believing in supernatural beings.
*
If there’s one thing I think Haidt would have you take away from the book, it’s that morality and religion are not some cute quirks that we happen to have - they’re the fabric of our society and existence. I don’t think this is all true, but I do think it’s a very interesting perspective about reason, morality, society and religion. If your elephant is intrigued, there’s more good stuff in The Righteous Mind.