As a side note, I run a thing that's like a book club but different and we're talking about The Righteous Mind on Saturday 1/22. We have room for a few more thoughtful people—feel free to message me if you're interested.
The key thing is that it's low-commitment / low-guilt. I was inspired to start it by a friend who started a book club during the pandemic, fell catastrophically behind on the reading, and ultimately ended up ghosting her own book club.
I've noticed that book clubs tend to become machines for making people feel guilty / overloaded, so I tried hard to avoid that. We do a book every 2-3 months, and the default expectation is that people won't attend unless that specific book is interesting to them.
Shortly before the discussion, I send out a summary of the book (which was my motivation for writing this), so that people can attend and participate without needing to finish (or even start) the book.
It's still a fairly new endeavor, but it seems to be working so far.
I feel like "hive mode" might be a gear my brain is missing. Maybe I've just never engaged in the right experience, but I've been in crowds, congregations, and concerts without ever particularly feeling subsumed into a larger whole.
So I'd also favour the continued existence of communities for people who find hive mode spooky and off-putting.
To a greater or lesser extent, I think that's true for many of us here. Which is a good thing in some ways, but can make it challenging to fully understand and engage with people who are more hive-oriented.
I was inspired to read Jonathan Haidt’s The Righteous Mind by Julia Galef’s excellent interview with him. It’s an outstanding book that greatly improved my understanding of how humans understand morality and how to make moral arguments that humans will actually listen to.
More generally, I think it sheds some light on the challenges we face as we try to build a culture that is rational and ethical but still resonates with real humans.
Epistemic status
“All models are wrong but some are useful.” - George Box
The Righteous Mind is a layperson’s introduction to Moral Foundations Theory, which, as far as I can tell, is broadly consistent with mainstream moral psychology but is just one of several competing theories.
I consider Haidt’s model of morality to be interesting, plausible, and useful. Some parts—most notably the evolutionary origins of certain instincts—are best regarded as interesting speculation.
Overview
You’re probably familiar with the idea that our eating behaviors are driven largely by instincts that evolved for a world very different from the one we currently live in. Our ancestors lived in a world of chronic food scarcity, and they evolved a powerful drive to consume high-calorie foods whenever they got the chance. Even though there’s a McDonald’s on every street corner, our instincts drive us to eat as though we never know where our next meal will come from.
Our brains aren’t terrible at eating—they’re exquisitely well tuned for eating in an ancestral environment that was completely different from the modern world.
Haidt posits that morality, like eating, is an instinctive behavior. Morality evolved to help our ancestors navigate the social structure of hunter-gatherer society and only makes sense when examined in that context.
To understand why humans routinely make terrible food choices—and to change those choices—you need to understand the evolutionary basis of our eating instincts. By the same token, to understand why humans routinely make terrible moral choices—and to change those choices—you need to understand the evolutionary basis of our moral instincts.
Part 1: the rider and the elephant
A quick review of Type 1 and Type 2 thinking
You’re probably already familiar with Dual Process Theory, which identifies two types of thinking: Type 1 thinking is fast, automatic, and effortless, while Type 2 thinking is slow, deliberate, and effortful.
The elephant
Haidt compares the moral mind to an elephant with a rider on top. Most moral thinking is performed by Type 1 processes (the elephant). As you go about your day, the elephant is constantly making moral judgments about everything it sees. If someone drops an empty coffee cup on the sidewalk in front of you, the elephant instantly feels moral outrage.
The foundations of morality are instinctive, but the details are culture-specific. Children have an instinctive drive to learn from their parents which foods are good to eat, and which behaviors are moral. Once those cultural beliefs are learned, they become embedded in our Type 1 moral thinking.
You (probably) learned as a child that bugs are not food and littering is wrong. Those cultural norms are so deeply embedded in your mind that you are instantly disgusted when you see someone eating a bug and outraged when you see someone littering. People from other cultures, however, learn entirely different cultural norms about food and morality.
The rider
The rider on the elephant’s back is Type 2 moral thinking. When you consider the question of whether a hungry person is morally justified in stealing bread, that’s the rider at work.
But here’s the critical part: it’s a natural mistake to think the rider evolved to help us make better decisions and steer the elephant in the best possible direction. Haidt argues instead that the rider evolved primarily to help us navigate complex social situations. The rider isn’t a CEO but a press secretary, whose job is to convince others that your actions are correct and that they should take the actions you want them to take.
If you’ve ever found yourself explaining to someone that yes, you’re cutting back on sugar but it would offend your host if you didn’t take a giant slice of cheesecake, you’ve seen the rider at work. The elephant decided to have cheesecake and the rider was put in charge of coming up with a story that would convince everyone (including you) that having cheesecake is a good idea.
Morality works the same way: most of the time, your moral decisions are made by the elephant. You see someone drop their coffee cup on the sidewalk and the elephant is instantly outraged. The rider sees that the elephant is outraged and starts working on a press release about why littering is wrong and please pick that up and put it in the trash.
There’s a fun section in the book where Haidt describes a series of experiments where people were presented with hypothetical scenarios and asked to explain why the actions in the stories were moral or immoral. Their Type 1 thinking instantly made a decision, and their Type 2 thinking came up with (sometimes hilarious) post-hoc justifications for what their instincts had already decided (according to one person, it’s morally wrong to cut up an American flag and use it to clean your bathroom because it might clog the toilet).
Part 2: There’s more to morality than harm and fairness
All humans are born with taste buds that detect five fundamental flavors (salty, sweet, bitter, savory, and sour) and all human cuisines are guided by those five flavors. Different cultures, however, have created very different cuisines based on those five foundations.
Similarly, Haidt posits that humans are instinctively wired to understand six moral foundations, but different cultures put those foundations together to form very different moral matrices. He’s not dogmatic about exactly what the foundations are: he proposes these six as a starting point, but is open to revising the list based on further data.
He also proposes a specific evolutionary origin for each foundation, which I think is useful as a way of understanding them. Evolutionary psychology is a highly speculative endeavor at the best of times, so I wouldn’t get too hung up on the accuracy of those origin stories.
Foundation 1: Care versus harm
The Care foundation drives us to take care of others and not harm them. If you feel a moral imperative to feed starving children, that’s the Care foundation at work. Haidt speculates that Care evolved as a mechanism for ensuring that children are cared for in hunter-gatherer bands.
Foundation 2: Liberty versus oppression
Liberty emphasizes the right to be free from external violations of your autonomy. It’s what makes you angry if your company imposes a new dress code on you. Haidt speculates that Liberty evolved as a counterbalance to the rise of hierarchy in hominid society.
Foundation 3: Fairness versus cheating
The word fairness gets used to refer to many sometimes-incompatible concepts, so it's easy to get confused by this one. In Haidt’s framework, Fairness is similar to the popular understanding of karma: people who work hard should reap the rewards of their efforts and people who don’t work hard should not benefit from other people’s work.
Equity as it’s currently understood in social justice circles is associated with Care and Liberty rather than Fairness (i.e., equity is the idea that everyone deserves Care as opposed to the idea that everyone deserves the fruits of their labor).
Haidt speculates that Fairness evolved as a way to enable the benefits of cooperating with others without being taken advantage of.
Foundation 4: Loyalty versus betrayal
The Loyalty foundation is exactly what it sounds like: it emphasizes group identity and loyalty to the group. Haidt speculates that it evolved as a way of enabling small tribes to operate as cohesive groups.
Foundation 5: Authority versus subversion
The Authority foundation is somewhat in tension with the Liberty foundation. I think of it as a Confucian virtue: it emphasizes the importance of everyone playing their correct role in a hierarchical society. This is a paternalistic understanding of hierarchy: superiors have responsibility for the well-being of their subordinates as well as authority over them.
Haidt speculates that the Authority foundation evolved as a way of enabling groups to benefit from the stability of hierarchical social structures without descending into tyranny.
Foundation 6: Sanctity versus degradation
Sanctity holds certain ideas and objects to be sacred, and others to be profane. It manifests in religious beliefs about food and cleanliness, as well as the belief that things like flags should be treated with reverence. Haidt speculates that Sanctity evolved as a way of propagating cultural wisdom about avoiding unsafe foods and people with communicable diseases.
Liberals and conservatives
The most widely discussed part of the book is Haidt’s assertion that Western liberals are unlike almost everyone else on the planet (including Western conservatives) in using only some of the moral foundations. Liberal morality is based on Care, Liberty, and (ambivalently) Fairness, but disregards Loyalty, Authority, and Sanctity.
A key insight here is that all humans instinctively respond to all six of the foundations, even if our formal moral code doesn’t recognize all of them. Becoming a vegetarian doesn’t magically eliminate your instinctive preference for the taste of glutamates, and becoming a liberal doesn’t magically eliminate your instinct to care about hierarchy.
As a practical matter, Haidt believes conservative politicians have an intrinsic advantage over liberal politicians because they speak to voters using all six of the moral foundations rather than just two and a half. It follows that liberals will be more effective in moral debates if they learn to understand and speak to all six of the foundations.
Part 3: Morality binds and blinds
Haidt’s core metaphor for part 3 is that humans are 90% chimp and 10% bee. Most of the time, we are like chimps: a group of individuals who happen to live in a complex society together. Under certain circumstances, however, the “hive switch” gets flipped and we become like a hive of bees, a superorganism that acts with a single common purpose and can accomplish things no group of individuals ever could.
Our moral instincts evolved to support this dual nature: morality advances our individual interests within a group, but can also bind us together as a hive.
The hive switch is most easily activated by coordinated movement, chanting, and singing. If you’ve ever lost yourself in the exhilaration of chanting for your favorite sportsball team or while dancing at a rave, you know what hive mode feels like. Soldiers marching in formation are practicing hive mode.
This, Haidt argues, is the true purpose of religion: the metaphysical beliefs are just incidental window dressing. Religion is an instinctive behavior that helps us flip the hive switch by dancing and singing together, thereby binding ourselves into a superorganism that can outcompete lesser tribes of mere individuals. If you've ever been part of a team that moves as a single unit, you know how good this feels, and how much it can accomplish.
The dark side, of course, is that bees don’t selflessly serve the common good—they merely serve the interest of their hive. When hive mode goes well, humans come together as a cohesive group that accomplishes great things and sometimes sings rude songs about the other team. When it goes badly, hive mode builds death camps.
Some closing thoughts
Much of the press about The Righteous Mind has focused on understanding the divide between liberals and conservatives. People talk about how it provides a framework for understanding people whose moralities are different from yours, and for making moral arguments that might actually resonate with anyone who isn’t in your echo chamber.
That’s all true, and there’s certainly value there. But I think that narrow focus misses some of the most valuable lessons of Righteous Mind.
I’m hardly the first person to observe that liberals—and especially rationalists—seem to be missing something important. We’ve gotten rid of Imaginary Sky Friend and patriotism and militarism because obvs, but we haven’t yet figured out how to meet the important human needs those things filled. We’re still in the early stages of trying to figure it out: we light candles, and we build barracks, and we write long essays on ethnogenesis.
I think the framework provided by The Righteous Mind has something valuable to offer us as we work to build a better culture. Any successful culture needs to meet the needs of real humans, and it’s a regrettable but true fact that real humans are driven by instincts that don’t necessarily make sense in the modern world.
If you design food for abstract, two-dimensional humans, you get Meal Squares. And if you design culture for abstract, two-dimensional humans, you get—well, you get what we have now.
The challenge before us is to create a culture that is rational and ethical but still meets the needs of real humans. Just as we need food that is delicious and nutritious but doesn’t give us type 2 diabetes, we need communities that are compelling and nourishing without producing death camps as an occasional side effect.
We’ve got a lot of work to do.