As I understand it, tribalism here is being defined as a significantly increased probability of cooperating with in-group members and a significantly increased probability of defecting against out-group members, in both cases regardless of their past actions. While the original piece focuses on the negatives of tribalism, I would like to focus on the positives of in-group cooperation.
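(To make that definition concrete: read as a strategy in an iterated prisoner's dilemma, it would look something like the toy sketch below. This is purely my own illustration; the baseline probability and the `p_shift` parameter are made-up values, not anything from the original piece.)

```python
import random

def tribal_strategy(opponent_in_group, opponent_history, p_shift=0.4):
    """Toy 'tribal' agent: ignores the opponent's past moves entirely and
    shifts its cooperation probability based purely on group membership.
    p_shift is a made-up parameter for the size of the in-group bias."""
    base = 0.5  # baseline cooperation probability (also made up)
    p = base + p_shift if opponent_in_group else base - p_shift
    return "C" if random.random() < p else "D"

def tit_for_tat(opponent_in_group, opponent_history):
    """History-based strategy for contrast: cooperate first, then mirror
    the opponent's last move, ignoring group membership."""
    return opponent_history[-1] if opponent_history else "C"
```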
Tightly cooperating in-groups have historically been the most effective social structures for accomplishing goals. I don't believe it's a coincidence that the most tribal group in modern society, the military, is also the most effective at achieving specific goals dictated by its leadership. Other highly tribal groups, such as religious orders, also seem to be unusually effective at achieving their goals. On the other hand, there are studies showing that children with autism have trouble forming shared goals, which impairs their ability to cooperate to achieve them. If anything, I would say that rationalists are not tribal enough; in fact, there are initiatives to increase tribalism in order to make groups of rationalists more effective at achieving their goals.
Moreover, as Benjamin Ross Hoffman points out, a lack of tribalism often leaves you open to arbitrage by competing out-groups, who take advantage of your members' propensity to cooperate, either by reducing their own efforts or by actively profiting from negating your in-group's efforts.
Finally, it's not clear to me that improvements in mental health would be connected to reduced tribalism. Mental health is strongly tied to strong social connections, and strong social connections are often the result of tribalist communities. It's no coincidence that communities with stifling levels of religiosity or ideological alignment also seem to have the happiest members. Conversely, people with greater levels of depression cooperate less. So it's not clear that improving mental health would reduce levels of tribalism; in fact, it might have just the opposite effect.
I agree that tribalism is a deep-rooted part of the human psyche, and as a result I don't see it going away anytime soon either. However, I'm not sure that we even want to get rid of tribalism. I think that tribalism forms the basis of a number of cooperative behaviors that promote positive mental health and the achievement of shared goals. I believe that it's far more productive to try to find ways to redirect tribalist impulses towards positive ends than to try to eliminate tribalism altogether.
I agree that in-group bonding is a good and valuable thing, but it's not obvious to me that it couldn't be separated from out-group aggression (which is what I meant by tribalism). At least, I have personally been a part of several groups that seemed to have strong in-group bonding but little aggression towards other groups, which felt like it was at least in part because the out-groups didn't present any threat.
E.g. participating in events where I get a sense of "these are my kinds of people" tends to produce a strong feeling of in-group liking, and the effects of that feeling are oriented purely inwards, towards the other people present at the event, without producing any desire to leave the event to harass people from out-groups. (Nor does there seem to be any such effect after the event.)
Setting up a common enemy is an excellent way to engender cooperation between two competing groups. The common enemy doesn't even need to be a third group: the feeling of uniting against a common external threat is a powerful motivator, one that can drive groups to do truly great things. We didn't land on the moon because of inward-focused warm fuzzies. We landed on the moon to show the Soviet Union we were better at rockets than they were.
Indeed, the absence of a Sputnik-like warning for AGI is probably the reason that AI X-Risk research is so neglected. If we could set up an external threat on the order of Sputnik, or of Einstein's letter warning of German efforts to build an atomic bomb, we'd be making huge strides towards figuring out whether building a friendly AI is possible.
It feels noteworthy that your historical examples are going to the moon and building the atomic bomb: the first turned out to be of so little practical value that it was done only a few times and then abandoned once all the symbolic value had been extracted from it, and the second was a project explicitly aimed at hurting the outgroup.
So uniting against a common enemy may drive people to do difficult things, but the value of those things may be mostly symbolic or outright aimed at being explicitly harmful.
(Though just to check, I think we don't actually disagree on much? You said that "it's far more productive to try to find ways to redirect tribalist impulses towards positive ends" and I said that "in-group bonding is a good and valuable thing, but it's not obvious to me that it could not be separated from out-group aggression", so both of us seem to be in agreement that we should keep the good sides of in/out-group dynamics and try to reduce the bad sides of it; I just define "tribalism" as referring to purely the negative sides, whereas you're defining it to refer to the whole dynamic.)
Identifying this as 'high value' reminds me of a bit from Hamming's You and Your Research:
The three outstanding problems in physics, in a certain sense, were never worked on while I was at Bell Labs. By important I mean guaranteed a Nobel Prize and any sum of money you want to mention. We didn't work on (1) time travel, (2) teleportation, and (3) antigravity. They are not important problems because we do not have an attack. It's not the consequence that makes a problem important, it is that you have a reasonable attack. That is what makes a problem important.
While there's definitely value in saying "hmm, this fruit looks tasty" even if there's no obvious path to the fruit, in the hopes that someone else sees some easy way to get it, I think most people use 'high value cause area' to instead mean something like 'high profit cause area'--the value of working on that cause is high, as opposed to the value of completing the cause being high.
I think I'm deeply pessimistic about our prospects for reducing tribalism in the short term; the mechanisms behind previous reductions were things that take historical timescales to show visible effects. The idea of widespread positive psychology is also not terribly new; I view most religions and philosophies as attempting something in this vein, and they're engaged in memetic warfare with ideologies that encourage tribalism.
Hamming's examples seem to be in a different category, in that nobody has any clue of what could even plausibly lead towards those goals. (I believe; I'm not terribly familiar with physics.) Whereas with tribalism, there seem to be a lot more leads, including:
* knowledge about what causes it and what has contributed to changes of it over time
* research directions that could help further improve our understanding of what causes it / what doesn't cause it
* various interventions which already seem like they work in a small-scale setting, though it's still unclear how they might be scaled up (e.g. something like Crucial Conversations is basically about increasing trust and safety in one-to-one and small-group conversations)
I would say that this definitely looks at least as tractable as, say, MIRI-style AI safety work: that is, there are lots of plausible directions through which one could attack it, and though none of them seems likely to work out and provide us a solution in the short term, it seems plausible (though not guaranteed) that they would allow us to figure out a solution in the long term.
knowledge about what causes it and what has contributed to changes of it over time
I feel like there's conservation of expected evidence stuff going on there--it's because we know a lot about how gravity works that we think anti-gravity is impossible. Similarly, I'm pessimistic about extending trendlines from the past because of the likely causal factors of those trendlines.
For example, it looks to me like a pretty large part of the reduction in tribalism was a smoothing of genetic relatedness. It's not just that cousin marriage bans reduce clannishness because there are fewer people within three degrees of cousinhood of you; it's also that your degree of relatedness to the average person in your society goes up--you have more fourth cousins. But the free movement of people cuts against that trend; decreasing ethnic homogeneity also decreases trust. Which puts us in a bind--there are many nice things about the free movement of people, and paying attention to ethnic homogeneity is itself a tribal signal.
I would say that this definitely looks at least as tractable as, say, MIRI-style AI safety work: that is, there are lots of plausible directions through which one could attack it, and though none of them seems likely to work out and provide us a solution in the short term, it seems plausible (though not guaranteed) that they would allow us to figure out a solution in the long term.
I think a core difference between MIRI-style AI safety work and this is that MIRI is trying to figure out what non-adversarial reasoning looks like in a mostly non-adversarial environment, whereas the 'non-tribal' forces have to do their figuring out in a mostly adversarial environment.
For example, one thing that you identify as a cause of tribalism is threat: when people feel less secure, they care more about who their friends and enemies are, who can be relied on, and similar things. One might hope that we could put out messages that make people feel more secure and thus less likely to fall into the trap of tribalism. But such messages don't exist in a vacuum; they compete with messages put out by tribalists who know that threat is one of their major recruiting funnels, and who are thus trying to increase the level of threat. This direct opposition is a major factor making me pessimistic.
I feel like there's conservation of expected evidence stuff going on there--it's because we know a lot about how gravity works that we think anti-gravity is impossible. Similarly, I'm pessimistic about extending trendlines from the past because of the likely causal factors of those trendlines.
Much of the recent increase in tribalism seems to have been driven by the rise of social media, though; and it's far from obvious that social media has to contribute to such toxic dynamics--one could plausibly design a form of it that didn't. Similarly, although it remains debated whether depression etc. have actually become more common, it seems like a reasonable guess that they might have.
It's not like gravity, where we've gotten most of the domain figured out and aren't coming up with anything new; rather, the domain is changing all the time, there are various identifiable factors contributing to the problem and pushing it in different directions, and these have changed over time due to various causal forces that we can identify.
I think a core difference between MIRI-style AI safety work and this is that MIRI is trying to figure out what non-adversarial reasoning looks like in a mostly non-adversarial environment, whereas the 'non-tribal' forces have to do their figuring out in a mostly adversarial environment.
Clearly there is some degree of adversariality going on with the problem. But while there are a lot of people who benefit from tribalism to some extent, it doesn't seem obvious that they wouldn't turn from adversaries to allies if you gave them an even better solution for dealing with their problems.
E.g. I spoke with someone who had done research on some of the nastier SJW groups and had managed to get into some private Facebook groups from which outrage campaigns were being coordinated. He said that his impression was that much of this was being driven by a small group of individuals who seemed to be doing really badly in life in general, and who were doing the outrage stuff as some kind of coping mechanism. If that's correct, then while those people would probably like to maintain tribalism as a way to preserve their coping mechanism, even they would probably prefer to have their actual problems fixed. And in fact, if the right person just approached them and offered to help with their actual problems, it's unlikely that they'd even perceive that as being in opposition to the outrage stuff they were doing--even if getting that help would in fact cause them to lose interest in the outrage stuff.
Also, I obviously don't have a representative sample here, but it feels like there are a lot more people who hate the tribal climate on social media than there are people who would like to maintain it. Most people don't know what to do about it (and in fact aren't speaking up because they are scared that they'd become targeted if they did), but would be happy to help out with reducing it if they just knew how.
A possible next-step conclusion one could draw from this is that it's worth expending effort to make the people around you feel safe. As Vaniver mentions, it's generally the combination of value and an approach that leads to calling something a high-value cause; but as you mention, we do have some experience with reducing tribalism in our own lives.
I'd be interested in seeing some off-the-cuff evaluations of being patient with people, to see if there's a reasonable upper and/or lower bound on how patient we should be; but I have no idea how to even figure out what numbers I'd need to make up to do that evaluation myself. A sketch of the shape such an evaluation might take is below.
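(As a purely illustrative sketch: every number here is a placeholder I invented, and the only point is to show which quantities an off-the-cuff evaluation of patience would need someone to make up.)

```python
# All numbers below are invented placeholders; the point is only to show
# which quantities an off-the-cuff evaluation of patience would require.
minutes_spent_being_patient = 10   # extra time per tense interaction
hours_lost_to_a_blowup = 5.0       # cost if the interaction turns hostile
p_blowup_without_patience = 0.05   # guessed chance of a blowup otherwise
p_blowup_with_patience = 0.01      # guessed chance despite patience

cost = minutes_spent_being_patient / 60
benefit = (p_blowup_without_patience - p_blowup_with_patience) * hours_lost_to_a_blowup

print(f"cost: {cost:.2f}h, expected benefit: {benefit:.2f}h per interaction")
# On these made-up numbers patience pays (0.20h > 0.17h); the break-even
# point where cost equals benefit would give the upper bound asked about.
```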
There are two potential aims one might have:
1. Try to reduce the amount of tribalism in society at large
2. Try to make the smallish communities that you are in have less of the bad parts of tribalism
These two aims seem to lead to different analyses and potential courses of action.
Paul Graham has the exhortation to keep your identity small. I wonder if that is something that can actually be taught to people, and whether it would stick while there is still a benefit to adopting a group identity.
There's some wisdom in Paul Graham's advice, but I'd be wary of promoting it too much: I think that it's easy for people to take it too far, to the point where it starts causing psychological damage. At least in my experience, there's a connection between identity and motivation, and if you try to make your identity too small, you also start to suffer from a lack of motivation and feelings of pointlessness.
(I've been working on an essay called "keep your identity strategic, not small", but haven't gotten it into a satisfactory shape; this Melting Asphalt essay touches on some of the same points.)
FYI, this essay exists:
https://www.lesserwrong.com/posts/uR8c2NPp4bWHQ5u45/strategic-choice-of-identity
I wrote https://thingofthings.wordpress.com/2017/04/10/keep-your-identity-large/ on a similar topic which you might find interesting.
I think that tribalism is one of the biggest problems with humanity today, and that even small reductions of it could cause a massive boost to well-being.
By tribalism, I basically mean the phenomenon where arguments and actions are primarily evaluated based on who makes them and which group they seem to support, not anything else. E.g. if a group thinks that X is bad, then it’s often seen as outright immoral to make an argument which would imply that X isn’t quite as bad, or that some things which are classified as X would be more correctly classified as non-X instead. I don’t want to give any specific examples so as not to derail the discussion, but hopefully everyone can think of some; the article “Can Democracy Survive Tribalism” lists a lot of them, picked from various sides of the political spectrum.
Joshua Greene (among others) makes the argument, in his book Moral Tribes, that tribalism exists for the purpose of coordinating aggression and alliances against other groups (so that you can kill them and take their stuff, basically). It specifically exists for the purpose of making you hurt others, as well as defend yourself against people who would hurt you. And while defending yourself against people who would hurt you is clearly good, attacking others is clearly not. And everything being viewed in tribal terms means that we can’t make much progress on things that actually matter: as someone commented, “people are fine with randomized controlled trials in policy, as long as the trials are on things that nobody cares about”.
Given how deep tribalism sits in the human psyche, it seems unlikely that we’ll be getting rid of it anytime soon. That said, there do seem to be a number of things that affect the amount of tribalism we have:
* As Steven Pinker argues in The Better Angels of Our Nature, violence in general has declined over historical time, replaced by more cooperation and an assumption of human rights; Democrats and Republicans may still hate each other, but they generally agree that they shouldn’t be killing each other.
* As a purely anecdotal observation, I seem to get the feeling that people on the autism spectrum tend to be less tribal, up to the point of not being able to perceive tribes at all. (This suggests, somewhat oddly, that the world would actually be a better place if everyone were slightly autistic.)
* Feelings of safety or threat seem to play a lot into feelings of tribalism: if you perceive (correctly or incorrectly) that a group Y is out to get you and that they are a real threat to you, then you will react much more aggressively to any claims that might be read as supporting Y. Conversely, if you feel safe and secure, then you are much less likely to feel the need to attack others.
The last point is especially troublesome, since it can give rise to self-fulfilling predictions. Say that Alice says something to Bob, and Bob misperceives this as an insult; Bob feels threatened, so he snaps at Alice, and now Alice feels threatened as well, so she shouts back. The same kind of phenomenon seems to be going on at a much larger scale: whenever someone perceives a threat, they are no longer willing to give others the benefit of the doubt, and would rather treat the other person as an enemy. (Which isn’t too surprising, since it makes evolutionary sense: if someone is out to get you, then the cost of misclassifying them as a friend is much bigger than the cost of misclassifying a would-be friend as an enemy. You can always find new friends, but it only takes one person to get near you and hurt you really badly.)
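(That evolutionary asymmetry can be made explicit with a small expected-cost calculation; the 50x cost ratio below is a number I made up purely for illustration.)

```python
def enemy_threshold(cost_false_friend, cost_false_enemy):
    """Probability of hostility above which classifying someone as an enemy
    minimizes expected cost: treat them as an enemy whenever
    p * cost_false_friend > (1 - p) * cost_false_enemy."""
    return cost_false_enemy / (cost_false_enemy + cost_false_friend)

# Made-up costs: misreading an enemy as a friend is 50x worse than
# misreading a would-be friend as an enemy.
print(enemy_threshold(cost_false_friend=50, cost_false_enemy=1))  # ~0.02
# Even a ~2% perceived chance of hostility tips the classification to
# 'enemy', which is one way the self-fulfilling escalation can get started.
```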
One implication might be that general mental health work, not only in the conventional sense of “healing disorders” but also the positive psychology-style mental health work that actively seeks to make people happy rather than just fine, could be even more valuable for society than we’ve previously thought. Curing depression etc. would be enormously valuable even by itself, but if we could figure out how to make people generally happier and more resilient to negative events, then fewer things would threaten their well-being, they would perceive fewer things as threats, and tribalism would be reduced.