Lately I've come to think of human civilization as largely built on the backs of intelligence and virtue signaling. In other words, civilization depends very much on the positive side effects of (not necessarily conscious) intelligence and virtue signaling, as channeled by various institutions. As evolutionary psychologist Geoffrey Miller says, "it’s all signaling all the way down."
A question I'm trying to figure out now is, what determines the relative proportions of intelligence vs virtue signaling? (Miller argued that intelligence signaling can be considered a kind of virtue signaling, but that seems debatable to me, and in any case, for ease of discussion I'll use "virtue signaling" to mean "other kinds of virtue signaling besides intelligence signaling".) It seems that if you get too much of one type of signaling versus the other, things can go horribly wrong (the link is to Gwern's awesome review/summary of a book about the Cultural Revolution). We're seeing this more and more in Western societies, in places like journalism, academia, government, education, and even business. But what's causing this?
One theory is that Twitter, with its character limit, and social media and shorter attention spans in general, have made virtue signaling much easier relative to intelligence signaling. But this seems too simplistic and there has to be more to it, even if it is part of the explanation.
Another idea is that intelligence is valued more when a society feels threatened by an outside force, against which it needs competent people to protect itself. US policy changes after Sputnik are a good example of this. This may also explain why intelligence signaling continues to dominate, or at least is not dominated by virtue signaling, in the rationalist and EA communities (i.e., we're really worried about the threat from Unfriendly AI).
Does anyone have other ideas, or has anyone seen more systematic research into this question?
Once we understand the above, here are some followup questions: Is the trend towards more virtue signaling at the expense of intelligence signaling likely to reverse itself? How bad can things get, realistically, if it doesn't? Is there anything we can or should do about the problem? How can we at least protect our own communities from runaway virtue signaling? (The recent calls against appeals to consequences make more sense to me now, given this framing, but I still think they may err too much in the other direction.)
PS: It was interesting to read this in Miller's latest book, Virtue Signaling:
Where does the term ‘virtue signaling’ come from? Some say it goes back to 2015, when British journalist/author James Bartholomew wrote a brilliant piece for The Spectator called ‘The awful rise of ‘virtue signaling.’’ Some say it goes back to the Rationalist blog ‘LessWrong,’ which was using the term at least as far back as 2013. Even before that, many folks in the Rationalist and Effective Altruism subcultures were aware of how signaling theory explains a lot of ideological behavior, and how signaling can undermine the rationality of political discussion.
I didn't know that "virtue signaling" was first coined (or at least used in writing) on LessWrong. Unfortunately, from a search, it doesn't seem like there was substantial discussion around this term. Signaling in general was much discussed on LessWrong and OvercomingBias, but I find myself still updating towards it being more important than I had realized.
So how bad can things get? Am I crazy to worry about a future Cultural-Revolution-like virtue signaling dystopia, but even worse because it will be tech-enhanced / AI-assisted? For example, during the Cultural Revolution, almost everyone who kept a diary (including my own parents) either burned theirs or had their diaries become evidence for various thoughtcrimes (i.e., any past or current thoughts contradicting the current party line, which changed constantly, so nobody was immune). But doing the equivalent of burning one's diary will be impossible for a lot of people in the next "Cultural Revolution". Also, during the Cultural Revolution, people eventually became exhausted from the extreme virtue signaling, Mao died, and common sense finally prevailed again. But with AI assistance, none of these things might happen in the next "Cultural Revolution".
On the other side, I was going to say that it seems unlikely that too much intelligence signaling can cause anything as bad to happen, but then I realized that AI risk is actually a good example of this, because a lot of research interest in AI is driven at least in part by intellectual curiosity, and evolution probably gave us that to better signal intelligence. The whole FAI / AI alignment movement can be seen as people trying to inject more virtue signaling into the AI field! (It's pretty crazy how much of a blind spot we have about this. I'm only having this thought now, even though I've known about signaling and AI risk for at least two decades.)
I don't think you are crazy; I worry about this too. I think I should go read a book about the Cultural Revolution to learn more about how it happened--it can't have been just Mao's doing, because e.g. Barack Obama couldn't make the same thing happen in the USA right now (or even in a deep-blue part of the USA!) no matter how hard he tried. Some conditions must have been different.*
*Off the top of my head, some factors that seem relevant: Material deprivation. Overton window so narrow and extreme that it doesn't overlap with everyd...