I have a few problems with this, but most of them stem from one particular big gap in the argument, so I'm going to sort of start in on that, and then bring in the other issues as and when they feel relevant. I've gone back and forth on whether to perhaps structure this review more in a 'going through the article' way, or a sort of 'here's my list of problems' way, but neither really seemed satisfactory so I've gone for this sort of 'hub and spoke' model, which isn't so suited to linear writing but there we go.
This ended up getting long as hell, so I've put some headings in.
Core Issue: Democracy is not based on the assumption of popular rationality, so there's no particular reason rationalism should threaten it.
The core issue here is the lack of any attempt to demonstrate that anti-democracy ideas flow from the loose group of ideas popular around here that they call 'rationalism' (I don't think this term is particularly endorsed by the community, but I'll use it for want of a better alternative), either in terms of their being logically implied by rationalism or commonly held by those who subscribe to rationalism. It's just sort of assumed that, because rationalists believe you can be more or less rational, and some people are more rational than others, they must therefore be against democracy, at least once they fully own up to their beliefs and follow them to their logical conclusions. It is assumed that democracy must be backed up by suppositions that all people are equally rational or equally capable of governing, and that therefore swiping at this core assertion must logically weaken the case for democracy and strengthen the case for dictatorship.
This isn't factually true in terms of what people actually believe, but neither is it the case that rationalist democrats are hypocritical and/or delusional, and just emotionally resisting becoming neo-reactionaries. Belief in unequal distribution of ability to govern doesn't imply anti-democratic positions, because democracy doesn't rest on the idea that all people are equally (and highly) skilled at statecraft or rationality or governing or whatever. If anything, it's quite the reverse!
Indeed, if people were perfectly rational, altruistic, and knowledgeable, the case for democracy would collapse! What would be the point if you could just pick one person (maybe at random, maybe on a hereditary system just for simplicity of transfer of power) and have them rule benevolently and rationally? Democracy is a bulwark against the irrationality and ignorance of the individual, based on never giving any one person so much power that they can do huge damage, and 'smearing out' the effects of bias and ignorance by sampling over a huge number of people and reducing the variance in the overall results. Given the asymmetric payoff landscape, reducing variance is brilliant!
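The 'smearing out' point is just the statistics of aggregation. Under the deliberately toy Condorcet-jury assumptions (independent voters, each only slightly better than chance on a binary question), majority accuracy climbs rapidly with the number of voters, with no need for any individual to be particularly rational. A stdlib-only sketch, purely illustrative:

```python
import math

def majority_correct(n: int, p: float) -> float:
    """Probability that a strict majority of n independent voters,
    each individually correct with probability p, gets it right."""
    need = n // 2 + 1  # votes needed for a strict majority (use odd n)
    return sum(
        math.comb(n, k) * p**k * (1 - p) ** (n - k)
        for k in range(need, n + 1)
    )

# One mediocre ruler vs. electorates of equally mediocre voters:
# the individual error rate stays fixed, but aggregation drives
# down the variance of the collective outcome.
for n in (1, 101, 1001):
    print(n, round(majority_correct(n, 0.55), 3))
```

Real electorates of course violate every one of these assumptions (votes are correlated, questions aren't binary), which is exactly the ground on which Caplan and co. attack the 'miracle of aggregation'; the sketch only shows that the case for democracy never needed individually rational voters in the first place.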
There's of course a lot of complexity I'm not going into here. Democracy isn't the only way of combining information on preferences and judgements; markets are also part of that and they do better at some things and worse at others, and democracies also delegate decision-making away from the populace and their representatives and to an expert class in all sorts of cases, and the ideal relationship between democracy, markets, and expertise can be argued back and forth forever. Classical public choice economics is, in my view, well challenged by recent work by Caplan, Hanson, and others which casts doubt on just how strongly democracy can turn individual irrationality into group rationality, and I don't want to get into that at this point.
The reason I bring it up is to show that the authors here don't get into it at all, and just seem to assume that the case for democracy rests on a strong belief in a high and somewhat equally distributed level of rationality and competence on the part of the general public, implying that rationalism's insistence that most people are in fact extremely biased and irrational must logically swipe hard at that case and imply anti-democratic sentiment. This is the reverse of the truth!
Rationality is not Intelligence, and indeed this is a core Rationalist idea.
Branching out from this fundamental issue with the piece are a few other related problems. The piece seems to assume that rationalism's focus on the idea that most people are irrational, and that becoming rational and unbiased is difficult, deliberate work that few manage very well, implies some sort of deference to the intelligent and/or academically elite (both of which are in high supply in tech/SV, therefore people should be more deferential to tech/SV/engineers/clever-clogs types, seems to be the authors' implicit understanding of the rationalist position). But this is also not just wrong but backwards.
You can see this by looking at what leading/famous rationalists have to say about credentialed experts and clever people. One of the key ideas of rationalism is that even, and perhaps especially, high-IQ and academically elite people can be, and often are, very irrational and biased. The idea is that rationality isn't something you automatically get by being clever/educated, but something mostly orthogonal that one has to work to cultivate separately. The writings of notable rationalists like Yudkowsky, Mowshowitz, and Alexander are full of examples of clever, educated people making exactly the same errors of rationality that we're all happy to point out and laugh at in anti-vaxxers, creationists, religious fundamentalists, and other such low-status people, but to much greater harm, because their words carry authority.
This has been more obvious than ever during the pandemic, as rationalists in general have been relentlessly critical of clever and educated people and argued for a much more decentralised, much more fundamentally democratic attitude to prevail in which everyone should think through the implications of the data for themselves, and not just defer to the authority of intelligent, educated people. I doubt there are many people at the top of the CDC with an IQ below 130 or without a PhD, but I don't think any organisation has come in for more criticism from rationalists over the last 18 months.
Rationalism is absolutely not about deference to the clever and educated, but about recognising that even the cleverest and most educated are often highly irrational, and working hard to learn what we can from experts while having our eyes open to this fact, thinking for ourselves, and also understanding that we ourselves are naturally highly irrational too, and need to work to overcome this internally as well as just noticing the mistakes of others. In that sense, I see it as far more rhetorically democratic than the prevailing attitude in most liberal democracies, and a world away from the kind of faith in the elite that would be required to support further centralisation of decision-making.
The dreaded race issue
I shall now be addressing the race bit. If, like me, your main reaction is a resigned 'oh, my god, not this shit again, please', then do skip forward to the next bold bit. I'm responding because I'm already writing a response and so, for completeness, feel I should address it, but I wouldn't do so if it appeared alone, because I also think it's just not really worth discussing.
This confusion of intelligence and rationality also ruins the authors' attempts to slide in the dreaded race issue. The authors note that rationalists realise that some people are more rational than others, then note that there are also people who think people of some races are more intelligent than others, and so put 2 and 2 together and get 22, concluding that perhaps all these rationalists would be racists if they were honest with themselves.
This fails on so many levels it's difficult to focus on them one by one, but I'll try. First off, there's no evidence offered that these are the same people. Some people do think there are genetically-mediated racial intelligence differences. The only two public proponents I can think of are Charles Murray and Garrett Jones, and even then I'm not actually sure either of them does (I'm not hugely interested in race/IQ and so haven't looked into it that much). But note that neither of those people identifies as a rationalist. So how do we know that the supposed prerequisite beliefs for this racist downstream belief are even present in the same people?
EDIT: Based on comments, I withdraw this preceding criticism as not fully explored and somewhat contradictory with a later criticism.
Even assuming they are, the issue then falls down because intelligence isn't the same as rationality. Even if our straw rationalist did think there were genetically mediated racial intelligence differences, this wouldn't imply similar rationality differences.
Even assuming that it did, and that this straw rationalist is actually a straw intelligencist who thinks IQ is the sole determinant of the ability to make rational decisions, then even buying all the arguments of the race/IQ chaps hook, line, and sinker (which, to reiterate, neither I nor any rationalist I've ever spoken to or read actually does), the differences are tiny and absolutely dwarfed by within-race variance anyway, so just wouldn't make much difference to anything.
EDIT: I also withdraw this above paragraph based on comments, as it's somewhat contradictory with my first (also withdrawn) paragraph, and paragraphs 2 and 4 (below) of my criticism are enough anyway.
And then there's the fact that even if we bought the argument all the way up to this point, it would still fall down on the 'X group of people isn't very rational so should be denied a voice in democracy' step, because, as I addressed first and foremost in this comment, arguments for democracy in no way rest on the assumption that the people voting are paragons of rationality - indeed quite the reverse.
So this just seems like another lazy, failed attempt to tar the world of rationalism with the brush of scientific racism, and to be honest I'm getting irritated with the whole thing by now and even half-hearted allusions to it such as this are starting to really affect my ability to take people seriously.
NRx isn't rationalism, not even a bitty bit.
But then, okay, so I think the response I'd get to my arguments so far, which really all add up to 'none of these anti-democratic/NRx beliefs actually logically flow from the tenets of rationalism in any way', would be for someone to say okay, but what about the actual NRx lads - they do actually exist, right?
This is, to some extent, fair enough. There are NRx people. Not many, but then there aren't many rationalists either. But yeah there are people who think we should abolish democracy and have some combination of monarchs or CEOs ruling societies a bit like feudal kingdoms, or perhaps Prussia, or perhaps Tesla, or something. Idk man, put two NRx chaps in a room and you'll have 3 different fundamental plans for the re-organisation of society in a way almost precisely engineered to sound like hell on earth to normie libs.
But it's not like these NRx lads are 'rationalists gone bad' or anything of the sort. I've only read Curtis Yarvin and Nick Land first-hand, plus Michael Anissimov and one other whose name I can't remember second-hand, in the context of Scott Alexander ripping chunks out of them for 50k words (which I'm not claiming gives me the same level of insight), and they really aren't just starting with Yudkowsky and Bayes' Theorem, then taking a few extra steps most rationalists are too chickenshit to make, and ending up at Zombie James I as God-Emperor-CEO.
Land is straight-up continental philosophy, which is about as close to the opposite of rationalism à la Yudkowsky as I can think of, and generally viewed with a mixture of bafflement and derision by every rationalist I've come across. Yarvin occasionally mentions Bayes' Theorem, but in the vague sort of offhand way that loads of philosophers of science with all sorts of non-rationalist, non-Bayesian views do. Loads of people understand Bayes' Theorem as a trivially true and useful bit of probability maths without making it the central pillar of their epistemology in the way that rationalists do. Yarvin seems to be a rationalist only insofar as he has a basic grasp of probability maths and doesn't make a few basic reasoning errors that most people make if they don't think about probability too deeply. That doesn't make him a rationalist any more than the fact that my mate Wes is really keen on not falling victim to the sunk cost fallacy makes him one (he's not - the man is a fanatical frequentist).
These NRx lads aren't just not rationalists, they're one of the rationalists' favourite foils. Scott Alexander is (reluctantly, it seems) one of the most famous rationalists going, and he sort of got famous for basically writing a PhD thesis about how NRx is nuts. NRx chaps are banned from his discussion forums and from LessWrong, which is a level of exclusion the rationalist community rarely reaches for.
So, shorn of an argument that NRx/anti-democratic sentiment flows naturally from rationalist principles, and without any evidence that many rationalists become NRx, this article just falls back on the most ludicrous guilt-by-association nonsense. Apparently rationalists and NRx 'sit on the same boards' at some companies. Okay, well, I'm sure there are plenty of mainstream Democrats on those boards as well (for all the focus on the oddities, the political culture of SV/tech is overwhelmingly mainstream Democrat), so are all those mainstream Democrats also somehow secret NRx or something?
Apparently rationalists and NRx 'argue using common terminology over contrary ends', which is about the weakest claim of alignment imaginable. It's a really wordy and confusing way of saying the two groups disagree about what is desirable but are capable of communicating that disagreement in a mutually intelligible way. Fanatical pro-life and fanatical pro-choice protestors arguing over whether some particular type of abortion should be legal or illegal can 'argue using common terminology over contrary ends'. This is a statement of no substance, dripping with implied conspiracy but actually claiming basically nothing.
I will admit that NRx and rationalists sometimes talk to each other. Apparently the rationalists don't like doing it as much as the NRx do, so much so that NRx as a subject is banned from a bunch of rationalist discussion spaces, but yeah, they've expressed their disagreements in words that each other understood. Rationalists and NRx often have a decent understanding of each other's positions, and can express where they disagree. I'm just not sure how I'm supposed to step from that to thinking they're secretly in league or whatever the hell the authors are implying here.
Rationalism isn't optimisation.
The article seems to lean pretty hard on the idea that a core tenet of rationalism is reducing things to one quantity that can then be min/maxed to get the desired result. This couldn't be more wrong. One of the most important things I've learned from reading rationalists is how doing this can lead to huge problems. Alexander and Mowshowitz never stop banging on about how relentless optimisation pressure destroys value all over the place all the time, and Bostrom and Yudkowsky have basically been arguing for two decades that optimisation pressure may well destroy all value in the universe if we aren't careful.
I've learned more about the dangers of optimisation from rationalism and rationalism-adjacent authors than anywhere else. Of course, optimisation can also be really good! More of a good thing is trivially good! Finding ways to optimise manufacturing has done amazing amounts of good all over the world! Optimisation can be awesome, but can also be incredibly destructive. I don't know any group of thinkers who are more sceptical of blind optimisation, or who spend more time carefully teasing out conditions in which optimisation tends to be helpful and those where it tends to be harmful, or who are more dedicated to promoting care and precision around how we define what we optimise, lest we do huge damage.
This area is probably the main part of my own thinking where I've found rationalism and rationalists the most helpful, and it's all been in the direction of dragging me towards being less fond of just piling everything into a measure and min/maxing it, so I really don't quite know what to do with a criticism of the movement that claims we're the trigger-happy optimisers.
Using precise language doesn't mean ignoring imprecisely-phrased concerns
One section laments the fact that rationalists focus on problems that can be precisely expressed, and neglect those that can't, or perhaps aren't. There's then a list of examples of things that salt-of-the-earth democratic types talk about that we rationalists apparently arrogantly ignore because they don't fit our standards for what counts as a problem that's useful to solve.
The problem is that the list of topics rationalists supposedly don't think merit discussion could basically serve as a contents list for SSC/ACX, the single most popular rationalist blog there is. As far as I can see, there's tonnes of rationalist discussion of these issues.
One thing there isn't, admittedly, is much of a corpus of rationalist comment engaging with linguistic confusions like "Is racism X or is it Y?" on their own terms. Generally, rationalists are particularly good at noticing when the same term is being used to describe a bunch of different things, and when this is causing people to talk past each other and not grasp each other's positions. Again, SSC/ACX is full of this sort of stuff.
But attempting to bring clarity to terminologically confused pseudo-debates with a more linguistically precise approach, one that doesn't get everyone hung up on "is the issue one thing or another" when in fact both things are issues worth discussing and a piece of the eventual solution, isn't ignoring the issue! Indeed, I'd argue it's embracing and discussing the issue more fully than either "side" of the "is it X or Y" false dichotomy is doing, because the rhetorical outcome of taking a firm position on that 'question' tends to be the loss of the ability even to discuss the other half of the issue.
These 'intellectual cul-de-sacs' don't half seem awesome
One of the supposed costs of rationalism's alleged obsession with optimisation and ignorance of the real political issues of the day is the wasting of intellectual resources in a number of 'intellectual cul-de-sacs' like AI risk and effective charity. No definition of an intellectual cul-de-sac is ever given, and no argument made that these research areas meet that non-existent definition, beyond the fact that they're niche fields of study whose results don't command wide popular support. Like Quantum Mechanics in 1920, or Climate Science in 1980.
Obviously it's a bit unfair of me, with the benefit of hindsight, to pick two areas that were once extremely niche fields of study whose main results were not endorsed at large, but are now both hugely consequential and widely endorsed. I can do that looking backwards. Obviously not every minority position ends up becoming mainstream, and not every niche field ends up yielding huge amounts of important knowledge. But the authors don't offer even the slightest justification for why the existence of these fields is a bad thing or why they are a 'cul-de-sac', so I don't really have much of a response except to note that there's no real argument here.
Funnily enough, when a group of people comes up with a sort of fundamental philosophy that's a bit different from the norm, that generally leads them to a few downstream beliefs that aren't held by the majority. This is how intellectual progress happens, but also how random spurs into nowhere happen (I guess this is what they mean by cul-de-sacs, but they don't really say). The fact that these downstream beliefs are non-mainstream doesn't help you tell whether the upstream philosophy is right or not, or whether you're looking at the early phase of a paradigm shift or just a mistake. At this point, those two things look the same.
Tying this up
So I think I've mostly covered my main objections.
A focus on rationality and the notion that most people are not particularly rational does not undermine democracy in the slightest; this argument seems to be based on an assumed justification for democracy that is almost precisely backwards.
The authors conflate intelligence with rationality to make some of their points, when in fact the near-orthogonality of these is core rationalist doctrine.
The race argument is silly as usual.
NRx aren't rationalists, at least not in a central enough way for any of the arguments the authors want to make, so they're left with silly guilt-by-association nonsense no different from the NYT debacle.
Rationalism isn't based around min/max optimisation, and is indeed one of the core communities resisting and warning about the growth of such a way of thinking and working.
Using precise language and refusing to be drawn into false dichotomies that rest on confusing language doesn't count as ignoring the issues addressed by that confusing language, and in fact plenty of rationalists talk about these things all the time.
And yes, rationalists are often involved in minority concerns like AI risk or EA, but that proves nothing about rationality unless you can actually demonstrate that these things are mistakes/bad, rather than just unpopular, which the authors don't do.
There are some other feints at arguments in the piece, but I don't really know how to respond to them as they're mostly just sort of negative-sounding padding. Apparently Tyler Cowen (who seemed to count as a rationalist himself in another part of the piece) reckons rationalism is a 'religion', but there's no explanation as to why, or whether he might be right, or whether whatever he might be right about might imply anything bad. It's just ominous-sounding. When I said I was an atheist to someone once, they said something like "but isn't that just having a religious faith in Dawkins, so aren't you just another sort of fundamentalist" and that wasn't really an argument either.
There's a bunch of similarly ominous-sounding-but-non-specific stuff about how, actually, the first programmers were women and minorities, but no real explanation of why this has anything to do with rationalism, so I don't know how to address it. There's a lot of imagining that Facebook and whatnot are run by rationalists, when this is obviously untrue, and so a lot of the sins of the tech industry in general end up getting transplanted onto a group of people who, as far as I can see, have been doing the hardest work of anyone sounding the alarms about those exact same issues. I'm sure it would be news to the notable rationalist who wrote 'Against Facebook' that Facebook is run by people just like him and on principles he'd endorse, and that he's therefore somewhat responsible for the results of their actions.
Doubtless the authors would say that's not what they meant, but the text is so meandering and full of this sort of not-quite-saying-the-thing-they're-clearly-implying-and-want-us-to-feel-is-true that it's hard to pin down what claims they're actually making. Silicon Valley culture (as one commenter points out, the whole idea of SV as the hub of tech is perhaps a bit outdated by now anyway), NRx, rationalism, LW, SSC/ACX, IQ researchers, Venture Capitalism, and more are just sort of seamlessly blended into one morass that can be pointed at as kinda bad. I was hoping something clearly signposting itself as an attempt to look into 'dark side of rationalism' concerns seriously would do better than this.
If I've misunderstood this, or any of this is unfair/wrong, then do point it out. I must say I found the piece a bit annoying, so I am probably not firing on all rationality cylinders myself. I've left a big gap between reading, commenting, and checking back to make sure I'm remembering right, in the hope of keeping at bay the kind of 'triggered ingroup/outgroup fighting' the authors have stated they want to avoid, but I can only do so much.
I think both of these are reasonable responses. I brushed over a lot of nuance in this section because I just didn't want this part of the discussion to dominate. I realise that the raw population differences (in the US) are quite chunky (though d=0.83 is higher than I have heard), but what I glossed over completely is that the only not-obviously-silly investigations of the issue I've seen do admit that clearly there are environmental confounders, and then claim that those confounders only account for some of that difference, and that some genetic component remains. These estimations of genetic component (which, to my vague memory, were in the d=0.1-0.3 range) are what I was calling 'tiny'.
However I now realise I was being a bit inconsistent in also saying that I've never seen rationalists endorse this, because actually I have seen rationalists endorse that tiny effect, but never the absurd-on-its-face idea that the entire observed gap is genetic. So I'm using one version of the hypothesis when refuting one step in the argument, and another version in refuting another step. This was wrong of me.
Perhaps even calling the smaller estimate 'tiny' is still a bit harsh because, as Isusr says, this is still enough for the far ends of the distribution to look very different. So I think the best thing is that I drop this entire size-related part of my argument.
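For what it's worth, Isusr's tail-end point is just the geometry of normal distributions: if you assume, purely hypothetically, two equal-variance normal groups whose means differ by Cohen's d, even a small d produces a noticeable over-representation ratio far out in the tail, despite the bulks of the distributions overlapping almost completely. A stdlib-only illustration (the d values are the ones discussed above, not endorsed estimates):

```python
import math

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def tail_ratio(d: float, cutoff: float = 3.0) -> float:
    """How over-represented the higher-mean group is beyond `cutoff`
    standard deviations, given two equal-variance normals offset by d."""
    return upper_tail(cutoff - d) / upper_tail(cutoff)

# Small mean shifts get amplified in the far tail: the ratio grows
# with d even while the distributions remain almost indistinguishable
# for the vast majority of individuals.
for d in (0.1, 0.3, 0.83):
    print(d, round(tail_ratio(d), 1))
```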
Zack is also right that there is a bit of a hard-to-observe problem with my 'no rationalists actually believe this' argument as well. I think I'm on solid ground in saying that no more than a negligible number of rationalists buy the hardcore all-the-observed-differences-are-genetic interpretation, but perhaps many are privately convinced there's some genetic component - I wouldn't know. I don't think Kolmogorov Complicity is about that; I think it's about gender issues in CS. But I think the whole point is one can't really tell, so this is also a fair point.
So, on reflection, I drop the 'no rationalists buy these claims' and 'the differences are tiny anyway' parts of my argument as they're somewhat based on different assumptions about what the claims are, and both have their own problems. I will rest my entire 'the race bit is silly' position upon the much more solid grounds of 'but intelligence isn't rationality' and 'even if it were, it wouldn't imply what you imply it implies about who should have political power', which are both problems I have independently with the rest of the piece.
I think a very long, drawn-out argument could still rescue something from my other two points and show that this part of the piece is even weaker than would be implied from just those two errors, but I don't really want to bother because it would be complicated and difficult, the failure of the argument is overdetermined anyway, and talking about it makes me irritable and upset and poses non-negligible risk to the entire community so I just don't see it as worth it in this case.