Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.
Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality whom other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.
Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.
What Disagreement Signifies
Let's start by talking about disagreement. There's been a lot of discussion of disagreement on LessWrong, and in particular of Aumann's agreement theorem, often glossed as something like "two rationalists can't agree to disagree." (Or perhaps that we can't foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I'd rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don't think all that highly of each other's rationality.
This, for me, is the take-away from Tyler Cowen and Robin Hanson's paper Are Disagreements Honest? In the paper, Cowen and Hanson gloss honest disagreement as "meaning that the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort," and they argue that typical disagreements aren't honest in this sense.
I don't find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn't necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person's rationality. That people know this is shown in the fact that they don't like being told they're wrong—the reason why Dale Carnegie says you can't win an argument.
On LessWrong, people are quick to criticize each other's views, so much so that I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others' views, they very often don't seem to expect to quickly reach agreement. Even people Yvain would classify as "experienced rationalists" sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.
And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you're faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you'll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you'll have to think they're both being a bit irrational for not doing likewise.
The situation is similar to Moore's paradox in philosophy—the absurdity of asserting "it's raining, but I don't believe it's raining." Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."
We can find some room for humility in an analog of the preface paradox, the fact that the author of a book can say things like "any errors that remain are mine." We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to it being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don't know which ones. And just because respectful disagreement is a polite fiction doesn't mean we should abandon it.
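To see how the little uncertainties add up, here is a toy calculation (the specific numbers, 300 claims each held with 99% confidence and treated as independent, are invented purely for illustration):

```latex
% Toy preface-paradox arithmetic (illustrative numbers only):
% 300 claims, each believed with probability 0.99, treated as independent.
\[
  P(\text{no errors anywhere}) = 0.99^{300} \approx 0.05,
  \qquad
  P(\text{at least one error}) = 1 - 0.99^{300} \approx 0.95
\]
```

Being 99% confident in every individual claim is perfectly consistent with being 95% confident that the book contains at least one error, and the same goes for our beliefs.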
I don't have a clear sense of how controversial the above will be. Maybe we all already recognize that we don't respect each other's opinions 'round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.
The wiki, for example, says that "Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong." The point of Aumann's agreement theorem, though, is precisely that ideal rationalists shouldn't need to engage in deliberative discourse, as usually conceived, in order to reach agreement.
As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it." But when dealing with real people who may or may not have a rational basis for their beliefs, that's almost always the right stance to take.
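For reference, here is the result Cowen and Hanson are leaning on, roughly stated (this is my paraphrase of Aumann's 1976 theorem, not their wording, and it suppresses the technical setup):

```latex
% Aumann's agreement theorem (rough paraphrase; details suppressed):
% Agents 1 and 2 share a common prior P and receive private information.
% Let their posteriors for an event E be
%   q_1 = P(E | agent 1's information),   q_2 = P(E | agent 2's information).
% If the values q_1 and q_2 are common knowledge between the two agents, then
\[
  q_1 = q_2 .
\]
```

All the work is done by the common-prior and common-knowledge assumptions, which is why the theorem is a claim about ideal reasoners rather than about any actual conversation.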
Intelligence and Rationality
Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn't have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."
Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments. When I hear that, I think "whaaat? People on LessWrong make bad arguments all the time!" When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout "YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD." I just think it.
When I reach for an explanation of why terrible arguments from smart people shouldn't surprise anyone, I go to Yvain's Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. Yvain notes that meta-contrarianism often isn't a good thing, but on re-reading the post I noticed what seems like an important oversight:
A person who is somewhat upper-class will conspicuously signal eir wealth by buying difficult-to-obtain goods. A person who is very upper-class will conspicuously signal that ey feels no need to conspicuously signal eir wealth, by deliberately not buying difficult-to-obtain goods.
A person who is somewhat intelligent will conspicuously signal eir intelligence by holding difficult-to-understand opinions. A person who is very intelligent will conspicuously signal that ey feels no need to conspicuously signal eir intelligence, by deliberately not holding difficult-to-understand opinions.
According to the survey, the average IQ on this site is around 145. People on this site differ from the mainstream in that they are more willing to say death is bad, more willing to say that science, capitalism, and the like are good, and less willing to say that there's some deep philosophical sense in which 1+1 = 3. That suggests people around that level of intelligence have reached the point where they no longer feel it necessary to differentiate themselves from the sort of people who aren't smart enough to understand that there might be side benefits to death.
The pattern of countersignaling Yvain describes here is real. But it's important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can't afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.
Robin Hanson has argued that this leads to biases in academia:
I’ve argued that the main social function of academia is to let students, patrons, readers, etc. affiliate with credentialed-as-impressive minds. If so, academic beliefs are secondary – the important thing is to clearly show respect to those who make impressive displays like theorems or difficult data analysis. And the obvious way for academics to use their beliefs to show respect for impressive folks is to have academic beliefs track the most impressive recent academic work.
Robin's post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I've written before, the trouble with philosophy is that philosophers never agree on whether a given philosopher has solved a problem. There can therefore be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.
I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.
I've found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga's modal ontological argument. The argument is sufficiently difficult to understand that I won't try to explain it here. If you want to understand it, I'm not sure what to tell you except to maybe read Plantinga's book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn't be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I'd guess Plantinga's IQ is just as high as the average LessWronger's.
Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga's logic works just as well to "prove" that it's a necessary truth that pigs fly. Or that Plantinga's god does not exist. Or even as a general-purpose "proof" of any purported mathematical truth you please. The main point is that Plantinga's argument is not stupid in the sense of being something you'd only come up with if you had a low IQ—the opposite is true. But Plantinga's argument is stupid in the sense of being something you'd only come up with while under the influence of some serious motivated reasoning.
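For readers who want at least the shape of that parody, here is my own bare-bones gloss of the modal move involved (this is not Plantinga's formulation, and it omits everything about maximal greatness):

```latex
% In the modal logic S5 (which Plantinga's argument relies on), this is a theorem:
\[
  \Diamond \Box p \rightarrow \Box p
\]
% Plantinga's key premise has the form "possibly, necessarily q" for a carefully
% chosen q. Grant the analogous premise for any p you like ("pigs fly", an
% unproven mathematical conjecture) and the same S5 step "proves" that p is
% necessarily true.
```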
The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument's actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.
Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"
The Principle of Charity
I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people often are saying something stupid. Because of that, I think charitable reading is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.
More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they're at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: "we need to read these people charitably, because they are obviously very intelligent and rational." I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.
I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses. Whatever its merits, though, they can't depend on the actual intelligence and rationality of the person making the argument. Not only is intelligence no guarantee against making bad arguments, but the whole reason we demand that other people tell us the reasons for their opinions in the first place is that we fear those reasons might be bad ones.
As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.
Beware Weirdness for Weirdness' Sake
There's a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I've heard it suggested—though I can't track down by whom—that crazy religious doctrines could serve a similar purpose. It's easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can't explain how that is but it's a mystery you accept on faith... now that takes dedication.
Once you notice the general "signal group commitment in costly ways" strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they're committed.
This tendency is probably harmless when done with humor and self-awareness, but it's more worrisome when a group becomes convinced its little bits of weirdness for weirdness' sake are a sign of its superiority to other groups. And it's worth being aware of, because it makes sense of signaling moves that aren't straightforwardly plays for higher status.
The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it's quite useful (I reiterate my love for the term "meta-contrarian"). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.
That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.
But "blue-green politics," "politics is the mind-killer"... never mind how much content they add, the point is they're obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you're weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you'll get to bond with members of your in-group over your feelings of superiority.
A More Humble Rationalism?
I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I'd seen, not my having thought of brilliant solutions to them. So I'll limit myself to some fairly boring, non-brilliant advice.
First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that.
Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I've begun to wonder whether "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good at keeping that front and center as, say, the effective altruism meme. It's just a little too easy to forget where "rationality" is supposed to connect with the real world, increasing the temptation for "rationality" to spiral off into signaling games.