So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."
I disagree with this, and explained why in Probability Space & Aumann Agreement. To quote the relevant parts:
...There are some papers that describe ways to achieve agreement in other ways, such as iterative exchange of posterior probabilities. But in such methods, the agents aren't just moving closer to each other's beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. (The process is similar to the one needed to solve the second riddle on this page.) The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.
Is this realistic for human rationalist wannabes? It seems wildly implausible to me that two humans can communicate all of the information they have that is relevant to the truth of some statement just by repeated
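The inference process described in the quote can be made concrete with a small simulation of the Geanakoplos–Polemarchakis alternating-announcement protocol. This is my own sketch, not code from the quoted post, and the state space, event, and partitions below are illustrative choices: the agents announce only posterior probabilities, yet each announcement lets the listener prune possible states, so the underlying evidence is effectively communicated anyway.

```python
from fractions import Fraction

def cell(partition, w):
    # The partition cell containing state w (the agent's private evidence).
    return next(frozenset(c) for c in partition if w in c)

def posterior(event, info):
    # Uniform prior, so P(event | info) = |event ∩ info| / |info|.
    return Fraction(len(event & info), len(info))

def exchange(true_state, partitions, event, states, max_rounds=10):
    """Agents take turns publicly announcing P(event | their info).
    Listeners never see the raw evidence; they only discard states in
    which the speaker would have announced a different number. The
    announcements continue until they stabilize."""
    public = frozenset(states)  # states consistent with all announcements so far
    history = []
    for r in range(max_rounds):
        speaker = partitions[r % 2]
        q = posterior(event, cell(speaker, true_state) & public)
        history.append(q)
        old = public
        public = frozenset(w for w in old
                           if posterior(event, cell(speaker, w) & old) == q)
        if len(history) >= 2 and history[-1] == history[-2]:
            break
    return history

# A textbook-style example: nine equally likely states, true state 4.
P1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]    # agent 1's evidence partition
P2 = [{1, 2, 3, 4}, {5, 6, 7, 8}, {9}]    # agent 2's evidence partition
announcements = exchange(4, [P1, P2], frozenset({3, 4}), range(1, 10))
# announcements converge: 1/3, 1/2, then both agree on 1
```

Note that the announcements do not simply move toward each other: agent 1 goes from 1/3 all the way to certainty once agent 2's 1/2 has ruled out states 5 and 6 for him. The convoluted-deduction step is the pruning inside `exchange`.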
But shouldn't you always update toward the other's position?
That's not how Aumann's theorem works. For example, if Alice mildly believes X and Bob strongly believes X, it may be that Alice has weak evidence for X while Bob has much stronger independent evidence for X. Thus, after exchanging evidence, they'll both believe X even more strongly than Bob did initially.
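A quick Bayesian sketch of why both can end up more confident than Bob was: under independence, posterior odds are the prior odds times each piece of evidence's likelihood ratio. The specific ratios below are made-up numbers for illustration.

```python
from fractions import Fraction

def posterior_odds(prior_odds, *likelihood_ratios):
    # Independent evidence combines by multiplying odds (adding log-odds).
    odds = Fraction(prior_odds)
    for lr in likelihood_ratios:
        odds *= Fraction(lr)
    return odds

def prob(odds):
    # Convert odds to a probability.
    return odds / (1 + odds)

# Even prior on X. Alice's weak evidence has a 2:1 likelihood ratio;
# Bob's stronger, independent evidence has a 10:1 ratio.
alice  = prob(posterior_odds(1, 2))       # 2/3   -- mild belief in X
bob    = prob(posterior_odds(1, 10))      # 10/11 -- strong belief in X
pooled = prob(posterior_odds(1, 2, 10))   # 20/21 -- stronger than either
```

After pooling, both parties sit at 20/21, above Bob's initial 10/11, so "meet in the middle" would have been the wrong update for both of them.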
Yup!
One related use case is when everyone in a meeting prefers policy X to policy Y, although each is a little concerned about one possible problem. Going around the room and asking everyone how likely they think X is to succeed produces estimates of 80%, so, having achieved consensus, they adopt X.
But, if people had mentioned their particular reservations, they would have noticed they were all different, and that, once they'd been acknowledged, Y was preferred.
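With made-up numbers, the arithmetic looks like this: averaging the stated estimates hides the fact that each one discounts for a different failure mode.

```python
# Three people each rate policy X at 80%, but each has discounted for a
# *different* possible problem (all numbers hypothetical).
estimates = [0.8, 0.8, 0.8]
averaged = sum(estimates) / len(estimates)   # ~0.8 -- the apparent consensus

# If each estimate prices in one distinct failure mode with P(fail) = 0.2,
# and those failure modes are independent, X must dodge all three:
p_x = 0.8 ** 3                               # ~0.512

# A policy Y that everyone rates a flat 70%, with no hidden reservations,
# would then be the better choice:
p_y = 0.7
assert p_y > p_x
```

The point is not the particular numbers but that exchanging reasons, not just bottom-line estimates, is what exposes the compounding reservations.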
I interpret you as making the following criticisms:
1. People disagree with each other, rather than use Aumann agreement, which proves we don't really believe we're rational
Aside from Wei's comment, I think we also need to keep track of what we're doing.
If we were to choose a specific empirical fact or prediction - like "Russia will invade Ukraine tomorrow" - and everyone on Less Wrong were to go on Prediction Book and make their prediction and we took the average - then I would happily trust that number more than I would trust my own judgment. This is true across a wide variety of different facts.
But this doesn't preclude discussion. Aumann agreement would be a way of forcing results if forcing results were our only goal, but we can learn more by trying to disentangle our reasoning processes. Some advantages to talking about things rather than immediately jumping to Aumann:
We can both increase our understanding of the issue.
We may find a subtler position we can both agree on. If I say "California is hot" and you say "California is cold", instead of immediately jumping to "50% probability either way" we can work out which parts of California are
So although I would endorse Aumann-adjusting as a final verdict with many of the people on this site, I think it's great that we have discussions - even heated discussions - first, and I think a lot of those discussions might look from the outside like disrespect and refusal to Aumann adjust.
I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.
...The role that IQ is playing here is that of a quasi-objective Outside View measure of a person's ability to be correct and rational. It is, of course, a very very lossy measure that often goes horribly wrong. On the other hand, it makes a useful counterbalance to our subjective measure of "I feel I'm
I agree that what look like disrespectful discussions at first could eventually lead to Aumann agreement, but my impression is that there are a lot of persistent disagreements within the online rationalist community. Eliezer's disagreements with Robin Hanson are well-known. My impression is that even people within MIRI have persistent disagreements with each other, though not as big as the Eliezer-Robin disagreements. I don't know for sure Alicorn and I would continue to disagree about the ethics of white lies if we talked it out thoroughly, but it wouldn't remotely surprise me. Et cetera.
Are ethics supposed to be Aumann-agreeable? I'm not at all sure the original proof extends that far. If it doesn't, that would cover your disagreement with Alicorn as well as a very large number of other disagreements here.
I don't think it would cover Eliezer vs. Robin, but I'm uncertain how "real" that disagreement is. If you forced both of them to come up with probability estimates for an em scenario vs. a foom scenario, then showed them both each other's estimates and put a gun to their heads and asked them whether they wanted to Aumann-update or not, I'm not sure they wouldn't ag...
Your heuristics are, in my opinion, too conservative or not strong enough.
Track record of saying reasonable things once again seems to put the burden of decision on your subjective feelings and so rule out paying attention to people you disagree with. If you're a creationist, you can rule out paying attention to Richard Dawkins, because if he's wrong about God existing, about the age of the Earth, and about homosexuality being okay, how can you ever expect him to be right about evolution? If you're anti-transhumanism, you can rule out cryonicists because they tend to say lots of other unreasonable things like that computers will be smarter than humans, or that there can be "intelligence explosions", or that you can upload a human brain.
Status within mainstream academia is a really good heuristic, and this is part of what I mean when I say I use education as a heuristic. Certainly to a first approximation, before investigating a field, you should just automatically believe everything the mainstream academics believe. But then we expect mainstream academia to be wrong in a lot of cases - you bring up the case of mainstream academic philosophy, and although I'm less certain ...
Three somewhat disconnected responses —
For a moral realist, moral disagreements are factual disagreements.
I'm not sure that humans can actually have radically different terminal values from one another; but then, I'm also not sure that humans have terminal values.
It seems to me that "deontologist" and "consequentialist" refer to humans who happen to have noticed different sorts of patterns in their own moral responses — not groups of humans that have fundamentally different values written down in their source code somewhere. ("Moral responses" are things like approving, disapproving, praising, punishing, feeling pride or guilt, and so on. They are adaptations being executed, not optimized reflections of fundamental values.)
A Christian proverb says: “The Church is not a country club for saints, but a hospital for sinners”. Likewise, the rationalist community is not an ivory tower for people with no biases or strong emotional reactions, it’s a dojo for people learning to resist them.
People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.
Identifying as a "rationalist" is encouraged by the welcome post.
We'd love to know who you are, what you're doing, what you value, how you came to identify as a rationalist
Edited the most recent welcome post and the post of mine that it linked to.
Does anyone have a 1-syllable synonym for 'aspiring'? It seems like we need to impose better discipline on this for official posts.
People on LW have started calling themselves "rationalists". This was really quite alarming the first time I saw it. People used to use the words "aspiring rationalist" to describe themselves, with the implication that we didn't consider ourselves close to rational yet.
My initial reaction to this was warm fuzzy feelings, but I don't think it's correct, any more than calling yourself a theist indicates believing you are God. "Rationalist" means believing in rationality (in the sense of being pro-rationality), not believing yourself to be perfectly rational. That's the sense of rationalist that goes back at least as far as Bertrand Russell. In the first paragraph of his "Why I Am A Rationalist", for example, Russell identifies as a rationalist but also says, "We are not yet, and I suppose men and women never will be, completely rational."
This also seems like it would be a futile linguistic fight. A better solution might be to consciously avoid using "rationalist" when talking about Aumann's agreement theorem—use "ideal rationalist" or "perfect rationalist". I also tend to use phrases like "members of the online rationalist community," but that's more to indicate I'm not talking about Russell or Dawkins (much less Descartes).
I've recently had to go on (for a few months) some medication which had the side effect of significant cognitive impairment. Let's hand-wavingly equate this side effect to shaving thirty points off my IQ. That's what it felt like from the inside.
While on the medication, I constantly felt the need to idiot-proof my own life, to protect myself from the mistakes that my future self would certainly make. My ability to just trust myself to make good decisions in the future was removed.
This had far more ramifications than I can go into in a brief comment, but I can generalize by saying that I was forced to plan more carefully, to slow down, to double-check my work. Unable to think as deeply into problems in a freewheeling cognitive fashion, I was forced to break them down carefully on paper and understand that anything I didn't write down would be forgotten.
Basically what I'm trying to say is that being stupider probably forced me to be more rational.
When I went off the medication, I felt my old self waking up again, the size of concepts I could manipulate growing until I could once again comprehend and work on programs I had written before starting the drugs in the first place. I...
One thing I hear you saying here is, "We shouldn't build social institutions and norms on the assumption that members of our in-group are unusually rational." This seems right, and obviously so. We should expect people here to be humans and to have the usual human needs for community, assurance, social pleasantries, and so on; as well as the usual human flaws of defensiveness, in-group biases, self-serving biases, motivated skepticism, and so on.
Putting on the "defensive LW phyggist" hat: Eliezer pointed out a long time ago that knowing about biases can hurt people, and the "clever arguer" is a negative trope throughout that swath of the sequences. The concerns you're raising aren't really news here ...
Taking the hat off again: ... but it's a good idea to remind people of them, anyway!
Regarding jargon: I don't think the "jargon as membership signaling" approach can be taken very far. Sure, signaling is one factor, but there are others, such as —
I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses.
Some people are just bad at explaining their ideas correctly (too hasty, didn't reread themselves, not a high enough verbal SAT, foreign mother tongue, inferential distance, etc.), others are just bad at reading and understanding others' ideas correctly (too hasty, didn't read the whole argument before replying, glossed over that one word which changed the whole meaning of a sentence, etc.).
I've seen many poorly explained arguments which I could understand as true or at least pointing in interesting directions, which were summarily ignored or shot down by uncharitable readers.
Ok, there's no way to say this without sounding like I'm signalling something, but here goes.
As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.
"If you can't say something you are very confident is actually smart, don't say anything at all." This is, in fact, why I don't say very much, or say it in a lot of detail, much of the time. I have all kinds of thoughts about all kinds of things, but I've had to retract sincerely-held beliefs so many times I just no longer bother embarrassing myself by opening my big dumb mouth.
...Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say the
there seem to be a lot of people in the LessWrong community who imagine themselves to be (...) paragons of rationality who other people should accept as such.
Uhm. My first reaction is to ask "who specifically?", because I don't have this impression. (At least I think most people here aren't like this, and if a few happen to be, I probably did not notice the relevant comments.) On the other hand, if I imagine myself in your place, even if I had specific people in mind, I probably wouldn't want to name them, to avoid making it a personal accusation instead of an observation of trends. Now I don't know what to do.
Perhaps could someone else give me a few examples of comments (preferably by different people) where LW members imagine themselves paragons of rationality and ask other people to accept them as such? (If I happen to be such an example myself, that information would be even more valuable to me. Feel free to send me a private message if you hesitate to write it publicly, but I don't mind if you do. Crocker's rules, Litany of Tarski, etc.)
...I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about
I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people are often saying something stupid. Because of that, I think charity is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.
Getting principle of charity right can be hard in general. A common problem is when something can be interpreted as stupid in two different ways; namely, it has an interpretation which is obviously false, and another interpretation which is vacuous or trivial. (E.g.: "People are entirely selfish.") In cases like this, where it's not clear what the charitable reading is, it may just be best to point out what's going on. ("I'm not certain what you mean by that. I see two ways of interpreting your statement, but one is obviously false, and the other is vacuous.") Assuming they don't mean the wrong thing is not the right ans...
As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary.
This is one of the loonier[1] ideas to be found on Overcoming Bias (and that's quite saying something). Exercise for the reader: test this idea that sharing opinions screens off the usefulness of sharing evidence with the following real-world scenario. I have participated in this scenario several times and know what the correct answer is.
You are on the programme committee of a forthcoming conference, which is meeting to decide which of the submitted papers to accept. Each paper has been refereed by several people, each of whom has given a summary opinion (definite accept, weak accept, weak reject, or definite reject) and supporting evidence for the opinion.
To transact business most efficiently, some papers are judged solely on the summary opinions. Every paper rated a definite accept by every referee for that paper is accepted without further discussion, because if three independent experts all think ...
Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"
I'll go ahead and disagree with this. Sure, there's a lot of smart people who aren't rational, but then I would say that rationality is less common than intelligence. On the other hand, all the rational people I've met are very smart. So it seems really high intellig...
Okay, you're doing precisely the thing I hate and which I am criticizing about Less Wrong. Allow me to illustrate:
LW1: Guys, it seems to me that Less Wrong is not very rational. What do you think?
LW2: What makes you think Less Wrong isn't rational?
LW1: Well if you take a step back and use the outside view, Less Wrong seems to be optimizing for being clever rather than optimizing for being rational. That's a pretty decent indicator.
LW3: Well, the outside view has theoretical limitations, you know. Eliezer wrote a post about how it is possible to misuse the outside point of view as a conversation stopper.
LW1: Uh, well unless I actually made a mistake in applying the outside view I don't see why that's relevant? And if I did make a mistake in applying it it would be more helpful to say what it was I specifically did wrong in my inference.
LW4: You are misusing the term inference! Here, someone wrote a post about this at some point.
LW5: Yea but that post has theoretical limitations.
LW1: I don't care about any of that, I want to know whether or not Less Wrong is succeeding at being rational. Stop making needlessly theoretical abstract arguments and talk about the actual thing we were a...
Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy.
Not if you want to be accepted by that group. Being bad at signaling games can be crippling - as much as intellectual signaling poisons discourse, it's also the glue that holds a community together enough to make discourse possible.
Example: how likely you are to get away with making a post or comment on signaling games is primarily dependent on how good you are at signaling games, especially how good you are at the "make the signal appear to plausibly be something other than a signal" part of signaling games.
But a humble attempt at rationalism is so much less funny...
More seriously, I could hardly agree more with the statement that intelligence has remarkably little to do with susceptibility to irrational ideas. And as much as I occasionally berate others for falling into absurd patterns, I realize that it pretty much has to be true that somewhere in my head is something just as utterly inane that I will likely never be able to see, and it scares me. As such sometimes I think dissensus is not only good, but necessary.
I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid.
As far as I understand, the Principle of Charity is defined differently; it states that you should interpret other people's arguments on the assumption that these people are arguing in good faith. That is to say, you should assume that your interlocutor honestly believes in everything he's saying, and that he has no ulterior motive beyou...
Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.
I agree with your assessment. My suspicion is that this is due to...
Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it."
Not necessarily. It might be saying "You have an interesting viewpoint; let me see what basis it has, so that I may properly integrate this evidence into my worldview and correctly update my viewpoint"
Particularly problematic is this self-congratulatory process:
some simple mistake leads to a non-mainstream conclusion -> the world is insane and I'm so much more rational than everyone else -> endorphins released -> circuitry involved in making the mistake gets reinforced.
For example: IQ is the best predictor of job performance, right? So the world is insane for mostly hiring based on experience, test questions, and so on (depending on the field) rather than IQ, right? Cue the endorphins and reinforcement of careless thinking.
If you're not after...
The problem with this is that other people are often saying something stupid. Because of that, I think charity is overrated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.
Well perhaps you should adopt a charitable interpretation of the principle of charity :) It occurs to me that the phrase itself might not be ideal since "charity" implies that you are giving something which the recipient does...
Skimming the "disagreement" tag in Robin Hanson's archives, I found a few posts that I think are particularly relevant to this discussion:
I wonder how much people's interactions with other aspiring rationalists in real life has any effect on this problem. Specifically, I think people who have become/are used to being significantly better at forming true beliefs than everyone around them will tend to discount other people's opinions more.
Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality. Even Nietzsche acknowledged that it was the rationality of Christianity that led to its intellectual demise (as he saw it), as people relentlessly applied rationality tools to Christianity.
My own model of how rational we are is more in line with Ed Seykota's (http://www.seykota.com/tribe/TT_Process/index.htm) than the typical geek model that we are basically rational with a few "biases" added on...
Everyone (and every group) thinks they are rational. This is not a distinctive feature of LW. Christianity and Buddhism make a lot of their rationality.
To the contrary, lots of groups make a big point of being anti-rational. Many groups (religious, new-age, political, etc.) align themselves in anti-scientific or anti-evidential ways. Most Christians, to take one example, assign supreme importance to (blind) faith that triumphs over evidence.
But more generally, humans are a-rational by default. Few individuals or groups are willing to question their most cherished beliefs, to explicitly provide reasons for beliefs, or to update on new evidence. Epistemic rationality is not the human default and needs to be deliberately researched, taught and trained.
And people, in general, don't think of themselves as being rational because they don't have a well-defined, salient concept of rationality. They think of themselves as being right.
sake-handling -> snake-handling
Anecdotally, I feel like I treat anyone on LW as someone to take much more seriously because of that, but it's just not different enough for any of the things-perfect-rationalists-should-do to start to apply.
Excellent post. I don't have anything useful to add at the moment, but I am wondering if the second-to-last paragraph:
First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that
is just missing a period at the end, or has a fragmented sentence.
By the way, I agree with you that there is a problem with rationalists who are a lot less rational than they realize.
What would be nice is if there were a test for rationality just like one can test for intelligence. It seems that it would be hard to make progress without such a test.
Unfortunately there would seem to be a lot of opportunity for a smart but irrational person to cheat on such a test without even realizing it. For example, if it were announced that atheism is a sign of rationality, our hypothetical smart but irrational person would proudl...
"rationality" branding isn't as good for keeping that front and center, especially compared to, say the effective altruism meme
Perhaps a better branding would be "effective decision making", or "effective thought"?
...As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rational
Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."
I couldn't agree more with that - to a first approximation.
Now of course, the first problem is with people who think a person is either rational in general or not, right in general, or not. Being right or rational is conflated ...
For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.
That... isn't jargon? There are probably plenty of actual examples you could have used here, but that isn't one.
Edit: OK, you did give an actual example below that ("blue-green politics"). Nonetheless, "mental model" is not jargon. It wasn't coined here, it doesn't have some specialized meaning here that differs from its use outside, it's entirely compositional and thus transparent -- nobody has to explain to you what it means -- and at least in my own experience it just isn't a rare phrase in the first place.
Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments.
When? The closest case I can recall came from someone defending religion or theology - which brought roughly the response you'd expect - and even that was a weaker claim.
If you mean people saying you should try to slightly adjust your probabilities upon meeting intelligent and somewhat rational disagreement, this seems clearly true. Worst case scenario, you waste some time putting a refutation together (coughWLC).
I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.
So, respond with something like "I don't think sanity is a single personal variable which extends to all held beliefs." It conveys the same information- "I don't trust conclusions solely because you reached them"- but it doesn't convey the implication that this is a personal failing...
I am surprised by the fact that this post has so little karma. Since one of the...let's call them "tenets" of the rationalism community is the drive to improve one's own self, I would have imagined that this kind of criticism would have been welcomed.
Can anyone explain this to me, please? :-/
I'm not sure what the number you were seeing when you wrote this was, and for my own part I didn't upvote it because I found it lacked enough focus to retain my interest, but now I'm curious: how much karma would you expect a welcomed post to have received between the "08:52AM" and "01:29:45PM" timestamps?
Just curious, how does Plantinga's argument prove that pigs fly? I only know how it proves that the perfect cheeseburger exists...
If you persistently misinterpret people as saying stupid things, then your evidence that people say a lot of stupid things is false evidence. You're in a sort of echo chamber. The PoC is correct because an actually stupid comment is a comment that can't be interpreted as smart no matter how hard you try.
The fact that some people misapply the PoC is not the PoC's fault.
The PoC is not in any way a guideline about what is worth spending time on. It is only about efficient communication, in the sense of interpreting people correctly. If you haven't got time to...
Quite a few people complain about the atheist/skeptic/rationalist communities being self-congratulatory. I used to dismiss this as a sign of people's unwillingness to admit that rejecting religion, or astrology, or whatever, was any more rational than accepting those things. Lately, though, I've started to worry.
Frankly, there seem to be a lot of people in the LessWrong community who imagine themselves to be, not just more rational than average, but paragons of rationality who other people should accept as such. I've encountered people talking as if it's ridiculous to suggest they might sometimes respond badly to being told the truth about certain subjects. I've encountered people asserting the rational superiority of themselves and others in the community for flimsy reasons, or no reason at all.
Yet the readiness of members of the LessWrong community to disagree with and criticize each other suggests we don't actually think all that highly of each other's rationality. The fact that members of the LessWrong community tend to be smart is no guarantee that they will be rational. And we have much reason to fear "rationality" degenerating into signaling games.
What Disagreement Signifies
Let's start by talking about disagreement. There's been a lot of discussion of disagreement on LessWrong, and in particular of Aumann's agreement theorem, often glossed as something like "two rationalists can't agree to disagree." (Or perhaps that we can't foresee to disagree.) Discussion of disagreement, however, tends to focus on what to do about it. I'd rather take a step back, and look at what disagreement tells us about ourselves: namely, that we don't think all that highly of each other's rationality.
This, for me, is the take-away from Tyler Cowen and Robin Hanson's paper Are Disagreements Honest? In the paper, Cowen and Hanson define honest disagreement as meaning that "the disputants respect each other’s relevant abilities, and consider each person’s stated opinion to be his best estimate of the truth, given his information and effort," and they argue that disagreements aren't honest in this sense.
I don't find this conclusion surprising. In fact, I suspect that while people sometimes do mean it when they talk about respectful disagreement, often they realize this is a polite fiction (which isn't necessarily a bad thing). Deep down, they know that disagreement is disrespect, at least in the sense of not thinking that highly of the other person's rationality. That people know this is shown in the fact that they don't like being told they're wrong—the reason why Dale Carnegie says you can't win an argument.
On LessWrong, people are quick to criticize each other's views, so much so that I've heard people cite this as a reason to be reluctant to post/comment (again showing they know intuitively that disagreement is disrespect). Furthermore, when people on LessWrong criticize others' views, they very often don't seem to expect to quickly reach agreement. Even people Yvain would classify as "experienced rationalists" sometimes knowingly have persistent disagreements. This suggests that LessWrongers almost never consider each other to be perfect rationalists.
And I actually think this is a sensible stance. For one thing, even if you met a perfect rationalist, it could be hard to figure out that they are one. Furthermore, the problem of knowing what to do about disagreement is made harder when you're faced with other people having persistent disagreements: if you find yourself agreeing with Alice, you'll have to think Bob is being irrational, and vice versa. If you rate them equally rational and adopt an intermediate view, you'll have to think they're both being a bit irrational for not doing likewise.
The situation is similar to Moore's paradox in philosophy—the absurdity of asserting "it's raining, but I don't believe it's raining." Or, as you might say, "Of course I think my opinions are right and other people's are wrong. Otherwise I'd change my mind." Similarly, when we think about disagreement, it seems like we're forced to say, "Of course I think my opinions are rational and other people's are irrational. Otherwise I'd change my mind."
We can find some room for humility in an analog of the preface paradox: the fact that the author of a book can say things like "any errors that remain are mine." We can say this because we might think each individual claim in the book is highly probable, while recognizing that all the little uncertainties add up to it being likely there are still errors. Similarly, we can think each of our beliefs is individually rational, while recognizing we still probably have some irrational beliefs—we just don't know which ones. And just because respectful disagreement is a polite fiction doesn't mean we should abandon it.
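The arithmetic behind the preface paradox is easy to check. A minimal sketch, assuming a hypothetical book of 200 independent claims with 99% confidence in each (both numbers made up for illustration):

```python
# Preface paradox: each claim is very probably true, yet the book
# very probably contains at least one error.
p_claim = 0.99   # confidence in any individual claim (hypothetical)
n_claims = 200   # number of independent claims (hypothetical)

p_all_correct = p_claim ** n_claims
p_at_least_one_error = 1 - p_all_correct

print(f"P(every claim correct) = {p_all_correct:.3f}")   # ~0.134
print(f"P(at least one error)  = {p_at_least_one_error:.3f}")  # ~0.866
```

High confidence in each belief is fully compatible with near-certainty that some belief is wrong; the independence assumption is doing real work here, but the qualitative point survives moderate correlation between claims.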
I don't have a clear sense of how controversial the above will be. Maybe we all already recognize that we don't respect each other's opinions 'round these parts. But I think some features of discussion at LessWrong look odd in light of the above points about disagreement—including some of the things people say about disagreement.
The wiki, for example, says that "Outside of well-functioning prediction markets, Aumann agreement can probably only be approximated by careful deliberative discourse. Thus, fostering effective deliberation should be seen as a key goal of Less Wrong." The point of Aumann's agreement theorem, though, is precisely that ideal rationalists shouldn't need to engage in deliberative discourse, as usually conceived, in order to reach agreement.
As Cowen and Hanson put it, "Merely knowing someone else’s opinion provides a powerful summary of everything that person knows, powerful enough to eliminate any differences of opinion due to differing information." So sharing evidence the normal way shouldn't be necessary. Asking someone "what's the evidence for that?" implicitly says, "I don't trust your rationality enough to take your word for it." But when dealing with real people who may or may not have a rational basis for their beliefs, that's almost always the right stance to take.
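A toy calculation shows why pooled opinions need not land between the disputants' positions (this echoes the Alice-and-Bob case above; all numbers are invented for illustration). If two people share a prior and hold independent evidence, updating in odds form multiplies the likelihood ratios, so the joint posterior can be stronger than either individual posterior:

```python
def posterior_odds(prior_odds, *likelihood_ratios):
    """Bayesian update in odds form: multiply prior odds by each
    independent piece of evidence's likelihood ratio."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

def odds_to_prob(odds):
    return odds / (1 + odds)

prior = 1.0      # 50/50 prior on X (hypothetical)
lr_alice = 3.0   # Alice's evidence favors X at 3:1 (hypothetical)
lr_bob = 9.0     # Bob's independent evidence favors X at 9:1 (hypothetical)

p_alice = odds_to_prob(posterior_odds(prior, lr_alice))         # 0.75
p_bob = odds_to_prob(posterior_odds(prior, lr_bob))             # 0.90
p_both = odds_to_prob(posterior_odds(prior, lr_alice, lr_bob))  # ~0.964

print(p_alice, p_bob, p_both)
```

After pooling, both parties end up more confident than Bob was alone, rather than meeting in the middle—which is why simple "split the difference" intuitions about disagreement mislead.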
Intelligence and Rationality
Intelligence does not equal rationality. Need I say more? Not long ago, I wouldn't have thought so. I would have thought it was a fundamental premise behind LessWrong, indeed behind old-school scientific skepticism. As Michael Shermer once said, "Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons."
Yet I've heard people suggest that you must never be dismissive of things said by smart people, or that the purportedly high IQ of the LessWrong community means people here don't make bad arguments. When I hear that, I think "whaaat? People on LessWrong make bad arguments all the time!" When this happens, I generally limit myself to trying to point out the flaw in the argument and/or downvoting, and resist the urge to shout "YOUR ARGUMENTS ARE BAD AND YOU SHOULD FEEL BAD." I just think it.
When I reach for an explanation of why terrible arguments from smart people shouldn't surprise anyone, I go to Yvain's Intellectual Hipsters and Meta-Contrarianism, one of my favorite LessWrong posts of all time. Yvain notes that meta-contrarianism often isn't a good thing, but on re-reading the post I noticed what seems like an important oversight:
The pattern of countersignaling Yvain describes here is real. But it's important not to forget that sometimes, the super-wealthy signal their wealth by buying things even the moderately wealthy can't afford. And sometimes, the very intelligent signal their intelligence by holding opinions even the moderately intelligent have trouble understanding. You also get hybrid status moves: designer versions of normally low-class clothes, complicated justifications for opinions normally found among the uneducated.
Robin Hanson has argued that this leads to biases in academia:
Robin's post focuses on economics, but I suspect the problem is even worse in my home field of philosophy. As I've written before, the problem is that philosophers never agree on whether a philosopher has solved a problem. Therefore, there can be no rewards for being right, only rewards for showing off your impressive intellect. This often means finding clever ways to be wrong.
I need to emphasize that I really do think philosophers are showing off real intelligence, not merely showing off faux-cleverness. GRE scores suggest philosophers are among the smartest academics, and their performance is arguably made more impressive by the fact that GRE quant scores are bimodally distributed based on whether your major required you to spend four years practicing your high school math, with philosophy being one of the majors that doesn't grant that advantage. Based on this, if you think it's wrong to dismiss the views of high-IQ people, you shouldn't be dismissive of mainstream philosophy. But in fact I think LessWrong's oft-noticed dismissiveness of mainstream philosophy is largely justified.
I've found philosophy of religion in particular to be a goldmine of terrible arguments made by smart people. Consider Alvin Plantinga's modal ontological argument. The argument is sufficiently difficult to understand that I won't try to explain it here. If you want to understand it, I'm not sure what to tell you except to maybe read Plantinga's book The Nature of Necessity. In fact, I predict at least one LessWronger will comment on this thread with an incorrect explanation or criticism of the argument. Which is not to say they wouldn't be smart enough to understand it, just that it might take them a few iterations of getting it wrong to finally get it right. And coming up with an argument like that is no mean feat—I'd guess Plantinga's IQ is just as high as the average LessWronger's.
Once you understand the modal ontological argument, though, it quickly becomes obvious that Plantinga's logic works just as well to "prove" that it's a necessary truth that pigs fly. Or that Plantinga's god does not exist. Or even as a general purpose "proof" of any purported mathematical truth you please. The main point is that Plantinga's argument is not stupid in the sense of being something you'd only come up with if you had a low IQ—the opposite is true. But Plantinga's argument is stupid in the sense of being something you'd only come up with while under the influence of some serious motivated reasoning.
The modal ontological argument is admittedly an extreme case. Rarely is the chasm between the difficulty of the concepts underlying an argument, and the argument's actual merits, so vast. Still, beware the temptation to affiliate with smart people by taking everything they say seriously.
Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, "They aren't the same thing, but the correlation is still very strong?"
The Principle of Charity
I've made no secret of the fact that I'm not a big fan of the principle of charity—often defined as the rule that you should interpret other people's arguments on the assumption that they are not saying anything stupid. The problem with this is that other people often are saying something stupid. Because of that, I think charitable reading is over-rated compared to fair and accurate reading. When someone says something stupid, you don't have to pretend otherwise, but it's really important not to attribute to people stupid things they never said.
More frustrating than this simple disagreement over charity, though, is when people who invoke the principle of charity do so selectively. They apply it to people whose views they're at least somewhat sympathetic to, but when they find someone they want to attack, they have trouble meeting basic standards of fairness. And in the most frustrating cases, this gets explicit justification: "we need to read these people charitably, because they are obviously very intelligent and rational." I once had a member of the LessWrong community actually tell me, "You need to interpret me more charitably, because you know I'm sane." "Actually, buddy, I don't know that," I wanted to reply—but didn't, because that would've been rude.
I can see benefits to the principle of charity. It helps avoid flame wars, and from a Machiavellian point of view it's nice to close off the "what I actually meant was..." responses. Whatever its merits, though, they can't depend on the actual intelligence and rationality of the person making an argument. Not only is intelligence no guarantee against making bad arguments, the whole reason we demand other people tell us their reasons for their opinions in the first place is we fear their reasons might be bad ones.
As I've already explained, there's a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it's stupid, otherwise I wouldn't say it. But at least I'm not so arrogant as to go around demanding other people acknowledge my highly advanced rationality. I don't demand that they accept "Chris isn't saying anything stupid" as an axiom in order to engage with me.
Beware Weirdness for Weirdness' Sake
There's a theory in the psychology and sociology of religion that the purpose of seemingly foolish rituals like circumcision and snake-handling is to provide a costly and therefore hard-to-fake signal of group commitment. I think I've heard it suggested—though I can't find by whom—that crazy religious doctrines could serve a similar purpose. It's easy to say you believe in a god, but being willing to risk ridicule by saying you believe in one god who is three persons, who are all the same god, yet not identical to each other, and you can't explain how that is but it's a mystery you accept on faith... now that takes dedication.
Once you notice the general "signal group commitment in costly ways" strategy, it seems to crop up everywhere. Subcultures often seem to go out of their way to be weird, to do things that will shock people outside the subculture, ranging from tattoos and weird clothing to coming up with reasons why things regarded as normal and innocuous in the broader culture are actually evil. Even something as simple as a large body of jargon and in-jokes can do the trick: if someone takes the time to learn all the jargon and in-jokes, you know they're committed.
This tendency is probably harmless when done with humor and self-awareness, but it's more worrisome when a group becomes convinced its little bits of weirdness for weirdness' sake are a sign of its superiority to other groups. And it's worth being aware of, because it makes sense of signaling moves that aren't straightforwardly plays for higher status.
The LessWrong community has amassed a truly impressive store of jargon and in-jokes over the years, and some of it's quite useful (I reiterate my love for the term "meta-contrarian"). But as with all jargon, LessWrongian jargon is often just a silly way of saying things you could have said without it. For example, people say "I have a poor mental model of..." when they could have just said they don't understand it very well.
That bit of LessWrong jargon is merely silly. Worse, I think, is the jargon around politics. Recently, a friend gave "they avoid blue-green politics" as a reason LessWrongians are more rational than other people. It took a day before it clicked that "blue-green politics" here basically just meant "partisanship." But complaining about partisanship is old hat—literally. America's founders were fretting about it back in the 18th century. Nowadays, such worries are something you expect to hear from boringly middle-brow columnists at major newspapers, not edgy contrarians.
But "blue-green politics," "politics is the mind-killer"... never mind how much content they add, the point is they're obscure enough to work as an excuse to feel superior to anyone whose political views are too mainstream. Outsiders will probably think you're weird, invoking obscure jargon to quickly dismiss ideas that seem plausible to them, but on the upside you'll get to bond with members of your in-group over your feelings of superiority.
A More Humble Rationalism?
I feel like I should wrap up with some advice. Unfortunately, this post was motivated by problems I'd seen, not my having thought of brilliant solutions to them. So I'll limit myself to some fairly boring, non-brilliant advice.
First, yes, some claims are more rational than others. Some people even do better at rationality overall than others. But the idea of a real person being anything close to an ideal rationalist is an extraordinary claim, and should be met with appropriate skepticism and demands for evidence. Don't forget that.
Also, beware signaling games. A good dose of Hansonian cynicism, applied to your own in-group, is healthy. Somewhat relatedly, I've begun to wonder if "rationalism" is really good branding for a movement. Rationality is systematized winning, sure, but the "rationality" branding isn't as good for keeping that front and center, especially compared to, say, the effective altruism meme. It's just a little too easy to forget where "rationality" is supposed to connect with the real world, increasing the temptation for "rationality" to spiral off into signaling games.