I'm going to make a controversial suggestion: one useful target of tolerance might be religion.
I think we pretty much all understand that the supernatural is an open and shut case. Because of this, religion is a useful example of people getting things screamingly, disastrously wrong. And so we tend to use that as a pointer to more subtle ways of being wrong, which we can learn to avoid. This is good.
However, when we speak too frequently, and with too much naked disdain, of religion, these habits begin to have unintended negative effects.
It would be useful to have resources on general rationality to which to point our theist friends, in order to raise their overall level of sanity to the point where religion can fall away on its own. This is not going to work if these resources are blasting religion right from the get-go. Our friends are going to feel attacked, quickly close their browsers, and probably not be too well-disposed towards us the next time we speak (this may not be an entirely hypothetical example).
I'm not talking about respect. That would be far too much to ask. If we were to speak of religion as though it could genuinely be true, we would be spectacular liars. Still, not bringing up the topic when it's not necessary, using another example if there happens to be one available, would, I think, significantly increase the potential audience for our writing.
The problem with tolerating religion is that, as Dawkins pointed out, it has received too much tolerance already. One reason religion is so widespread and obnoxious is that it has been so off limits to criticism for so long.
A good solution to this is to have some diversity of rhetoric. Some people can be blunt, others openly contemptuous, and others more friendly and overtly tolerant. There's room enough for all of these.
The less tolerant people destroy the special immunity to criticism that religion has long enjoyed, and get to be seen as the "extremists". Meanwhile they make the sweetness-and-light folks look more moderate by comparison, which is a useful thing. A lot of people reflexively reject extremism, which they define as simply the most extreme views that they're hearing expressed on a contentious issue. Make the extremists more extreme, and more moderate versions of their viewpoint become more socially acceptable.
Someone has to play the villains in this story.
I'm very much in favor of what you wrote there. I've been thinking of starting a separate thread about this at some point. Feel free to beat me to it; I won't be ready to do so very soon anyway. But here's a stab at what I'm thinking.
This is from the welcome thread:
A note for theists: you will find LW overtly atheist. We are happy to have you participating, but please be aware that other commenters are likely to treat religion as an open-and-shut case. This isn't groupthink; we really, truly have given full consideration to theistic claims and found them to be false.
This is fair. I could, in principle, sit down and discuss rationality with a group having such a disclaimer, except in favor of religion, assuming they got promoted to my attention for some unrelated good reason (say, I was linked to an article, read it and two more, and found them all impressive). Not going to happen in practice, probably, but you get my drift.
Except that's not the vibe of what Less Wrong is actually like, IMO, that we're "happy to have" these people. Atheism strikes me as a belief that's necessary for acceptance into the tribe. This is not a Good Thing, for many reasons, the ...
I'm going to make a controversial suggestion: one useful target of tolerance might be religion.
I'll try to tolerate your tolerance.
(I blog using any examples that come to hand, but when I canonicalize I try to remove explicit mentions of religion where possible. Bear in mind that intelligent religious people with Escher-minds will see the implications early on, though.)
I think you point up the problem with your own suggestion - we have to have examples of rationality failure to discuss, and if we choose an example on which we agree less (e.g., something to do with AGW) then we will end up discussing the example instead of what it is intended to illustrate. We keep coming back to religion not just because practically every failure of rationality there is has a religious example, but because it's something we agree on.
It's not so much that I'm trying to hide my atheism, or that I worry about offending theists - then I wouldn't speak frankly online. The smart ones are going to notice, if you talk about fake explanations, that this applies to God; and they're going to know that you know it, and that you're an atheist. Admittedly, they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.
And I don't plan far enough ahead for a day when religion is dead, because most of my utility-leverage comes before then.
But rationality is itself, not atheism or a-anything; and therefore, for aesthetic reasons, when I canonicalize (compile books or similar long works), I plan to try much harder to present what rationality is, and not let it be a reaction to or a refutation of anything.
Writing that way takes more effort, though.
they may be much less personally offended if you never spell out the application - not sure why, but that probably is how it works.
Once you connect the dots and make the application explicit, they feel honor-bound to take offense and to defend their theism, regardless of whether they personally want to take offense or not. In their mind, making the application explicit shifts the discussion from being about ideas to being about their core beliefs and thus about their person.
"of someone who I wrote off as hopeless within 30 seconds of being introduced to them."
Few college professors would do this, because many students are unimpressive when you first talk with them but then do brilliantly on exams and papers.
I've known people to seem hopeless for months, then suddenly, for no observable reason, begin acting brilliantly. It's another reminder that small data sets aren't sufficient to predict a system as complex as human behaviour.
I usually have something nice to say about most things, even the ideas of some pretty crazy people. Perhaps less so online, but more in person. In my case the reason is not tolerance, but rather a habit that I have when I analyse things: when I see something I really like I ask myself, "Ok, but what's wrong with this?" I mentally try to take an opposing position. Many self-described "rationalists" do this habitually. The more difficult one is the reverse: when I see something I really don't like, but where the person (or better, a whole group) is clearly serious about it and has spent some time on it, I force myself to again flip sides and try to argue for their ideas. Over the years I suspect I've learnt more from the latter than the former. Externally, I might just sound like I'm being very tolerant.
Note that tolerance is part of a general conversion strategy. Nitpicking everyone who disagrees with you in the slightest isn't likely to make friends; it is likely to make your opponents think you are an arrogant jerk. Sometimes you just have to keep it to yourself.
Punishing for non-punishment is an essential dynamic for preserving some social hierarchies, at least in schoolyards and in Nazi Germany.
Abby was just telling me this afternoon that psychologists today believe that when kids are picked on in school, it's their own fault - either they are too shy, or they are bullies. (There is a belief that bullies are picked on in school, something I never saw evidence of in my school days except when it was me doing the picking-on.)
My theory is that the purpose of picking on kids in school is not to have effects on the kid picked on, but to warn everyone else that they will be picked on if they fail to conform. A kid is thus likely to be picked on if they don't respond to social pressures. Thus the advice that every parent gives their children, "Just ignore them if they pick on you," is the worst possible advice. Fight back, or conform; failing to respond requires them to make an example of you and does not impose any cost on them for doing so.
Wolves have a very strict social hierarchy, but I've never noticed evidence of punishment for a failure to punish. So this behavior isn't necessary.
We can and should reach whatever conclusions about people we wish. But we should be very slow to fail to observe and accept new evidence about them.
Excluding people from discussion may screen out their nonsense (or at least the things you thought were nonsense), but it also prevents you from discovering that you made a hasty decision. Once you've started ignoring someone, you can no longer observe what they say - and possibly find that they're smarter than you thought they were.
It's worth acquiring new data even from those you've discarded, at least once in a while.
I think there is an important distinction between cheap and expensive tolerance. If I am sitting on a plane and don't have a good book and am talking to my seatmate, and they seem stupid and irrational, being tolerant is likely to lead to an enjoyable conversation. I may even learn something.
But if I am deciding what authors to read, whose arguments to think about more seriously, etc., then it seems irrational to not judge and prioritize with my limited time.
And this relates to indirect tolerance - someone who doesn't judge and prioritize good arguments ...
The advice isn't about your attitude towards your seatmate's stupidity and irrationality. It's directed at your rationalist buddy sitting on your other side -- she's being advised not to be annoyed at you if you choose to be tolerant.
Eliezer is correct, but this post should be followed up by one about the many places where failing to punish non-punishers, in other words, tolerating free-riders, has negative consequences.
If you transgress, I might have a problem with you. If you actively shield a transgressor, I might have a problem with you. If you just don't punish a transgressor, the circumstances where I might have a problem are pretty rare, I think!
The application of this principle to [outrage over the comments and commenters which a blogger declines to delete/ban] is left as an exercise for the reader.
My attitude toward Ben's tolerance depends on the context. When he does it as a person, I appreciate it. When he does it as chair of the AGI conference, I don't. There were some very good presentations this year, but there were also some very bad time-wasters.
But probably I should blame the reviewers instead.
Damn M-nt-f-x! Damn every one that won't damn M-nt-f-x!! Damn every one that won't put lights in his windows and sit up all night damning M-nt-f-x!!!
Since I saw this comment before the post it goes with, I thought it was some sort of rant about people not using Emacs for their comments. ;-)
Great post. I think I'd already sort of started trying to do this, although I couldn't have put it as well. Now what I want to know is how much to tolerate people who are less tolerant than me. I'm not quite sure what to do when I meet someone who is infuriated by patterns of thinking that I consider only trivially erroneous or understandable under certain circumstances.
I am going to disagree with the idea that 'being "intolerant of intolerance"' is inherently inconsistent. The problem is with the word tolerance, which contains multiple meanings. I think that it is morally wrong to discriminate against people for things that they can't change. Believing that someone of a different race can't possibly be intelligent is a moral wrong. Furthermore, it is so indicative of stupidity that I do not wish to associate with such a person, if they are in a culture where theirs is the minority view. To put it another way, to...
There's a question of whether there's an important difference in kind between sorts of tolerance. Here's an analogy which might or might not work: assume that, in general, a driver of a vehicle drives as fast as they think it is safe for cars to be driven in general. Only impatience would cause them to not tolerate people who drive slower than they; a safety concern could cause them to be upset by people who drive faster, since they consider that speed unsafe. Say you have two people who each drive at 50 mph. One of them tolerates only slower drivers b...
I don't get it. You want us to work with those who refuse to 'punish' foolishness but who aren't fools themselves, in order, presumably, to fight against foolishness. All right, I can see the sense in that.
Why does it follow that we should censor ourselves when dealing with these non-foolish foolishness enablers? Why can't we work with them and show our disapproval of their enabling?
a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
Could I get a reference for this? I wanted to refer someone else to it, and my Google searches failed me.
In a situation where someone who seems to be very like-minded is more tolerant to another person X than I would be, I would be very interested in why, if I don't already know. Perhaps my friend has reasons that I would agree with, if I only knew them. (Some pragmatic reasons come to mind.)
If I still disagree with my friend, even after knowing his reasons, I would then express the disagreement and see if I couldn't convert my friend on the basis of our common views. If I fail to convert him, it is because our views differ in some way. Is the view we disagr...
... punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
Have you done the math? This would have important implications for the development of intolerant societies - it was clearly crucial to Nazism - but I've never heard of any studies on the subject. People are still working on first-order punishment.
A good reference on that: Simon Gächter, Elke Renner, and Martin Sefton, "The Long-Run Benefits of Punishment", Science, 5 December 2008, 322:1510 [DOI: 10.1126/...
Whether someone agrees with us isn't as important as why.
If someone has sufficiently low standards of quality that they fail to disapprove of even the worst garbage, then they're of little use in distinguishing value from nonsense.
As a great deal of nonsense is not only passively but actively harmful (not just failing to be correct, but inclining people towards error), it is vitally important to tell the two apart. People who can't or won't do this are not only not-helpful, but make our tasks harder.
Strive to have good standards and apply them. Don't worry about being tolerant or intolerant -- the right mix of behaviors will naturally arise from the application of correct standards.
The communities that I've been a part of which I liked the best, which seemed to have the most interesting people, were also the nastiest and least tolerant.
If you can't call a retard a retard, you end up with a bunch of retards, and then the other people leave. Whenever someone nice eventually came to power, this is invariably what happened.
Eliezer isn't suggesting that you refrain from calling fools "fools". He's suggesting you tolerate people who are otherwise non-foolish except that they don't call fools "fools".
Tolerating fools might not be a good idea. Tolerating non-fools who themselves tolerate fools is, AFAICT, a glaringly good idea. If you create an atmosphere where everyone has to hate the same people... you run into some of the failure modes of Objectivism.
In Hanson and Simler's 'The Elephant in the Brain', they mention Axelrod's (1986) "meta-norm" modelling which shows that cooperation is stable only when non-punishers are punished.
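For what it's worth, the core of Axelrod's argument can be sketched with a toy payoff comparison. The numbers below are purely illustrative assumptions in the spirit of the 1986 meta-norms model, not Axelrod's actual parameters:

```python
# Toy payoff comparison in the spirit of Axelrod's (1986) meta-norms model.
# All numbers are illustrative assumptions, not Axelrod's actual parameters.

ENFORCEMENT_COST = 2.0   # cost you pay to punish a defector
PENALTY = 9.0            # harm suffered by anyone who gets punished

def payoff_if_you_punish(meta_norm_active: bool) -> float:
    # Punishing a defector always costs you the enforcement cost,
    # whether or not a meta-norm is in force.
    return -ENFORCEMENT_COST

def payoff_if_you_shirk(meta_norm_active: bool) -> float:
    # Shirking is free... unless bystanders enforce a meta-norm
    # and punish your failure to punish.
    return -PENALTY if meta_norm_active else 0.0

# Without a meta-norm, shirking strictly beats punishing, so costly
# punishment (and with it, the norm it enforces) erodes over time.
assert payoff_if_you_shirk(False) > payoff_if_you_punish(False)
# With a meta-norm, punishing beats shirking, and the norm is stable.
assert payoff_if_you_punish(True) > payoff_if_you_shirk(True)
```

Which is exactly the tension in this thread: the same meta-norm dynamic that stabilizes cooperation is what Eliezer calls a dangerous idiom for locking in bad equilibria.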
Just a small point-- tolerating tolerance seems to me to be a less powerful tool than the principle of charity, of which plenty has been said on this site. For me, the image:
One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning.
doesn't even start to feel right for me from a 'should' perspective (though it is quite familiar from an 'is' perspective). My image of a rationalist is someone exceptionally concerned with making sense of what others are saying, because arguments are not battles.
I have a massively huge problem with this. Every time a non-fiction author or scientist I respect gives credit to a non-rationalist, I cringe inside. I have to will myself to remember that just because they have a lower rationality threshold, their work is not automatically discredited.
IAWY. However, regarding the practice of reminding yourself every time in order to prevent the behavior, why expend two units of mental force, opposing each other, when you could just remove both forces? It'd be more efficient just to get rid of whatever underlying belief or judgment makes you feel the need to be intolerant of the tolerant... and you'd suffer less.
I'm programmed to get angry when there's misbehavior and I don't know that I can just shut this off when the misbehavior consists of underpunishing. Maybe I should try channeling the anger toward the nonpunishee rather than the nonpunisher?
This post has motivated me to put my foot down around one friend who is so bitchy about others.
One of the likely characteristics of someone who sets out to be a "rationalist" is a lower-than-usual tolerance for flaws in reasoning. This doesn't strictly follow. You could end up, say, rejecting your religion, just because you spotted more or deeper flaws in the reasoning, not because you were, by your nature, more annoyed at a flaw of fixed size. But realistically speaking, a lot of us probably have our level of "annoyance at all these flaws we're spotting" set a bit higher than average.
That's why it's so important for us to tolerate others' tolerance if we want to get anything done together.
For me, the poster case of tolerance I need to tolerate is Ben Goertzel, who among other things runs an annual AI conference, and who has something nice to say about everyone. Ben even complimented the ideas of M*nt*f*x, the most legendary of all AI crackpots. (M*nt*f*x apparently started adding a link to Ben's compliment in his email signatures, presumably because it was the only compliment he'd ever gotten from a bona fide AI academic.) (Please do not pronounce his True Name correctly or he will be summoned here.)
But I've come to understand that this is one of Ben's strengths—that he's nice to lots of people that others might ignore, including, say, me—and every now and then this pays off for him.
And if I subtract points off Ben's reputation for finding something nice to say about people and projects that I think are hopeless—even M*nt*f*x—then what I'm doing is insisting that Ben dislike everyone I dislike before I can work with him.
Is that a realistic standard? Especially if different people are annoyed in different amounts by different things?
But it's hard to remember that when Ben is being nice to so many idiots.
Cooperation is unstable, in both game theory and evolutionary biology, without some kind of punishment for defection. So it's one thing to subtract points off someone's reputation for mistakes they make themselves, directly. But if you also look askance at someone for refusing to castigate a person or idea, then that is punishment of non-punishers, a far more dangerous idiom that can lock an equilibrium in place even if it's harmful to everyone involved.
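The first sentence can be illustrated with a one-shot public goods game, a standard textbook setup; the multiplier and sanction sizes below are arbitrary illustrative choices, not taken from any particular study:

```python
def public_goods_payoffs(n_coop: int, n_defect: int, punish: bool = False):
    """Per-player payoffs in a one-shot public goods game.

    Each cooperator contributes 1 unit; the pot is multiplied by 1.6
    and split evenly among all players. The multiplier and sanction
    sizes are arbitrary illustrative assumptions.
    """
    group = n_coop + n_defect
    share = 1.6 * n_coop / group
    coop_payoff = share - 1.0      # cooperators paid their unit in
    defect_payoff = share          # free riders kept theirs
    if punish:
        defect_payoff -= 2.0                   # sanction on each defector
        coop_payoff -= 0.5 * n_defect / group  # punishing costs the punishers too
    return coop_payoff, defect_payoff

# Without punishment, defecting strictly dominates: cooperation unravels.
c, d = public_goods_payoffs(5, 5, punish=False)
assert d > c
# With (costly) punishment, cooperating pays better than defecting.
c, d = public_goods_payoffs(5, 5, punish=True)
assert c > d
```

Note that first-order punishment here is itself costly to the punisher, which is what opens the door to the second-order problem the rest of the paragraph describes.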
The danger of punishing nonpunishers is something I remind myself of, say, every time Robin Hanson points out a flaw in some academic trope and yet modestly confesses he could be wrong (and he's not wrong). Or every time I see Michael Vassar still considering the potential of someone who I wrote off as hopeless within 30 seconds of being introduced to them. I have to remind myself, "Tolerate tolerance! Don't demand that your allies be equally extreme in their negative judgments of everything you dislike!"
By my nature, I do get annoyed when someone else seems to be giving too much credit. I don't know if everyone's like that, but I suspect that at least some of my fellow aspiring rationalists are. I wouldn't be surprised to find it a human universal; it does have an obvious evolutionary rationale—one which would make it a very unpleasant and dangerous adaptation.
I am not generally a fan of "tolerance". I certainly don't believe in being "intolerant of intolerance", as some inconsistently hold. But I shall go on trying to tolerate people who are more tolerant than I am, and judge them only for their own un-borrowed mistakes.
Oh, and it goes without saying that if the people of Group X are staring at you demandingly, waiting for you to hate the right enemies with the right intensity, and ready to castigate you if you fail to castigate loudly enough, you may be hanging around the wrong group.
Just don't demand that everyone you work with be equally intolerant of behavior like that. Forgive your friends if some of them suggest that maybe Group X wasn't so awful after all...