I am extremely wary of this kind of thinking. Partly because using power is a slippery slope to abusing power, and each time you use the banhammer on a maybe-troll it gets a little bit easier to use it on the next maybe-troll.
Not just because of that, but also because when other people come to a community full of self-purported rationalists and see someone banned for presenting community-disapproved opinions in what seems superficially to be an adequately calm and reasonable manner — someone who doesn't obviously and immediately pattern-match as a troll — that sets off the 'cult' alarms. It makes us look intolerant and exclusionary, even if we aren't.
It's fine for places like the SA forums to throw the banhammer around with reckless abandon, because they exist only for fun. But we have higher goals. We have to consider not just keeping our garden tidy, but making sure we don't look like overzealous pruners to anybody who has a potentially nice set of azaleas to contribute.
Slippery slopes work in both directions. Each time you don't strike down injustice, it becomes a bit easier to walk by the next time. I'd sooner have Marginal Value > Marginal Cost than Marginal Value < Marginal Cost and a lower Average Value.
Bad impressions work in both directions. When other people come to a community full of self-purported rationalists, and they see someone presenting stupid, low-status, incendiary comments and being treated as worthy of respect, it makes LW look stupid, low-status and incendiary because of the Representativeness Heuristic.
Obviously there is a continuum between anarchy and banning everything, and both extremes are local minima. The issue is to judge the local gradient.
I'm trying to develop a large set of elevator pitches / elevator responses for the two major topics of LW: rationality and AI.
An elevator pitch lasts 20-60 seconds, and is not necessarily prompted by anything, or at most is prompted by something very vague like "So, I heard you talking about 'rationality'. What's that about?"
An elevator response is a 20-60 second, highly optimized response to a commonly heard sentence or idea, for example, "Science doesn't know everything."
Examples (but I hope you can improve upon them):
"So, I hear you care about rationality. What's that about?"
"Science doesn't know everything."
"But you can't expect people to act rationally. We are emotional creatures."
"But sometimes you can't wait until you have all the information you need. Sometimes you need to act right away."
"But we have to use intuition sometimes. And sometimes, my intuitions are pretty good!"
"But I'm not sure an AI can ever be conscious."
Please post your own elevator pitches and responses in the comments, and vote for your favorites!