Information Hazards and Community Hazards
As aspiring rationalists, we generally seek to figure out the truth and hold relinquishment as a virtue, namely that whatever can be destroyed by the truth should be.
The only case where this does not apply is that of information hazards, defined as “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” For instance, if you tell me you committed a murder, making me an accessory after the fact, you have exposed me to an information hazard. In talking about information hazards, we focus on information that is harmful to the individual who receives it.
Yet a recent conversation at my local LessWrong meetup in Columbus raised the issue of what I would like to call community hazards, namely topics that are dangerous to discuss in a community setting. These are topics that are emotionally challenging and risk tearing apart the fabric of LW community groups if they are discussed.
Now, being a community hazard doesn’t mean that a topic is off-limits, especially in the context of a smaller, private LW meetup of fellow aspiring rationalists. What we decided is that if anyone in our LW meetup flags a topic as a community hazard, we go meta and have a discussion about whether we should discuss the topic. We would examine how emotionally challenging the discussion would be, whether it risks taking down Chesterton’s Fences that we don’t want taken down, whether certain aspects of the topic could be discussed with minimal negative consequences, and whether perhaps only some members of the group want to discuss it, in which case they can meet separately.
This would work differently in the context of a public rationality event, of course, of the type we run for a local secular humanist group as part of our rationality outreach work. There, we decided to use moderation strategies to head off community hazards at the pass, since the audience includes non-rationalists who may not be able to discuss a community-hazard-related topic well.
I wanted to share this concept and these tactics in the hope that they might be helpful to other LW meetups.
I think a very interesting trait of humans is that we can, for the most part, collaboratively truth-seek on most issues, except those defined as 'politics', where a large proportion of the population, with varying IQs, some extremely intelligent, believe things that are quite obviously wrong to anyone who has spent any amount of time seeking the truth on those issues without prior bias.
The ability of humans to totally turn off their rationality, to organise the 'facts' as they see them so as to confirm their biases, is nothing short of incredible. If humans treated everything like politics, we would certainly get nowhere.
I think trying to collaboratively truth-seek about political issues on a forum like LessWrong would, unfortunately, be a community hazard. People would not be able to get over their biases, despite being very open to changing their minds on all other issues.
Is this really true? It seems that humans have the capacity to endlessly debate many issues without changing their minds, including philosophy, religion, scientific debates, conspiracy theories, and even math on occasion. Almost any subject can create deeply nested comment threads of people going back and forth debating. Hell, I might even be starting one of those right now with this comment.
I don't think there's anything particularly special about politics. LessWrong has gotten away with horribly controversial things before, e.g. torture vs dust specks.