One meta-hazard would be that "community hazards" could end up defined far too broadly, encompassing anything that might make some people feel uncomfortable and simply become a defense for sacred values of the people assessing what should constitute "community hazards".
Or worse, that the arguments for one set of positions could get classified as "community hazards" such that, to use a mind-killing example, all the pro-life arguments get classified as "community hazards" while the pro-choice ones do not.
So it's probably best to be exceptionally conservative about what you're willing to classify as a "community hazard".
Good point about that. I think it's a matter of trade-offs - my take is that anything that an aspiring rationalist I trust classifies as a community hazard is a community hazard. For instance, one rationalist I know had a traumatic experience with immigration into the US, and as a result has PTSD around immigration discussions. This makes immigration discussions a community hazard issue in our local LW meetup, due to her particular background. It wouldn't be in another setting. So we hold immigration discussions when she's not there.
Information Hazards and Community Hazards
As aspiring rationalists, we generally seek to figure out the truth and hold relinquishment as a virtue, namely that whatever can be destroyed by the truth should be.
The only case where this does not apply is that of information hazards, defined as “a risk that arises from the dissemination or the potential dissemination of (true) information that may cause harm or enable some agent to cause harm.” For instance, if you tell me you committed a murder and thereby make me an accessory after the fact, you have exposed me to an information hazard. In talking about information hazards, we focus on information that is harmful to the individual who receives it.
Yet a recent conversation at my local LessWrong meetup in Columbus brought up an issue I would like to call community hazards, namely topics that are dangerous to talk about in a community setting. These are topics that are emotionally challenging and carry the risk of tearing apart the fabric of LW community groups if discussed.
Now, being a community hazard doesn’t mean that a topic is off-limits, especially in the context of a smaller, private LW meetup of fellow aspiring rationalists. We decided that if anyone in our LW meetup considers a topic a community hazard, we go meta and have a discussion about whether we should discuss the topic. We examine how emotionally challenging discussing it would be, whether discussing it risks taking down Chesterton’s Fences that we don’t want taken down, whether certain aspects of the topic could be discussed with minimal negative consequences, or whether only some members of the group would like to discuss it, in which case they can meet separately.
This works differently in the context of a public rationality event, of course, of the type we run for a local secular humanist group as part of our rationality outreach work. There, we decided to use moderation strategies to head off community hazards at the pass, since the audience includes non-rationalists who may not be able to discuss a community-hazard-related topic well.
I wanted to share this concept and these tactics in the hope that they might be helpful to other LW meetups.