I agree with the reasoning in this essay.
Taken a bit further, however, it explains why valuing "safety" is extremely dangerous - so dangerous, in fact, that online communities should consciously reject it as a goal.
The problem is that when you make "safety" a goal, you run a very high risk of handing control of your community to the loudest and most insistent performers of offendedness and indignation.
This failure mode might be manageable if the erosion of freedom by safetyism were still merely an accidental and universally regretted effect of trying to have useful norms about politeness. I can remember when that was true, but it is no longer the case.
These days, safetyism is often - even usually - what George Carlin memorably tagged "Fascism masquerading as good manners". It's motivated by an active intention to stamp out what the safetyists regard as wrongspeech and badthink, with considerations of safety an increasingly thin pretext.
Whenever that's true, the kinds of reasonable compromises that used to be possible with honest and well-intentioned safetyists cannot be made any more. The only way to counterprogram against the dishonest kind is radical rejection - telling safetyism that we refuse to be controlled through it.
Yes, this means that enforcing useful norms of politeness becomes more difficult. While this is unfortunate, it is becoming clearer by the day that the only alternative is the death of free speech - and, consequently, the strangulation of rational discourse.
This is kind of true, but taken seriously it leaves "freedom" as the only achievable goal, which I don't think is right. I didn't say much about it because this kind of weaponized safety seems to me not a general feature of online communities but a feature of the present moment, and the correct solution is on the openness axis: don't let safetyists into your community, and kick them out quickly once they show their colors.
Also, the support for "safety" among these people is more on the level of slogan than actual practice. My experience is that groups which place a high priority on this version of "safety" are in fact riven with drama and strife. If you prioritise actual safety and not just the slogan, you'll find you still have to kick out the people who constantly hide behind their version of "safety".
I agree -- I think there are many communities which easily achieve a high degree of safety without "safetyism", typically by being relatively homogeneous or having external sources of trust and goodwill among their participants. LW is an example.
I think there is an important distinction between being "safe" from ideas and "safe" from interpersonal attacks. In an online space, I expect moderation to control more for the latter than the former, protecting not against wrongspeech so much as various forms of harassment.
Rational discourse is rarely possible in an environment that is not protected by moderation or at least shared norms of appropriate communication (the need for this protection tends to scale with "openness"). Having free speech on a particular platform is rarely useful if it's drowned out by toxicity. I support strong freedom of ideas (within the bounds of topicality), but when the expression of those ideas is in bad faith, there is no great value in protecting that particular form of expression.
There is a hypothesis that unconstrained speech and debate will lead to wrong concepts fading away and less wrong concepts rising to common acceptance, but internet history suggests that this ideal can't survive for long in a truly free space unless that freedom is never actually tested. As soon as bad actors are involved, you either have to restrict freedom or accept a degradation in discourse (or both). If safety is not considered at all, a platform effectively operates at the level of its worst users.
I agree that the distinction you pose is important. Or should be. I remember when we could rely on it more than we can today.
Unfortunately, one of the tactics of people gaming against freedom is to deliberately expand the definition of "interpersonal attack" in order to suppress ideas they dislike. We have reached the point where this tactic is routine.
Can you propose any counterprogram against this sort of dishonesty other than rejecting the premise of safetyism entirely?
Here's what I've noticed in moderation that consistently resists this kind of trolling/power game:
Making drama for the sake of it, even with a pretense, is usually regarded as a more severe infraction than the initial rudeness or personal attack. Creating extra work for the moderation team is frowned upon (don't feed the trolls). Punish every escalation and provocation, not just the first in the thread.
Escalating conflicts and starting flamewars is seen as more toxic than any specific mildly/moderately offensive post. Starting stuff repeatedly, especially with multiple different people, is a fast ticket to a permaban. Anyone consistently and obviously lowering the quality of discussions needs to be removed ASAP.
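To make these heuristics concrete, here is a minimal sketch in Python of an escalation-weighted penalty scheme. All names, weights, and thresholds are hypothetical illustrations, not drawn from any real moderation tool; the point is simply that escalation and serial conflict-starting accumulate penalty faster than one-off rudeness.

```python
from collections import defaultdict

# Hypothetical weights: escalation and starting flamewars cost far more
# than a single offensive post, matching the priorities described above.
OFFENSE_WEIGHTS = {
    "rudeness": 1,         # a single mildly/moderately offensive post
    "escalation": 3,       # answering provocation with more provocation
    "flamewar_start": 5,   # picking fights, especially with multiple people
}
PERMABAN_THRESHOLD = 10

class ModerationLedger:
    """Tracks penalty points per user and decides when to ban."""

    def __init__(self):
        self.scores = defaultdict(int)

    def record(self, user: str, offense: str) -> str:
        """Record an offense; return the action the policy calls for."""
        self.scores[user] += OFFENSE_WEIGHTS[offense]
        return "permaban" if self.scores[user] >= PERMABAN_THRESHOLD else "warn"

ledger = ModerationLedger()
print(ledger.record("alice", "rudeness"))       # warn: a one-off is tolerated
print(ledger.record("bob", "flamewar_start"))   # warn
print(ledger.record("bob", "flamewar_start"))   # permaban: repeat offender
```

Under weights like these, someone who repeatedly starts conflicts is removed long before an occasionally rude but otherwise constructive member accumulates enough points to matter.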
As long as people are dishonestly gaming the system, there will always be problems and there is no silver bullet solution. It's a fundamentally hard problem of balancing competing values. Any model proposed will have failings. The best we can do is to try to balance competing values appropriately for each individual platform. Each one will have different tilts but I doubt rejecting safety entirely is likely to be a good idea in most cases.
It's often tempting to idealize one value or another, but when any one of them is taken to an extreme, the others suffer greatly. If you can back away from a pure ideal in any dimension, the overall result tends to be more functional and robust, though never perfect.
I can't help but think about how this applies not just to online communities but to countries.
Some countries are open and free but not safe. I'm not sure what the best example of this is today. Mexico feels pretty open and free but not particularly safe.
Some countries are free and safe but not open. Japan comes to mind: pretty safe, people are pretty free, but it's basically impossible to immigrate (at best you can live and work there long-term as an outsider).
Some countries are open and safe but not free. Places like Singapore and the UAE let people come from all over the world, but you have to live by the comparatively restrictive local rules or you're out.
The whole thing is complicated, though, because industrialization seems to offer a Pareto improvement along all three axes: industrialized nations tend to be more open, free, and safe than non-industrialized and developing countries. This isn't always true, since industrialization also precipitated some totalitarian states that were less open, free, and safe than what came before. Maybe industrialization is just a force multiplier here and doesn't change the underlying tradeoffs being made?
WRT industrialism: I think the issue is the difference between where it can arise and what you can do with it once you have it. Early industrialism needed the cultural and material milieu of England and northern Europe in order to form, and a lot of the positive developments we associate with it are in fact baseline attributes of those regions. But once it had been developed, it could be exported to regions with a different cultural and political mix, and in those places it mostly just acted as a force multiplier.
This is very speculative, though.
The internet is open and free. But it's able to host platforms/communities making other types of tradeoffs, as your examples show. When the internet was smaller (less open), it didn't have the demand to support such a broad range of choices, which seems like a worse state of affairs to me.
Public school is open and safe-ish. When schools need more freedom, they trade away openness to get it. Only advanced chemistry students who've proven their capabilities are allowed to use the Bunsen burners. Only the older students are allowed to go off campus for lunch. Private schools and homeschooling are great too, but if your only choice were a private school or no school at all, that would seem like a worse state of affairs as well.
Scientific research is free and safe. But to get grants, publications, tenure, and grad students, you have to prove your merit as a scientist. So it's rather closed.
So it seems to me that openness is useful when we can use it to create a neutral platform for accessing lots of options where different tradeoffs are being made. Openness is like the top of the funnel: we're happy to trade it away for progressively more safe/free spaces that are well-suited to our individual abilities and preferences. But we need that open funnel-top to start navigating our way there in the first place. I agree that there are tradeoffs, but I don't think it makes sense to be against openness.
But we certainly shouldn't be enforcing openness where it's not optimal. For example, we should not be enforcing that all students must go to public school. We should not force people to associate their identity with their internet activity.
Examples of "Free, safe, not open" would be private communities, such as the r/Lawyers subreddit, where only actual lawyers are granted access.
Great post, thanks for sharing. I have been picking away at this area ("healthy communities") and one of my take-aways mirrors your conclusion that "safe & free" is the more correct choice and that "openness" is very risky. I see it as the need for strong, clear moderation as a force for setting and enforcing community norms. I have seen many web 1.0 forums dissolve into 4chan-like chaos due to lack of moderation. I've also seen software projects dissolve into an immobile mess when their community decided to only do "safe" and discard "free" completely.
A recent article by Anne Applebaum and Peter Pomerantsev about these topics points out that anonymity ("openness") creates problems. One example they bring up, Front Porch Forum -- which has strong moderation rules, requires a real identity to sign up, and is limited to those who live in Vermont and parts of NY -- is a great case of "safe & free" in action. This appears to be a growing trend, a reaction against the overpowering wave of social media, and I'm interested to see how it plays out in the next few years.
The problem (or at least a problem) with seeing moderation this way is that moderators who are aware of these concepts at all tend to dismiss criticism of arbitrary moderation as a mere demand for openness -- in other words, once you accept that openness is a bad idea, pretty much anything the moderators do becomes justified on that basis.
I only skimmed the article you linked, but I don't think I agree with your characterization of anonymity -- it's not on the axis of openness, but the axes of freedom and safety. If someone is already let in, they may choose to be anonymous if they think anonymity helps them express things that are outside the box of their existing reputation and identity, or if they think anonymity is a defense against hostile actors who would use their words and actions against them.
Good point. I think my characterization was overly broad; in my mind I was picturing anonymous registration, e.g. not checking identity at the gate, allowing anyone in, even multiple times.
I can imagine a community picking just one vertex instead of two.
E.g., a community of enthusiasts for some niche interest might prefer open, not free, and not safe.
Why? If they're closed, they won't have enough members for their niche. If they're free, the niche will get watered down by more-casual adjacent interests and discussions, and the quality of discussion will drop for the core users. And not safe, because they'll have to deliberately bully and gatekeep to maintain social norms and high expectations for the level of discussion.
Epistemic status: More of a heuristic than an iron-clad law. Hopefully useful for decision-making, even if it's not a full gears-level model.
Related: https://www.lesswrong.com/posts/vHSrtmr3EBohcw6t8/norms-of-membership-for-voluntary-groups
In software engineering, there is a famous dictum, known to Wikipedia as the Project Management Triangle, which takes the following form:

Good, fast, cheap: pick two.
The essence of the triangle is to point out that while good, fast, and cheap are all potentially desirable traits for a project, you cannot usually (or ever) get all of them at once. If you want something good and fast, it will not be cheap; if you want something fast and cheap, it will not be good.
I have observed that a similar triangle seems to govern communities, especially online communities. I phrase my triangle in a similar fashion:

Open, free, safe: pick two.
Let's consider what this means.
The three vertices of the triangle
These words indicate something specific in the context of online communities:

Open: anyone can join; the barriers to entry are low or nonexistent.
Free: once inside, members can say and do more or less what they want.
Safe: members are protected from abuse, harassment, and other interpersonal attacks.
It should be obvious that all three of these traits are a spectrum rather than a simple binary. No group is entirely open (there's always some kind of requirement to get in, even if that requirement is "you know how to use a keyboard"), and no group is completely closed once it has more than one member. Freedom exists on a scale, as even the freest groups ban outright spam. Safety, too, is never absolute, as interpersonal conflict is always possible even if the local culture discourages it.
Case studies of each type
Open, free, not safe: 4chan. 4chan is trivial to join. You can do and say almost whatever you want there, and behaviour which would get you kicked out of most other places on the internet is totally acceptable, even normal, on 4chan. As a result, 4chan is famously, outrageously, extravagantly unsafe. Abuse, mockery, insult and every other form of verbal violence you can imagine are typical there. This is the main thing 4chan is known for, alongside being a potent fermenter of memes. (These two things are probably related.)
Open, safe, not free: Stack Overflow and the related Stack Exchange network. Stack Overflow advertises itself as a site for professional developers, but there is no one checking this, and anyone can (and does) participate. SO, however, is explicitly not free. There are only two supported social verbs: "ask question" and "answer question", and the site's guidelines are quite strict about this fact. Notably, the verb "have conversation" is deliberately lacking. The commenting system is the only place where something like a back-and-forth exchange is allowed, and even there both custom and policy discourage extended argument. This strictness about the kinds of allowed interactions is arguably the main thing that allows the site to work.
Of course, one of the long-running complaints about SO is that it isn't safe enough, particularly for new users who haven't yet learned the community customs.
Free, safe, not open: Metafilter. It's actually kind of hard to think of examples of this category, because sites which aren't open tend not to enter the public consciousness. I choose Metafilter as one of the only sites I know of which is explicitly non-open (you have to pay a fee in order to get an account) but which has cultivated a long-standing reputation as a haven of good discourse, a reputation doubtless related to that barrier to entry.
But really, the best example of this third type is whatever your favourite Web 1.0 forum was. Most good forums of this type had the virtues of freedom and safety because they were implicitly closed: web access was rarer back then, signup was often slow and tedious, the ostensible topic was uninteresting to outsiders, and the site itself was so obscure that most people would never find it. These barriers to entry sufficed to make the group non-open in practice, even if there was no explicit rule keeping people out. But as has been noticed elsewhere (cf. https://www.lesswrong.com/posts/tscc3e5eujrsEeFN4/well-kept-gardens-die-by-pacifism), when implicit barriers to entry become too low, many such communities collapse because they aren't prepared to do the necessary work to maintain their borders.
Failing at all three
Getting two out of three virtues is the maximum. My thesis is that it's not possible to have all three, but it is entirely possible to have only one, or none.
I witnessed the decline (from my POV -- others may not have perceived these events as a decline) of a group meant to help new professionals that failed in just this manner. The group was deliberately non-open: you had to apply to join, and your application required you to present work of an acceptable level of professionalism. As a result, the tone and quality of on-topic discussions in the group were extremely high, and the group had a lot of pleasant socialising and off-topic discussion in its dedicated sub-fora. It was not open, but it was free and safe.
However, there was a faction within the group that decided the forum wasn't safe enough, particularly for certain classes of people (stop me if you've heard this one before). Members of this faction first engaged in a series of high-energy confrontations, decreasing everyone's safety, then took those very confrontations as evidence that the group was unsafe. What followed was a slow downward ratchet in which the group became simultaneously less free and less safe, with an ever-proliferating thicket of regulations, moderators, and oversight groups, necessitated by more and more frequent conflicts over the content of those very regulations and supposed infractions.
This should serve as a reminder that even getting two out of three is something of an accomplishment, and it's possible to get none.
The problems of big social media
Consider Twitter. Twitter is stuck between three incompatible demands:

It must stay open, because its business model depends on maximising the number of users and their engagement.
Its users expect the freedom to say what they like.
Advertisers, regulators, and the public demand safety.
What we observe Twitter doing to "solve" this problem is an intermittent and incoherent attempt at automated moderation, supplemented by occasional human intervention against particular high-profile accounts. These efforts cannot really succeed in the form in which Twitter has deployed them, but they do signal (sort of) that Twitter is trying to limit freedom and enforce safety. I expect this to continue for the foreseeable future, with Twitter gradually and haphazardly becoming less free in order to ensure a minimal degree of safety, while putting in no more effort than it has to.
Every other social media site faces the same trilemma: it has to stay open for financial reasons, but with too much freedom it comes under fire for lack of safety. The counterpoint to this development is Discord, Slack, WhatsApp, Signal and the like. What these platforms have in common is that they don't put everyone into the same social universe, but rather create a framework in which you can easily and quickly create your own private groups, organised however you want, with local control over membership and moderation. This is a re-emergence of the Web 1.0 model, in which we once again have spaces that are safe and free because they have enforceable boundaries.
Against Openness
If it wasn't already obvious, I actually don't think that the choice between vertices of this triangle is neutral. There is a correct choice, and that choice is for freedom and safety, and against openness. Freedom and safety create great communities, while openness is at best a tax that groups must pay to avoid stagnation.
As described above, this is a choice that the big social media networks can't make, because they need to be open in order to function, and this is why I find so little value in them. My hope for the future of internet communities is that the model of Discord and its kin becomes the new norm.
And if you're running a site that's not on social media at all, a genuine web forum, then you should understand what your job is. Build a gate and keep it well.
But you already knew that.