I moved the big meta-level comment thread from "Yes Requires the Possibility of No" over to here, since it seemed mostly unrelated to that top-level post. This not being on frontpage also makes it easier for people to just directly discuss the moderation and meta-level norms.
A case more troublesome than an ineffective standard is an actively harmful one. Part of the rationalist virtue sphere is recognising your actual impact even when it goes wildly against your expectations. Political speech being a known clusterfuck should orient us toward "get it right" rather than "apply solutions". People who grow up optimising for harmony in their speech, while treating epistemology as a dump stat, end up more effective at conversational safety than we are.

Even if rationalists hold more useful beliefs about other belief-groups, the rationalist memeplex being more distant from other memeplexes means that meaningful interaction is harder. We run the risk of having our models of groups such as theists advocate for their interests rather than for the persons themselves. Sure, we have distinct reasons why we can't implement group interoperability the same way they can and do. But if we emphasise how little we value safety relative to accuracy, that doesn't move us toward solving safety. And we are supposedly good at intentionally setting out to solve hard problems; it should be permissible to try to remove unnecessary obstacles to people joining the conversation. If the plan is to come up with an awesome way to conduct business/conversation and then let that discovery benefit others, then a move that makes discovery easier but sharing of the results harder might not bring us much closer to the goal than naively caring only about discovery.