Epistemic Status: Strong opinions weakly held. Mostly trying to bring some things into the discourse that I think are too often ignored.
Some updates I've made based on the discussion in this post are here.
Introduction
Jessicata's Dialogue on Appeals to Consequences is an expansion of a response that she wrote to me a few months ago, arguing a particular point that I agree with: namely, if you have an object level thing you want in the world, it's almost never worth lying or withholding information about that thing, because doing so breaks meta level norms about truthseeking that are much more important to accomplishing object level goals in general. However, there's a slightly more interesting case that I think is murkier, and that the original comment was pointing to. That is, what if your truthseeking norms are in tension with OTHER meta level norms that are important? In general, how do you deal with instances where tensions between two important values leave you not knowing what to do?
Dialogue
Let's imagine John and Jill are discussing John's behavior in a private space. Jill is a leader of the space, and John is someone who frequently attends the space and has lively discussions trying to get to the truth.
Jill: John, I've had several complaints about your tendency to steer conversations towards the divisive topic of whether everyone should be a vegan, and I'm going to ask you to tone it down a bit when you're in our main space.
John: Are people saying that I'm making arguments that are false?
Jill: No, no one is saying that you're making false arguments.
John: Are people saying that I'm derailing the conversation? I think you'll find that every instance in which I brought up veganism was highly relevant to the conversation.
Jill: Yes, some people have said that, but I happen to believe you when you say that you've only brought it up in contexts you found relevant.
John: Then what's the problem? I'm stating relevant true beliefs that add to the totality of the conversation and steer it in conversationally relevant directions.
Jill: The problem is twofold. Firstly, people find it annoying to retread the same conversation over and over. More importantly, this topic usually leads to demon conversations, and I fear that continued discussion of it at the rate it's currently discussed could lead to a schism. Both of these outcomes go against our value of being a premier community that attracts the smartest people, as they're actually driving those people away!
John: Excuse me for saying so, but this is a clear appeal to consequences!
Jill: Is it? I'm not saying that the negative consequences to the community mean that what you're saying is false - that would be a clear logical fallacy. Instead I'm just asking you to bring up this argument less often because I think it will lead to bad outcomes.
John: Ok, maybe it's not a logical fallacy, but it is dangerous. This community is built on a foundation of truth seeking, and once we start abandoning that because of people's feelings, we devolve into tribal dynamics and tone arguments!
Jill: Yes, truthseeking is very important. However, it's clear that choosing just one value as sacred, and not allowing for tradeoffs, can lead to very dysfunctional belief systems. I believe you've pointed at a clear tension in our values as they're currently stated: the tension between freedom of speech and truth on the one hand, and the value of making a space that people actually want to have intellectual discussions in on the other.
John: You're saying there's a tension, but to me there's a clear and obvious winner. Under your proposed rules, anyone will be able to silence anything simply by saying they don't like it!
Jill: If I find someone trying to silence good arguments through that tactic, I'll sit them down and have a similar conversation to the one we're having now.
John: That's even worse! That means that instead of putting the allowed conversation topics up to a vote, we're putting them in the hands of one person: you! You can silence any conversation you want.
Jill: I can see how it would seem that way, but I believe we've cultivated some great cultural norms that make it harder for me to play political games like that. Firstly, our norm of radical transparency means that this and all similar conversations I have will be recorded and shared with everyone, and any such political moves by me will be laughably transparent.
John: That makes sense. Also, Hi Mom!
Jill: Second, our organization allows anyone to apply the values to anyone else, so if you see ME not following the values in any of my talks, you can call me out on it and I'll comply.
John: Sure, you say that now, but because of your role you can just defy that rule whenever you want!
Jill: That's true, and it's one of the reasons I've worked to cultivate integrity as a leader. Has there been any instance of my behavior where you think I would actually do that?
John: No, I suppose not. Are there any other cultural norms preventing you from using the arbitrary nature of decisions for your own gain?
Jill: There's one more. Our organization has a clear set of values, and as the leader one of my roles is to spearhead changes to those values in clear ways when there's tension between them. So I'm not just going to talk to you; I'm actually going to suggest to the organization that we clarify our values so that they tell us what to do in these relatively common situations, and I'm going to have you help me.
John: I think that makes sense. We can probably make a list of topics that people are allowed to taboo, and a list of topics people are not allowed to taboo, and then I'll always know what it's ok to "appeal to consequences" on.
Jill: I'm afraid that particular rule would be unwise. I think there are practically unlimited scissor statements that could cause schisms in our community, and a skilled adversary could easily find one that's not on our list of approved topics. No, I'm afraid we'll need to make a general value that can cover these situations in the general case.
John: Oh, so trying to avoid appeal-to-consequences arguments can actually be exploited by someone looking to harm our community? That's interesting! But it's not clear to me that there is a general rule that can cover all the cases.
John: I'm not sure I get it.
Jill: Well, you have a need to express that everyone should be a vegan. It's clearly very important to you, or you wouldn't bring it up so much. At the same time, many of the people in our community have a need for variety in their conversation, and you should be aware of this when talking with them. Finally, our organization has a need to not experience or discuss scissor statements too often, in order to remain healthy and avoid frequent schisms. By bringing this topic up so much, you're putting your needs above the needs of the others you're interacting with and the group, instead of bringing it up less frequently, which would place everyone's needs on equal ground.
John: That makes sense. I suppose by the same token, if there's a really interesting topic that's helpful for the group to know about, and that lots of people want to talk about, it would be putting your own needs above others' needs if you said it hurt your feelings so people couldn't talk about it.
Jill: Exactly!
John: So this rule seems plausible to me, and I'm sure it would be great for many people, but I have to admit it's not for me. I'd much prefer a space where people are allowed to say anything they want to me, and I can say anything I want to them in return.
Jill: I agree that this may not be the best rule for everybody. That's why next week we're going to start experimenting with The Archipelago Model. As I said, I want you to tone it down in the main room, which follows the Maturity value mentioned above. However, we've designated a side room that instead follows Crocker's Rules. You're allowed to go to either room, but while in a room you must follow its stated values. And most importantly, all conversations are recorded and can be listened to by anyone in the community!
John: Cool, that seems worthwhile, but very messy and likely to have numerous hidden failure modes...
Jill: I agree, but it at least seems worth a shot!
Commentary
So you probably noticed already, but this post wasn't really about Appeals to Consequences at all. Instead, it's a meditation on how good organizations deal with tensions in their values, and avoid being overrun by skilled sociopaths. A lot of these suggestions and ideas come from the work I've been doing over the past year or so to figure out what makes great organizations and communities. I'd be particularly interested in people's inner sim of how the organization described by John and Jill above would go horribly wrong, and counter-ideas about what could be done to fix THOSE issues.
I had trouble reading this as it felt like there were a lot of presumptions in conflict.
If you let people bring in random norms, the quality of the discussion will be random. In selecting some norms to be non-random, the site must somehow encourage certain norms and suppress incompatible ones. If people were "smart" they could know the norms beforehand, and then all speech would be flawless from a norm standpoint. But the interesting case is when someone in the discussion fails to effectively employ a norm. After the norm discussion, speech should be norm compliant, and I think the change from non-compliant to compliant means suppressing the "offending" parts. If no suppression happens, the harmful elements are left alive to do their damage.
Thus if a norm-officer is talking to you, there is some goal for how your speech is supposed to change. It's a good approach to try to get to this goal by selling why the norm is a good idea. However, I think the norm will be, or should be, enforced even if such "selling" fails. At the very least the moderator needs to make a call whether the conversation is a sufficient remedy for the detected danger or whether the issue should be escalated to less discussive "actual action" levels. If the discussion concluded "I will raise veganism just as often as I have previously", escalation would be the conclusion (or some kind of weird thing where the defiance tries to upset the whole norm structure, with the defiant party banking on the wider community overruling the principles the moderator is enforcing).
Compliant people will not constantly trigger norm violations, but that places a limit on effectively expressible stances, and I think effective communication about what does and does not cause a schism falls outside of that limit; the closest thing you get is a kind of plausible stereotype of needs. At worst, talking about what does and does not cause a schism causes a schism, which would trigger suppression of schism analysis. Thus what counts as a schism largely depends on the inertia of how it was understood when schism suppression was implemented, and is less responsive to people's actual needs. My thoughts might be too muddy about it, but it might culminate in a point where there is a norm of "it is a norm violation to have those private values or declare them as targets that the system should care about even to a slight degree".
I don't know about the following model, it is too shaky, but I am banking on the norm of describing how you think rather than what is convincing. Take a starting situation of
*1: 1000 flies, 1 human
and a later distribution of
Assume there is a brown substance that is olfactorily attractive to flies and repulsive to humans. A situation that goes from 1 to 2 is likely to be substance decorated, as all humans that joined must be "fly needs compatible" or at least find the whole deal of community participation to be worth it overall. However, if there is no such inertia, then a situation that goes from A to B means the substance decoration will be introduced (and would be conducive to causing the abandon rate of the humans to skyrocket). The scheme of making the minority conform relieves value tension but makes the overall values of the organization drift. What started as a human organization but drifted into a fly organization might no longer be human-aligned. For entities that try to survive, this might not matter that much. But for things that are tools, starting to do another task can plausibly be counted as a malfunction (although if my hammers randomly morphed into saws I might still find the saws not to be useless, but if I made a highly specific tool and it morphed into a generic one I would probably be pretty upset).
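Since the model above is only gestured at, here is one possible minimal toy simulation of the drift dynamic it describes; all the starting counts, leave rates, and join rates below are invented assumptions for illustration, not anything stated in the comment.

```python
# Toy sketch (assumed numbers): the majority's preference sets the "substance"
# norm each year, the minority that dislikes the current norm leaves faster,
# and newcomers self-select for whatever norm is in force.

def simulate(flies, humans, years=20, joiners_per_year=50):
    for year in range(years):
        substance_norm = flies > humans  # majority preference decides the norm
        if substance_norm:
            humans = int(humans * 0.5)   # humans find the space repulsive and leave quickly
            flies = int(flies * 0.95) + joiners_per_year  # joiners are "fly needs compatible"
        else:
            flies = int(flies * 0.5)
            humans = int(humans * 0.95) + joiners_per_year
        print(f"year {year:2d}: flies={flies:5d} humans={humans:5d} "
              f"norm={'substance' if substance_norm else 'no substance'}")
    return flies, humans

# Scenario 1: flies start as the overwhelming majority, so the substance norm
# locks in and the organization stays a fly organization.
simulate(flies=1000, humans=1)

# Scenario A: humans start as the majority; whichever side is ahead when the
# norm gets set tends to stay ahead, which is the "inertia" mentioned above.
simulate(flies=100, humans=1000)
```

Under these made-up parameters the composition converges toward whichever group controlled the norm at the start, which is one way to read the point about inertia and drift.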
I guess there are two distinct points. If you allow changes based on how your organization serves the general lives of its participants, this will drift the community's purpose away from being highly specialised in one task. And majorities can't be relied on to keep the macro alignment stable; it's not that we are trading micro alignment for better macro alignment, but that there is a real chance that macro alignment will also be compromised (or I am missing the fence that keeps mission-critical alignment operating on different rules than irrelevant alignment).