The principle I would draw is not, "you should separate the meta- and object-level discussions". Rather, I think the important thing is that meta-level discussions of how social sanctions work shouldn't be generated by backward-chaining from an ambiguous case. If people think that your meta-level argument about the circumstances under which it's okay to punch people is actually about whether to punch Bob, then Bob's allies and enemies will engage with that conversation in a biased way. But separately, if people think that you had some Vaguebob in mind that they didn't know about, and that you might be Vaguebob's friend or enemy, then they'll rightly suspect you of being biased in the same way.
Rather, I think the important thing is that meta-level discussions of how social sanctions work shouldn't be generated by backward-chaining from an ambiguous case.
I think I disagree with this. When a social sanction is born of a particular case, I think it is quite important to actually have that case as a part of the discussion. First, this means the social alliances are in the open instead of hidden; second, this means that discussions over what principles actually bear on the situation become on-topic as well.
I think also it's quite difficult for people to think about tradeoffs in the abstract; "should annoying people be allowed at meetups?" is different from "should we let Bob keep coming to meetups?", and generally the latter is a more productive question.
The other option is making social sanctions preemptively, but there it's not clear what violations might be possible or probable, and so not making rules until they've been violated seems sensible. (Of course, many rules have been violated before in human experience, such that in forming a new group you might import existing rules.)
I think I disagree with this. When a social sanction is born of a particular case, I think it is quite important to actually have that case as a part of the discussion.
Clarification: what I meant is that it's better if the rules are created in a context where there are no cases pending for the rules to bear on; i.e., I'm not objecting to admitting that a rules-discussion is about a specific case that it would bear on, but to it actually being about a specific pending case.
I think also it's quite difficult for people to think about tradeoffs in the abstract; "should annoying people be allowed at meetups?" is different from "should we let Bob keep coming to meetups?", and generally the latter is a more productive question.
I think discussing the latter question is less likely to produce the right result, though.
Clarification: what I meant is that it's better if the rules are created in a context where there are no cases pending for the rules to bear on; i.e., I'm not objecting to admitting that a rules-discussion is about a specific case that it would bear on, but to it actually being about a specific pending case.
This is how things are often done in law, though; I think "common law" is a pretty good way to grow a body of rules. You can then abstract out principles and refactor. It's not clear yet how much of this is about cost minimization; one of the benefits of deciding cases as they come up is that you only ever need to decide as many cases as actually happened, which is not true if you try to decide cases before they become pending.
On the other hand, clear systematic codes may reduce the number of cases that come up (or the evaluation time per case) by reducing ambiguity.
I agree with Raemon here. It would be good to think about ambiguous cases in advance, and I like the idea that fiction is one way of doing so.
But ambiguous cases are still going to come up, and you need to have some way of dealing with them. (And if you deal with them by never punching anyone, then you're encouraging bad actors to seek them out.)
I agree with this, but with the unfortunate caveat that I think people are most likely to think about when it's appropriate to harm people when they have some motivation to either harm someone or prevent someone from coming to harm.
And I'm not 100% sure if the takeaway of "think, at random times, about which circumstances it's okay to harm people in" is actually better (although I lean towards it).
Actually, it occurs to me that I've sort of been doing this via fiction.
My group house is currently watching "Walking Dead" which has a large number of instances of people having to negotiate with each other during high-stakes situations, where people disagree about object and meta level a lot. This has led to my house having a bunch of discussions about how the group-rationality of the characters is checking out, which is (mostly) divorced from considerations of actual real people.
This includes things like "it's necessary to punish Bob in this situation, even though Bob was object-level-right, because allowing people to act like Bob did willy-nilly would destabilize their fragile society". And this sort of thing happens at various scales, ranging from places where civilization is just 2 people, to civilization being a small town.
(If you want to consider cases where civilization is millions of people, you'll need to watch Battlestar Galactica instead.)
Sure, it's fun to discuss what's right in bizarre situations, but that's very different from the decisions philh is talking about. I strongly doubt that your group house has decided "We like you, and that act was right for that situation, but we're going to punish you so others won't try it".
I totally buy the argument _IN GROUPS LARGE ENOUGH TO BE IMPERSONAL_ that you punish deviance from the norm, even when that deviance is correct and necessary. More hero they, who suffer for their necessary actions. Stanislav Petrov was a hero to disobey orders, and the Soviet government was correct to reprimand him.
I do not think this is true in groups smaller than some multiple of Dunbar's number. If you can discuss the specifics with a significant percentage of members, then you can do the right thing contextually, rather than blindly enforcing the rules (which, even for complex unwritten norms, are too simple for reality).
I strongly doubt that your group house has decided "We like you, and that act was right for that situation, but we're going to punish you so others won't try it".
We've definitely done things of the form "okay, in this case it seems like the house is okay with this action, but we can tell that if people started doing it all the time it'd start to cause resentment, so let's basically install a Pigouvian tax on this action so that it only ends up happening when it's important enough."
In a TV show where stakes are life-and-death, the consequences might look like "banishment" and in a group house the consequences are more like "pay $5 to the house", but it feels like fairly similar principles at play.
You definitely do need different tools and principles as things grow larger and more impersonal. And I'd definitely like to see a show where the situations getting hashed out are more applicable to life than "zombie apocalypse". But I do think Walking Dead is a fairly uniquely good show at depicting group rationality.
that’s very different from the decisions philh is talking about.
So, I've had the feeling from all of your comments on this thread that you think I'm talking about something different from what I think I'm talking about. I've not felt like going to the effort of teasing out the confusion, and I still don't. But I would like to make it clear that I do not endorse this statement.
Ok, then I'm very confused. "punching" is intentional harm or intimidation, typically to establish hierarchy or enforce compliance. If you meant something else, you should use different words.
Specifically, if you meant Pigouvian taxes or Coasean redress (both of which are not punitive, but rather fee-for-costs-imposed), rather than censure and retribution, then most of my disagreement evaporates.
I was thinking of actions, not motivations. If Alice wants to convince people to punch Bob, then her motivations (punishment, protection, deterrence, restoration) will be relevant to what sort of arguments she makes and whether other people are likely to agree. But I don't think they're particularly relevant to the contents of this post.
Not 100% sure I grok what philh meant in the first place, but I also want to note that I didn't mean for my example-from-fiction to precisely match what I interpreted philh to mean. It was just an easily-accessible example from thinking about the show and game theory.
I do happen to also think there are generalizable lessons from that, which apply to both punishment and pigouvian tax. But that was sort of accidental. (i.e. I quickly searched my brain for the most relevant seeming fictional example, found one that seemed relevant, and it happened to be reasonably relevant)
One could implement a monetary tax that involves shame and social stigma, which'd feel more like being punched. One could also have a culture where being punched comes with less stigma, and is a quick "take your lumps" sort of thing. There are benefits and tradeoffs to wielding shame/stigma/dominance as part of a punishment strategy. In all cases though, you're trying to impose a cost on an action that you want to see less of.
Your use of the word "punching" looks like clickbait. A nonstandard use should come after your definition, and especially shouldn't appear in the title.
I would also note that every instance of the word "punch" and "punching" can be replaced by "sanction" or "sanctioning" and the denotational content of the essay would be virtually unchanged. The use of the word "punch" does little but smuggle in the connotations associated with physical violence, in an essay that is ostensibly about sanctions of all sorts, both physical and non-physical.
Edit: I have gone ahead and created a version of the essay with "punch" replaced by "sanction". I copied the essay into a new markdown document, fixed the formatting, and then ran %s/punch/sanction/g in vim. I fixed one resulting spelling error, but other than that I left the document as-is.
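For readers without vim handy, the same global substitution can be sketched with sed. The sample sentence below is invented for illustration (it isn't from the essay); it also shows how a bare substitution can produce the kind of spelling error mentioned above, since "punches" becomes "sanctiones" rather than "sanctions":

```shell
# Rough equivalent of vim's :%s/punch/sanction/g, applied to a sample line.
echo "She punches Bob; punching is sometimes justified." \
  | sed 's/punch/sanction/g'
# prints: She sanctiones Bob; sanctioning is sometimes justified.
```

On a real file you would run something like `sed 's/punch/sanction/g' essay.md`, then proofread for exactly these suffix errors.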
When I saw Gurkenglas' comment, I had a quick think for a name for the class of things that "punching" is a metaphor for, and didn't come up with anything. But I agree that "sanctions" fits, so thanks for supplying that word.
Still, I'm basically going to ignore this criticism. Not that it's necessarily unfair or incorrect or anything. (It doesn't strike me as particularly salient. But I may be atypical, or I may be too close to be objective.)
But I'm not confident I could have reliably anticipated it without also anticipating a bunch of other potential criticisms that would seem similarly important. And I have a hard enough time writing something that satisfies myself. I don't want to add more prune.
As an aside: I assume it's just an oversight, but I would prefer if you link your copy back to the original, since it's publicly listed.
This changed my mind about the parent comment (I think the first paragraph would have done so, but the example certainly helped).
In general, I don't mind added concreteness even at the cost of some valence-loading. But seeing how well "sanction" works, and some other comments that seem to disagree on the exact meaning of "punch", I guess not using "punch" would have been better.
I think it will be next to impossible to set up a community norm around this issue for all communities save those with a superhuman level of general honesty. For if there is a norm like this in place, Alice always has a strong incentive to pretend that she is punching based on some generally accepted theory, and that the only thing that needs arguing is the application of this theory to Bob (point 2). Even when there is in fact a new piece of theory ready to be factored out of Alice's argument, it is in Alice's interest to pass this off as being a straightforward application of some more general principle rather than anything new, and she will almost certainly be able to convincingly pass it off as such.
As soon as there is a community norm around building your new punching theories separately from any actual punches, anyone who can argue that their justification for punching Bob doesn't need any new theory, that is, that the justification follows trivially from accepted ideas, will have that much stronger a position. Thus, only the most scrupulous of punchers will ever actually implement step (1), and the norm will collapse.
To be clear, I think this is a good (prosocial) way for individuals to act. I'm not trying to advocate that we should make it a community norm.
But I'm unconvinced by this particular failure mode.
if there is a norm like this in place, Alice always has a strong incentive to pretend that she is punching based on some generally accepted theory, and that the only thing that needs arguing is the application of this theory to Bob (point 2).
Surely this incentive exists anyway for Alice? There's no existing norm against what I propose.
she will almost certainly be able to convincingly pass it off as such.
I don't see why this would be. At least not any general principle that her readers will be familiar with and agree with, which is what would be required.
I'm not suggesting that after Alice publishes part (2), people who don't think "punching Bob is better than the alternatives" should punch Bob. Alice doesn't just need to convince people that there is an argument for punching Bob, she needs to convince people to punch Bob.
A better split than abstract-specific (unless you're honestly trying to objectively describe best actions, without having any application in mind) is facts-evaluation-action.
First, get agreement on what Bob did or is doing. Bob may agree that it's happening, or you may have to provide evidence to convince people. Then get agreement that the behavior is not acceptable and needs to be stopped. Then separately again propose and agree on what actions you collectively (meaning you and the people you're trying to convince) will take to achieve this change in Bob's behavior or remove his ability to harm you. And finally (often combined into the previous), decide if Bob owes any recompense for past harms.
In most situations (unless you're a lawmaker or judge, or water-king of a post-apocalyptic tribe, or maybe a parent of the offender), you should not discuss or consider punishment AS punishment. Behavioral changes, recompense for damage caused, or exclusion are really the only considerations.
I don't think it's possible to decouple the arguments very completely, and attempting to do so is likely to backfire, when everyone notices that you published your abstract punching justification pretty much so you could get support when you punch Bob. I also think there's a real risk of accidental coordination problems - reasons for you to punch Bob will be easy to overgeneralize, and then EVERYONE punches Bob, which is far too severe for whatever justification you thought you have.
I know this is supposed to be allegorical, but I think this applies to many related questions: A better policy is not punching people, ever. And not accepting or justifying that it's OK sometimes. Even as a response to Bob's punches, the proper response is to escape and then to address the behavior. This is almost NEVER effective if you start by repeating the behavior you want to prevent. Pepper-spray if needed in the moment, then intervention (if you care about Bob and think it may work) or arrest (if Bob's a stranger). Yes, this is escalation. Yes, this is the way to address Bob's unacceptable actions.
Related: be nice, at least until you can coordinate meanness.
A premise of this post is that punching people is sometimes better than the alternatives.
I mean that literally, but mostly metaphorically. Things I take as metaphorical punching include name calling, writing angry tweets to or about someone, ejecting them from a group, callout posts, and arguing that we should punch them.
Given that punching people is sometimes better than the alternatives, I think we need to be able to have conversations about when "sometimes" is. And indeed we can and do have those conversations. Many words have been spilled on the subject.
But I think it's probably a good idea to try to avoid having those conversations while actually punching people.
Here's what I mean. Alice thinks that punching Bob is better than the alternatives. But she thinks that if she just starts punching, Carol and Dave and Eve might not understand why. Not even if she tells them what Bob has done. She thinks punching Bob is better than the alternatives, but she thinks the reasons for that are slightly complicated and haven't previously been articulated very well, at least not in a way that makes them common knowledge.
So she writes an essay in which:
She proposes a theory for when punching people is better than the alternatives. (She readily admits that the theory is not complete, nor is it intended to be, but it covers part of the space.)
She describes the situation with Bob, and how the theory justifies punching him.
She punches Bob.
I think this could be a mistake. I think she should maybe split that post into at least two parts, published separately. In the first part, she proposes the theory with no mention of Bob. Then, if Carol and Dave and Eve seem to more-or-less agree with the theory, she can also publish the part where it relates to Bob, and punch him.
I think this has a few advantages.
Suppose Alice can't convince anyone that the theory holds. Then Bob is kept out of things entirely, unless Alice wants to go ahead and punch him even knowing that people won't join in. In that case, people know in advance that Alice is punching under a theory that isn't commonly subscribed to.
Suppose the theory is sound, and also justifies punching Fred. Then someone can link to the theory post separately, without implicitly bringing up the whole Bob thing. This is especially good if the theory doesn't actually justify punching Bob, but it's somewhat good regardless.
Suppose Bob disagrees with some part of the argument. When he gets punched, he's likely to be triggered or at least defensive. That's going to make it harder for him to articulate his disagreement. If it comes split up, the "thing he has to react to while triggered" may be smaller. (It may not be, if he has to react to the whole thing; but even then, he may have seen the first article, and had a chance to respond to it, before getting punched.)
Suppose that splitting-things-up like this becomes a community norm. Now, if Alice just wants to come up with excuses to punch Bob, it's harder for her to do that and get away with it, harder for her to make it look like an honest mistake.
It might seem even better to split into three posts: theory, then application ("and here's why that justifies punching Bob"), and then wait for another post to actually punch him. But since "arguing that we should punch Bob" is a form of punching Bob, splitting those two out isn't necessarily possible. At best it would be "theory, then application and mild punching, then full-strength punching". It's more likely to be worth it if there's a big difference between the two levels. "Here is why I think I should kick Bob out of the group" is considerably weaker than "I hereby kick Bob out of the group". But "here is why I think you all should stop trusting Bob now" is not much weaker than "you all should stop trusting Bob now".
However, I don't think this is always unambiguously a good thing. There are some disadvantages too:
You can't really remove the initial post from its context of "Alice thinks we should punch Bob". You can hide that context, but that doesn't remove its influence. For example, if there are cases similar to Bob's that would be covered by the same theory, Alice's post is likely to gloss over the parts of the theory that relate to them-but-not-Bob, and to focus too much on the parts that relate to Bob-but-not-them.
Suppose the theory is sound, but the facts of the case don't support punching Bob. Splitting the posts adds more opportunity for sleight-of-hand, such as using a term to mean different things in different places. This would be harder to notice in a split post than a monolithic post, if each part is internally consistent.
It may be harder to write this way, which may cause some better-than-the-alternatives punching to go unperformed.
It's slower. Sometimes that's probably neutral-to-good. But often, if punching someone is better than the alternatives, it's because they're currently hurting other people. If punching them will make them stop, then ideally we want to punch quickly.
I'm not sure how all these factors really shake out, and I expect it'll vary from case to case. So I don't want to offer a blanket suggestion. I think my advice is: if you're thinking of writing one of those all-in-one posts, consider splitting it up. It won't always be the right thing to do, but I think it's an option to bear in mind. Here are some questions to ask that might sway you in one direction or the other:
If the punching is delayed, does anything bad happen?
Does the theory apply more generally than it needs to for this specific case? Thinking of similar cases might help, especially real ones but also fictional. (If you can think of lots of real cases, the value of having a reference post for the theory goes up, and its value as a reference post goes up if it has less baggage.)
(As an aside: I want to note that a post which looks like an all-in-one might not be. It may be recapping previously established theory. Common knowledge is rarely absolutely common, so I suspect this will usually be a good idea.)
See, for example, this post. (Though the reason I don't have examples here is different. My motivating example hasn't been written yet[3], and I didn't go looking for others. Still, I expect the effects of not having examples are similar.)
And not just you personally, but your audience. If your audience is large and vicious, then no matter how gently you yourself punch someone, they're going to experience a lot of pummelling.
And there's a decent chance it won't ever, given my track record.