“If you really believed X, you would do violence about it. Therefore, not X.”
I’ve seen this a few times with X := “AI extinction risk” (or maybe fewer times than I think; it seems like the kind of thing I’d be prone to overestimate).
This argument is pretty infuriating, because I really do believe X and I am obviously not doing violence about it. So it’s transparently false (to me), and the conversation is now about whether I’m being honest about my actual beliefs, which undermines the purpose of having a conversation in the first place.
But it’s also kind of interesting, because it expresses a heuristic that seems valid: memes that incite “drastic” actions are dangerous, and they activate a sort of epistemic immune system in the “uninfected.”
In this particular case, it’s an immune disorder. If you absorb the whole cluster of ideas, it actually doesn’t incite particularly drastic actions in most people, and certainly not unilateralist violence. But the epistemic immune system attacks it on sight because it resembles something more dangerous, so it never gets absorbed at all.
(And yes, I know the argument is in fact invalid and there exist some X that would justify violence, but I don’t think that’s really the crux.)
As an aside, I think a lot of “woke” ideas also undercut the basic assumptions required for a conversation to take place, which is maybe one reason the left at its ascendancy was more annoying to rationalists than the right. For instance, explaining anything you know that a woman may not know can be dismissed as mansplaining, and any statement from a white person can be overruled by a racial minority’s lived experience, so the exchange of object-level information becomes very limited. Here too there is a real problem that wokeness is trying to solve, but the “solution” is too dangerous, because it steadily corrodes the hard-won machinery underlying productive conversation.
I think the underlying (simple) principle is that a pretty high level of basic trust is required to have a productive conversation at all.