There's a particular conversational move that I've noticed people making over the past couple years. I've also noticed myself making it. The move goes:
"You can't possibly succeed without X", where X is whatever principle the person is arguing for.
(Where "succeed" means "have a functioning rationality community / have a functioning organization / solve friendly AI / etc")
This is not always false. But, I am pretty suspicious of this move.
(I've seen it from people from a variety of worldviews. This is not a dig at any one particular faction in local politics. And again, I do this myself).
When I do the move, my current introspective TAP goes something like: "Hmm. Okay, is this actually true? Is it impossible to succeed without my pet-issue-of-the-day? Upon reflection, obviously not. I legit think it's harder. There's a reason I started caring about my pet-issue in the first place. But 'impossible' is a word that was clearly generated by my political rationalization mind. How much harder is it, exactly? Why do I believe that?"
In general, there are incentives (and cognitive biases) to exaggerate the importance of your plans. I think this is partly for political reasons, and partly for motivational reasons – it's hard to get excited enough about your own plans if you don't believe they'll have outsized effects. (A smaller version of this, common on my web development team, is someone saying "if we just implemented Feature X we'd get a 20% improvement on Metric Y", and the actual answer was we got, like, a 2% improvement, and it was worth it. But, like, the 20% figure was clearly ridiculous).
"It's impossible" is an easier yellow-flag to notice than "my numbers are bigger than what other people think are reasonable". But in both cases, I think it's a useful thing to train yourself to notice, and I think "try to build an explicit quantitative model" is a good immune response. Sometimes the thing is actually impossible, and your model checks out. But I'm willing to bet if you're bringing this up in a social context where you think an abstract principle is at stake, it's probably wrong.
Nod. The suggested TAP of "build an actual model" (if you don't have one) or "double-check your model" (if you do) isn't meant to output "the statement is never true", just that you should check that you have a clear reason to believe it's true.
It hasn't been true the times I've noticed myself saying it.
I think it's more likely to be true in physical-system setups, where, like, your engine literally won't run if it doesn't have the right kind of fuel or whatever.
I think some instances have been a person posing a mathematical formalism and saying 'this must be true', and it was true in the mathematical example but not AFAICT in the real-world analogue. (In these cases there's some kind of Law/Toolbox conflation going on.)