I think a core part of this is understanding that there is a trade-off between "sensitivity" and "specificity", and different search spaces vary greatly in which balance is appropriate for them.
I distinguish two different reading modes: sometimes I read to judge whether it's safe to defer to the author about stuff I can't verify; other times I'm just fishing for patterns that are useful to my own work.
The former mode is necessary when I read about medicine. I can't tell the difference between a brilliant insight and a lethal mistake, so it really matters to me to figure out whether the author is competent.
The latter mode is more appropriate when I'm trying to get a gears-level understanding of something, and the upside of novel ideas is much greater than the downside of bad ideas. Even if a bad idea gets through my filter, that's going to be very useful data when I later learn why it was wrong. The heuristic here should be "rule thinkers in, not out", or "sensitivity over specificity".
Unfortunately, our research environment is set up in such a way that people are punished more for making mistakes than they are rewarded for novel contributions. Readers typically have the mindset of declaring an entire person useless based on the first mistake they find. It makes researchers risk-averse, and I end up seeing fewer useful patterns.
But consider: if you're reading something purely to enhance your own repertoire of useful gears, you shouldn't even necessarily be trying to find out what the author believes. If you notice yourself internally agreeing or disagreeing, you're already missing the point. What they believe is tangential to how the patterns behave in your own models; all that matters is finding patterns that work. Steelmanning should be the default, not because it helps you understand what others think, but because it's obviously what you'd do to improve your own models.
I think that how seriously one should take a person's half-baked idea depends very strongly on how well one knows that person.
To paraphrase your heuristic:
"If a stranger is unable to explain their idea convincingly and succinctly, the idea is probably either bad or unready for widespread dissemination".
I agree strongly there: nobody can, nor should, listen to every idea from every stranger.
However, if a friend or colleague who has had good ideas before is unable to explain their idea convincingly and succinctly, I'm likely to invest more time in trying to understand the idea. By doing so, I'm also likely to help them find out what works and what doesn't when it comes to communicating about it.
I expect that, generally, trying an explanation on other people will not only improve the quality of the explanation but also stress-test the underlying concept against those listeners' questions and imaginations. So, often (though not always), the convincingness and succinctness of an explanation is a proxy for whether it's ready for more widespread sharing.
According to Tim Berners-Lee, explaining his ideas about the World Wide Web was at times quite challenging:
In terms of impact, it's unusual (but not unheard of) for ideas to rank more highly than the World Wide Web.
But, I suspect, it's not so unusual for ideas to be similarly difficult to grok (and sometimes much harder!).
And although it's not a perfect analogy, I think there is some relevance here to AI alignment and the ideas people propose in that field.
We can't listen to everyone at length. There aren't enough hours in the day. So we need to form heuristics such as:
This is a useful heuristic. But the more strongly we rely on it, the more we are at risk of false negatives.