LessWrong is about learning rationality, and applying rationality to interesting problems.
An issue is that solving interesting problems often requires fairly deep technical knowledge of a field. To use rationality to help solve problems (especially as a group), you need both people with skills in probability, meta-cognition, and other rationality techniques, and people with the skills directly applicable to whatever problem is under discussion.
But if you show up on LW and post something technical (or even just "specialized") in a field that isn't already well represented on the forum, it'll be hard to have meaningful conversations about it.
Elsewhere on the internet there are probably forums focused on whatever-your-specialization is, but those places won't necessarily have people who know how to integrate evidence and think probabilistically in confusing domains.
So far the LW userbase has a cluster of skills related to AI alignment, some cognitive science, decision theory, etc. If a technical post isn't in one of those fields, you'll probably get better reception if it's somehow "generalist technical" (i.e. in some field that's relevant to a bunch of other fields), or if it somehow starts one inferential unit away from the overall LW userbase.
A plausibly good strategy is to recruit a number of people from a given field at once, to increase the surface area of "serious" conversations that can happen here.
It might make most sense to recruit from fields that are close enough to the existing vaguely-defined-LW memeplex that they can also get value from existing conversations here.
Anyone have ideas on where to do outreach in this vein? (Separately, perhaps: how to do outreach in this vein?). Or, alternately, anyone have a vague-feeling-of-doom about this entire approach and have alternate suggestions or reasons not to try?
Amusingly, the article you linked redirected to a different article which seems to reinforce your first point and I think helped clarify for me the exact dynamics of the situation. The author defends Dr. Littman's paper on what she terms 'rapid-onset gender dysphoria' against the heavy backlash it received (mostly on twitter, it seems) and especially Harvard's response to that backlash.
I find it difficult to imagine that healthy academic discourse could take place in an environment that conflict-heavy. Critically, this does not require the field itself to be nonsense but rather so deeply joined to the social justice culture war that the normal apparatuses of academia are hijacked.
This has significantly raised my estimation of the risk of inviting gender studies researchers to participate in discussions on LW, especially since, as you point out, that risk runs in both directions.
There may still be ideas worth salvaging from the gender studies community, and I'm really curious what a 'rationalist gender studies' field would look like, but the risk does look salient enough that it may not be worth the effort.
You lost your meeting room because you were discussing (what I assume to be) politically sensitive topics. I think we'd agree that intellectual progress halts when important topics become too charged to touch, and I don't want feminism to become like that in the rationalist sphere.
But rationalist sphere != LessWrong, and perhaps this isn't the right place for progress in that area to happen. You bring up the differing approaches of SSC and LW, and I actually quite like SSC's approach of maintaining high discussion norms while not shying away from sensitive topics, but you're not wrong about paying a price for that.
So now I'm left wondering, if not here, then where? Where could rationalist-adjacent people sanely interact with feminists and sociologists and others in 'challenging' fields, and what would the discussion there have to look like to keep people safe?
The answer might be 'nowhere'. This could be a fundamentally irreconcilable difference, and if that's the case then I will be sad about it and move on. I don't think I have enough evidence to conclude this yet, but I will concede that if this place does exist, LessWrong probably isn't it.