Nornagest comments on LW Women- Minimizing the Inferential Distance - Less Wrong
Perhaps; I think part of the issue there is that there is a political debate and a sociological engineering project, and they keep shitting all over each other.
"I think if we raise boys and girls in gender-neutral environments, their inherent gender biases will be far less noticeable" is part of the sociological engineering project.
"No! You're turning them into lesbo feminazis and fairy faggots!" is the political-debate response.
"Fuck you! I'm dressing everyone unisex and attacking everyone who doesn't!" is the political-debate counter-response.
Note that while the counter-response is crazy, it's a predictable emotional response to the prior crazy, and shouldn't be blamed in isolation. My assertion is that attacking people who say "I'm dressing everyone unisex and attacking everyone who doesn't!" isn't nearly as effective as attacking the people who set them off in the first place, and hoping they'll calm down once they're no longer under severe stress from people who are crazier than they are and attack them without provocation.
Does that make sense?
That seems reasonable if there are no endogenous incentives rewarding crazy, but that seems like a questionable assumption for any ideology once it's gotten used to having crazy in its internal ecosystem.
I'd rather deal with that after the primary and initial source of crazy has been removed. Otherwise, it's too easy to accidentally mistake one for the other.
Rationalization being what it is, I suspect it'd be easy to mistake one for the other from the inside anyway.
Very true. So then the question becomes, given that:
Is there, in fact, any way to prevent this process from occurring? Or do we just have to cast our lots and hope for the best?
Well, we could take a page from Psamtik I's book and do some controlled experiments; unfortunately, any modern ethics committee would pitch a fit over that. So unless we've got a tame Bond villain with twenty years to kill and a passion for social science, that's out.
Realistically, our best bet seems to be rigorously characterizing the stuff that leads to semantic toxicity and developing strong social norms to avoid it. That's far from perfect, though, especially since it can easily be mistaken for (or deliberately interpreted as) silencing tactics in the current political environment.
Right. And at the moment, I'm not sure if that's even ideal. Here's something like my thinking:
In order to advance social justice (which I take as the most likely step towards maximizing global utility), we need to maximize both our compassion (aka ability to desire globally eudaimonic consequences) and our rationality (aka ability to predict and control consequences). This should be pretty straightforward to intuit; by this (admittedly simplistic) model,
Global Outcome Utility = Compassion × Rationality.
The thing is, once Rationality rises above Compassion, it makes sense to spend the next epsilon resource units on increasing Compassion rather than Rationality, until Compassion is higher than Rationality again.
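The marginal-allocation claim above can be sketched numerically. This is a minimal illustration of the simplistic product model, not anything from the original exchange; the function name, starting values, and step size are all invented for the example:

```python
# Illustrative sketch: if Utility = Compassion * Rationality and resources come
# in small fixed increments, the product grows fastest when each increment goes
# to whichever factor is currently smaller (equalizing the two factors).

def allocate(compassion, rationality, increments, step=1.0):
    """Greedily spend each resource increment on the smaller factor."""
    for _ in range(increments):
        if compassion < rationality:
            compassion += step
        else:
            rationality += step
    return compassion, rationality

# Starting lopsided at (2, 10) with 8 unit increments:
c, r = allocate(2.0, 10.0, 8)
# The greedy rule equalizes the factors at (10, 10), giving utility 100;
# spending all 8 increments on Rationality instead would yield 2 * 18 = 36.
```

The point of the toy model is just that, under a multiplicative utility, the lagging factor is always the better marginal investment.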
Also, sometimes it's important to commit to a goal for the medium-term, to prevent thrashing. I've made a conscious effort, regarding social justice issues, to commit to a particular framework for six months, and only evaluate after that span has finished - otherwise I'm constantly course-correcting and feedback oscillations overwhelm the system.
That seems true -- if you've got the right path to maximizing global utility. Making this call requires a certain baseline level of rationality, which we may or may not possess and which we're very much prone to overestimating.
The consequences of not making the right call, or even of setting the bar too low whether or not you happen to pick the right option yourself, are dire: either stalemate due to conflicting goals, or a doomed fight against a culturally more powerful faction, or (possibly worse) progress in the wrong direction that we never quite recognize as counterproductive, lacking the tools to do so. In any case, eudaimonic improvement, if it comes, will only happen through a random walk.
Greedy strategies tend to be fragile.