Nornagest comments on LW Women- Minimizing the Inferential Distance - Less Wrong

58 [deleted] 25 November 2012 11:33PM




Comment author: ialdabaoth 27 November 2012 11:32:01PM 6 points [-]

The opposite is done too, though--for instance, when one assumes there are no differences between boys and girls, then dressing girls up in pink or giving them baby dolls is seen as abetting a (sometimes emergent) conspiracy which deserves great efforts to combat.

Perhaps; I think part of the issue there is that there is a political debate and a sociological engineering project, and they keep shitting all over each other.

"I think if we raise boys and girls in gender-neutral environments, their inherent gender biases will be far less noticeable" is part of the sociological engineering project.

"No! You're turning them into lesbo feminazis and fairy faggots!" is the political-debate response.

"Fuck you! I'm dressing everyone unisex and attacking everyone who doesn't!" is the political-debate counter-response.

Note that while the counter-response is crazy, it's a predictable emotional response to the prior crazy, and shouldn't be blamed on its own. My assertion is that attacking people who say "I'm dressing everyone unisex and attacking everyone who doesn't!" isn't nearly as effective as attacking the people who set them off in the first place, and hoping that they'll calm down once they're not under severe stress from people who are crazier than they are and attack them without provocation.

Does that make sense?

Comment author: Nornagest 27 November 2012 11:41:56PM *  1 point [-]

That seems reasonable if there are no endogenous incentives rewarding crazy, but that seems like a questionable assumption for any ideology once it's gotten used to having crazy in its internal ecosystem.

Comment author: ialdabaoth 27 November 2012 11:43:18PM 1 point [-]

I'd rather deal with that after the primary and initial source of crazy has been removed. Otherwise, it's too easy to mistake one for the other.

Comment author: Nornagest 27 November 2012 11:45:44PM 0 points [-]

Rationalization being what it is, I suspect it'd be easy to mistake one for the other from the inside anyway.

Comment author: ialdabaoth 27 November 2012 11:48:28PM 2 points [-]

Very true. So then the question becomes, given that:

  • bare facts can be semantically poisoned
  • coalitions can be semantically poisoned
  • error-correcting processes can be semantically poisoned

is there, in fact, any way to prevent this process from occurring? Or do we just have to cast our lots and hope for the best?

Comment author: Nornagest 27 November 2012 11:56:46PM *  0 points [-]

Well, we could take a page from Psamtik I's book and do some controlled experiments; unfortunately, any modern ethics committee would pitch a fit over that. So unless we've got a tame Bond villain with twenty years to kill and a passion for social science, that's out.

Realistically, our best bet seems to be rigorously characterizing the stuff that leads to semantic toxicity and developing strong social norms to avoid it. That's far from perfect, though, especially since it can easily be mistaken for (or deliberately interpreted as) silencing tactics in the current political environment.

Comment author: ialdabaoth 28 November 2012 12:06:16AM 1 point [-]

Right. And at the moment, I'm not sure if that's even ideal. Here's something like my thinking:

In order to advance social justice (which I take as the most likely step towards maximizing global utility), we need to maximize both our compassion (aka ability to desire globally eudaimonic consequences) and our rationality (aka ability to predict and control consequences). This should be pretty straightforward to intuit; by this (admittedly simplistic) model,

Global Outcome Utility = Compassion x Rationality.

The thing is, once Rationality rises above Compassion, it makes sense to spend the next epsilon resource units on increasing Compassion, rather than increasing Rationality, until Compassion is higher than Rationality again -- since the marginal gain from raising one factor is proportional to the current value of the other, the product grows fastest when you raise whichever factor is smaller.
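The greedy-allocation claim above can be checked with a toy calculation. This is a minimal sketch of the admittedly simplistic multiplicative model; the numeric values and the epsilon step size are hypothetical, chosen only to illustrate the argument.

```python
# Toy model of "Global Outcome Utility = Compassion x Rationality".
# All numbers here are hypothetical illustrations, not measurements.

def utility(compassion, rationality):
    """The simplistic multiplicative model from the comment above."""
    return compassion * rationality

def best_next_investment(compassion, rationality, epsilon=0.1):
    """Greedily pick which factor to raise by epsilon.

    Because d(C*R)/dC = R and d(C*R)/dR = C, the marginal gain from
    raising one factor equals the current value of the other, so the
    product grows fastest when we raise whichever factor is smaller.
    """
    gain_from_compassion = utility(compassion + epsilon, rationality)
    gain_from_rationality = utility(compassion, rationality + epsilon)
    if gain_from_compassion >= gain_from_rationality:
        return "compassion"
    return "rationality"

# Suppose Rationality (5.0) has risen above Compassion (3.0):
print(best_next_investment(3.0, 5.0))  # -> compassion
```

With Compassion at 3.0 and Rationality at 5.0, spending epsilon on Compassion yields 3.1 x 5.0 = 15.5, versus 3.0 x 5.1 = 15.3 for Rationality, matching the argument that the lagging factor is the better investment.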

Also, sometimes it's important to commit to a goal for the medium-term, to prevent thrashing. I've made a conscious effort, regarding social justice issues, to commit to a particular framework for six months, and only evaluate after that span has finished - otherwise I'm constantly course-correcting and feedback oscillations overwhelm the system.

Comment author: Nornagest 28 November 2012 12:35:19AM *  1 point [-]

That seems true -- if you've got the right path to maximizing global utility. Making this call requires a certain baseline level of rationality, which we may or may not possess and which we're very much prone to overestimating.

The consequences of not making the right call, or even of setting the bar too low whether or not you happen to pick the right option yourself, are dire: either stalemate due to conflicting goals, or a doomed fight against a culturally more powerful faction, or (and possibly worse) progress in the wrong direction that we never quite recognize as counterproductive, lacking the tools to do so. In any case eudaimonic improvement, if it comes, is only going to happen through random walk.

Greedy strategies tend to be fragile.