ialdabaoth comments on LW Women- Minimizing the Inferential Distance - Less Wrong

58 [deleted] 25 November 2012 11:33PM

Comment author: ialdabaoth 27 November 2012 11:43:18PM 1 point [-]

I'd rather deal with that after the primary source of crazy has been removed. Otherwise it's too easy to mistake one for the other.

Comment author: Nornagest 27 November 2012 11:45:44PM 0 points [-]

Rationalization being what it is, I suspect it'd be easy to mistake one for the other from the inside anyway.

Comment author: ialdabaoth 27 November 2012 11:48:28PM 2 points [-]

Very true. So then the question becomes, given that:

  • bare facts can be semantically poisoned
  • coalitions can be semantically poisoned
  • error-correcting processes can be semantically poisoned

is there, in fact, any way to prevent this process from occurring? Or do we just have to cast our lots and hope for the best?

Comment author: Nornagest 27 November 2012 11:56:46PM *  0 points [-]

Well, we could take a page from Psamtik I's book and do some controlled experiments; unfortunately, any modern ethics committee would pitch a fit over that. So unless we've got a tame Bond villain with twenty years to kill and a passion for social science, that's out.

Realistically, our best bet seems to be rigorously characterizing the stuff that leads to semantic toxicity and developing strong social norms to avoid it. That's far from perfect, though, especially since it can easily be mistaken for (or deliberately interpreted as) silencing tactics in the current political environment.

Comment author: ialdabaoth 28 November 2012 12:06:16AM 1 point [-]

Right. And at the moment, I'm not sure if that's even ideal. Here's something like my thinking:

In order to advance social justice (which I take as the most likely step towards maximizing global utility), we need to maximize both our compassion (aka ability to desire globally eudaimonic consequences) and our rationality (aka ability to predict and control consequences). This should be pretty straightforward to intuit; by this (admittedly simplistic) model,

Global Outcome Utility = Compassion × Rationality.

The thing is, once Rationality rises above Compassion, it makes sense to spend the next epsilon resource units on increasing Compassion rather than Rationality (in a product model, the marginal gain from each factor is the current size of the other), until Compassion is higher than Rationality again.
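That allocation rule can be sketched in a few lines. This is only an illustrative toy, assuming the simplistic product model above with divisible epsilon-sized units; the function name and starting values are made up for the example:

```python
def allocate(compassion, rationality, budget, epsilon=1.0):
    """Greedily spend a resource budget under the toy model
    U = compassion * rationality.

    Since dU/dCompassion = rationality and dU/dRationality = compassion,
    each marginal unit yields more utility when spent on the smaller
    factor, so the greedy rule always tops up whichever is behind.
    """
    while budget >= epsilon:
        if compassion <= rationality:
            compassion += epsilon
        else:
            rationality += epsilon
        budget -= epsilon
    return compassion, rationality

# Starting from (compassion=2, rationality=6) with 4 units to spend,
# every unit goes to compassion until the two factors equalize:
print(allocate(2, 6, 4))  # (6.0, 6)
```

Under this model the greedy rule simply drives the two factors toward equality, which is where a fixed sum C + R maximizes the product C × R.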

Also, sometimes it's important to commit to a goal for the medium term, to prevent thrashing. I've made a conscious effort, regarding social justice issues, to commit to a particular framework for six months and only re-evaluate after that span has finished; otherwise I'm constantly course-correcting, and feedback oscillations overwhelm the system.

Comment author: Nornagest 28 November 2012 12:35:19AM *  1 point [-]

That seems true -- if you've got the right path to maximizing global utility. Making this call requires a certain baseline level of rationality, which we may or may not possess and which we're very much prone to overestimating.

The consequences of not making the right call, or even of setting the bar too low whether or not you happen to pick the right option yourself, are dire: stalemate due to conflicting goals, or a doomed fight against a culturally more powerful faction, or (possibly worst of all) progress in the wrong direction that we never quite recognize as counterproductive, lacking the tools to do so. In any of those cases, eudaimonic improvement, if it comes, is only going to happen through a random walk.

Greedy strategies tend to be fragile.