Bugmaster comments on The noncentral fallacy - the worst argument in the world? - Less Wrong

157 Post author: Yvain 27 August 2012 03:36AM




Comment author: Bugmaster 04 October 2012 09:25:07PM 2 points

I think one possible answer is that your model of sexism, while internally consistent, is useless at best and harmful at worst, depending on how you interpret its output.

If your definition of sexism is completely orthogonal to morality, as your last bullet point implies, then it's just not very useful. Who cares if certain actions are "sexist" or "blergist" or whatever? We want to know whether our goals are advanced or hindered by performing these actions -- i.e., whether the actions are moral -- not whether they fit into some arbitrary boxes.

On the other hand, if your definition implies that sexist actions are very likely to be immoral as well, then your model is broken, since it ignores about 50% of the population. Thus, you are more likely to implement policies that harm men in order to help women; insofar as we are all members of the same society, such policies are likely to harm women in the long run as well, due to network effects.

EDIT: Perhaps it should go without saying, but in the interests of clarity, I must point out that I have no particular desire to commit violence against anyone. At least, not at this very moment.

Comment author: TheOtherDave 04 October 2012 10:23:54PM 2 points

If your definition of sexism is completely orthogonal to morality, as your last bullet point implies

It does?
Hm.
I certainly didn't intend for it to.
And looking at it now, I don't see how it does. Can you expand on that?
I mean, if X isn't murder, it doesn't follow that X is moral... there exist immoral non-murderous acts. But in saying that, I don't imply that murder is completely orthogonal to morality.

you are more likely to implement policies that harm men in order to help women

This seems more apposite.

Yes, absolutely, if my only goal is to reduce benefit differentials between groups A and B, and A currently benefits disproportionately, then I am likely to implement policies that harm A.

Not necessarily, of course... I might just happen to implement a policy that benefits everyone, but that benefits B more than A, until parity is reached. But within the set S of strategies that reduce benefit differentials, the subset S1 of strategies that also benefit everyone (or even keep benefits fixed) is relatively small, so a given strategy in S is unlikely to be in S1.

Of course, it's also true that within the set S2 of strategies that benefit everyone, S1 is also relatively small, so if my only goal is to benefit everyone it's likely I will increase benefit differentials between A and B.

What seems to follow is that if I value both overall benefits and equal access to benefits, I need to have them both as goals, and restrict my choices to S1. This ought not be surprising, though.
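The set structure of this argument can be made concrete with a toy model (the numeric benefits, the strategy grid, and all names here are illustrative assumptions, not anything from the thread): strategies are modeled as changes to each group's benefit, S1 is literally the intersection of S and S2, and it is a minority of both.

```python
import itertools

# Toy model: a strategy changes group benefits by (delta_a, delta_b).
# Group A starts ahead, so the differential benefit_a - benefit_b > 0.
benefit_a, benefit_b = 10, 6

# Enumerate a small grid of candidate strategies (illustrative only).
strategies = list(itertools.product(range(-3, 4), repeat=2))

def reduces_differential(s):
    """Strategy shrinks the gap between the groups (membership in S)."""
    da, db = s
    return abs((benefit_a + da) - (benefit_b + db)) < abs(benefit_a - benefit_b)

def benefits_everyone(s):
    """Neither group is harmed (membership in S2)."""
    da, db = s
    return da >= 0 and db >= 0

S = [s for s in strategies if reduces_differential(s)]
S2 = [s for s in strategies if benefits_everyone(s)]
S1 = [s for s in S if benefits_everyone(s)]  # S1 = S ∩ S2

# S1 is a strict minority of both S and S2, so optimizing for either
# goal alone is unlikely to land in S1 by accident.
print(len(S1), len(S), len(S2))
```

Picking a strategy uniformly from S, or uniformly from S2, lands in S1 only a minority of the time; pursuing both goals means restricting the search to the intersection from the start, which is the point of the paragraph above.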

I must point out that I have no particular desire to commit violence against anyone

I didn't think you did. DaFranker expressed such a desire, and identified the position I described as its cause, and I was curious about that relationship (which he subsequently explained). I wasn't attributing it to anyone else.

Comment author: Bugmaster 05 October 2012 01:46:26AM 1 point

And looking at it now, I don't see how it does. Can you expand on that?

You said,

  • It's not necessarily moral or valid, it's just not sexist. There exist immoral non-sexist acts.

This makes sense, but you never mentioned that sexist actions are immoral, either. I do admit that I interpreted your comment less charitably than I should have.

Yes, absolutely, if my only goal is to reduce benefit differentials between groups A and B, and A currently benefits disproportionately, then I am likely to implement policies that harm A.

Yes, and you may not even do so deliberately. You may think you're implementing a strategy in S1, but if your model only considers people in B and not A, then you are likely to be implementing a strategy in S without realizing it.

DaFranker expressed such a desire...

I think he was speaking metaphorically, but I'm not him... Anyway, I just wanted to make sure I wasn't accidentally threatening anyone.

Comment author: DaFranker 05 October 2012 01:56:31PM 0 points

DaFranker expressed such a desire...

I think he was speaking metaphorically, but I'm not him... Anyway, I just wanted to make sure I wasn't accidentally threatening anyone.

Only in part, actually. It is a faint desire, and I rarely actually bang my own head against a wall, but there is a real impulse/instinct toward violence coming up from somewhere in situations like that. It's obviously not something I act upon (I'd have been in prison long ago, considering how often it occurs).

Comment author: TheOtherDave 05 October 2012 02:32:03AM 0 points

You may think you're implementing a strategy in S1, but if your model only considers people in B and not A, then you are likely to be implementing a strategy in S without realizing it.

Well, "without realizing it" is a confusing thing to say here. If I care about group A but somehow fail to realize that I've adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.

Leaving that clause aside, though, I agree with the rest of this. For example, if I simply don't care about group A, I may well adopt a strategy that harms A.

Comment author: Bugmaster 05 October 2012 06:52:06PM 0 points

If I care about group A but somehow fail to realize that I've adopted a strategy that harms A, it seems I have to be exceptionally oblivious. Which happens, of course, but is an uncharitable assumption to start from.

True enough, but it's all a matter of weighing the inputs. For example, if you care about group A in principle, but are much more concerned with group B -- because they are the group that your model informs you about -- then you're liable to miss all but the most egregious instances of harm caused to group A by your actions.

By analogy, if your car has a broken headlight on the right side, then you're much more likely to hit objects on that side when driving at night. If your headlight isn't broken, but merely dim, then you're still more likely to hit objects on your right side, but less so than in the first scenario.

Comment author: TheOtherDave 05 October 2012 07:40:57PM 3 points

Right, absolutely.

Indeed, many feminists make an analogous argument for why feminism is necessary... that is, that our society tends to pay more attention to men than women, and consequently disproportionately harms women without even noticing unless someone particularly calls social attention to the treatment of women. Similar arguments get made for other nominally low-status groups.

Comment author: Bugmaster 05 October 2012 08:48:14PM 0 points

That's true, but, at the risk of being uncharitable, I've got to point out that reversed stupidity is not intelligence. When you notice a bias, embracing the equal and opposite bias is, IMO, a poor choice of action.

Comment author: TheOtherDave 05 October 2012 09:10:59PM 1 point

Sure, in principle.

That said, at the risk of getting political: my usual reaction when I hear people complain about legislation that provides "special benefits" for queers (a common real-world idea that has some commonality with the accusation of having embraced an equal-and-opposite bias) is that the complainers don't really have a clue what they're talking about. The preferential bias they think they see is simply what movement towards equality looks like when one is steeped in a culture that pervasively reflects a particular kind of inequality.

And I suspect this is not unique to queers.

So, yeah, I think you're probably being uncharitable.

Comment author: Bugmaster 05 October 2012 09:21:09PM 0 points

I'm not arguing against any specific implementation, but against the idea that optimal implementations could be devised by merely looking at the specific subset of the population you're interested in, and ignoring everyone else. Your (admittedly, hypothetical) definition of "sexism" upthread sounds to me like just such a model.

Comment author: TheOtherDave 05 October 2012 09:29:29PM 0 points

Hm. So, OK. What I said upthread was:

I usually model the standard feminist position as saying that the net sexism in a system is a function of the differential benefits provided to men and women over the system as a whole, and a sexist act is one that results in an increase of that differential.

You're suggesting that this definition fails to look at men?
I don't see how.
Can you clarify?