Eugine_Nier comments on Don't Get Offended - LessWrong

32 Post author: katydee 07 March 2013 02:11AM


Comment author: faul_sname 08 March 2013 08:48:04PM 3 points [-]

I could make a similar argument about a lot of things we do here, e.g., people hear "consequentialism" and think "the ends justify the means"; that doesn't stop LW from promoting consequentialism.

Nope, and some people will express disapproval of LWers who promote consequentialism. Being right doesn't make you immune to social stigma.

Intentionally believing false things always carries a cost.

Yes, it does. So does unintentionally believing false things. This is definitely not a one-sided issue, as much as people like to pretend that it is. Anti-discrimination policies reduce one cost at the expense of raising another.

For example, suppose I want to hire the best mathematicians for a project, they'll likely be disproportionately White or Asian men.

In the case that you both want to hire and are able to hire exceptional mathematicians, anti-discrimination policies are likely to hurt both parties involved. (In theory, laws regarding disparate impact wouldn't actually affect you if you were hiring based on demonstrable mathematical prowess, but in practice the business-necessity defense would be hard to establish.) The mathematicians are actually likely to be hurt considerably more, because without anti-discrimination policies they would probably be in higher demand and thus able to ask for much higher pay.

The real problem comes in when employers decide that they need exceptional people but can't actually identify these exceptional people. If filtering based on race were allowed, employers would use it (the best mathematicians are disproportionately White and Asian, therefore if I hire a White or Asian person I'll get an above-average mathematician).

Basically, you're right except for the problem where humans mix up p(a|b) and p(b|a), which causes people to do stupid things (most of the people who win the lottery buy lots of tickets, so if I buy lots of tickets I'm likely to win the lottery). If you actually know what you're hiring based on, anti-discrimination policies will prevent you from having 100% of your workforce be the very best, but even if only Whites and Asians had the required skills, you're still looking at 77% of the population in the US, so it falls in the category of "annoyance", not "business killer". In terms of fudging, you can detect statistically significant deviations in your own hiring just as well as someone else looking at your hiring data can. You don't need to know beforehand.
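The lottery example above can be made concrete with Bayes' rule. The sketch below uses entirely made-up probabilities; it just shows how p(heavy buyer | winner) can be large while p(winner | heavy buyer) stays tiny:

```python
# Toy illustration of confusing p(a|b) with p(b|a), via Bayes' rule.
# All probabilities here are invented for the sake of the example.

p_heavy_buyer = 0.05          # p(someone buys many tickets)
p_win_given_heavy = 1e-6      # p(wins | buys many tickets)
p_win_given_light = 1e-7      # p(wins | buys few tickets)

# Total probability of winning, by the law of total probability.
p_win = (p_win_given_heavy * p_heavy_buyer
         + p_win_given_light * (1 - p_heavy_buyer))

# p(heavy buyer | winner) via Bayes' rule.
p_heavy_given_win = p_win_given_heavy * p_heavy_buyer / p_win

print(f"p(win | heavy buyer) = {p_win_given_heavy:.2e}")   # one in a million
print(f"p(heavy buyer | win) = {p_heavy_given_win:.2%}")   # roughly a third
```

With these numbers about a third of winners are heavy buyers, yet a heavy buyer's own chance of winning remains one in a million: the two conditionals answer very different questions.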

Of course, if these things weren't the case you'd still face social stigma for saying anything that sounds vaguely racist. Because while these two societal tendencies have strong effects in opposite directions, they're not there by virtue of reasoned argument, and so removing one but not the other is likely to cause more harm than good (probably, I have no idea how one would go about removing either societal tendency to test that hypothesis). If both tendencies could be eliminated, that would be best, and here you probably can talk about it without much social stigma, but if you ask those questions in everyday life, you will be labeled as a racist.

Comment author: Eugine_Nier 09 March 2013 06:44:06AM 4 points [-]

The real problem comes in when employers decide that they need exceptional people but can't actually identify these exceptional people. If filtering based on race was allowed, employers would use that (the best mathematicians are disproportionately white and asian, therefore if I hire a white or asian I'll get an above-average mathematician).

Basically, you're right except for the problem where humans mix up p(a|b) and p(b|a),

Ironically, this is a case where p(a|b) is in fact a good proxy for p(b|a), and the kind of filtering you're objecting to is in fact the correct thing to do from a Bayesian perspective.

Comment author: wedrifid 09 March 2013 08:58:55AM 3 points [-]
Comment author: [deleted] 09 March 2013 11:17:40AM *  0 points [-]

“The best mathematicians are disproportionately white and asian, therefore if I hire a white or asian I'll get an above-average mathematician” is Bayesianly correct if race is the only thing you know about the candidates; but it isn't (a randomly-chosen white or Asian person is very unlikely to be a decent mathematician), and the other information you have about the candidates most likely mostly screens off the information that race gives you about maths skills.
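The screening-off point can be sketched with toy numbers. In the sketch below every probability is invented: two groups differ fourfold in their base rate of maths skill, but once a candidate passes a demanding test of demonstrable prowess, the posteriors nearly converge:

```python
# Toy illustration of screening off: a direct test of skill carries most of
# the information, so conditioning on it leaves group membership with little
# residual weight. All probabilities are invented for illustration.

def p_skilled_given_pass(prior, p_pass_skilled=0.95, p_pass_unskilled=0.001):
    """Bayes' rule: p(skilled | passed a demanding maths test)."""
    num = p_pass_skilled * prior
    return num / (num + p_pass_unskilled * (1 - prior))

# Group base rates differ by a factor of four...
post_a = p_skilled_given_pass(prior=0.04)
post_b = p_skilled_given_pass(prior=0.01)

# ...but after a passed test the posteriors are close: ~0.975 vs ~0.906.
print(post_a, post_b)
```

The fourfold gap in priors shrinks to a few percentage points after conditioning on the test, which is what "mostly screens off" means here; with a weaker test, more of the prior gap would survive.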

Comment author: Eugine_Nier 09 March 2013 08:05:33PM *  4 points [-]

Read the comment I linked to and possibly subsequent discussion if you're interested in these things.

Comment author: [deleted] 10 March 2013 04:07:37PM *  1 point [-]

Hmm, so E(the Math SAT score that X deserves | the Math SAT score that X got is 800, and X is male) is just 4 points more than E(the Math SAT score that X deserves | the Math SAT score that X got is 800, and X is female). That doesn't sound like terribly much to me, and I'd guess there are plenty of people who, due to corrupted mindware and stuff, would treat a male who got 800 better than a female who got 800 to a much greater extent than justified by that 4-point difference in the Bayesian posterior expected values. (Cf. the person who told whowhowho that Obama must be dumber than Bush -- surely we know much more about them than their races?)
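The calculation behind a figure like that is a standard normal-normal shrinkage: the posterior expected "true" score pulls the observed score toward the group mean, so most of a gap in group means washes out. The group means and standard deviations below are invented, not the real SAT figures:

```python
# Sketch of the shrinkage behind the small posterior gap. Toy parameters only:
# true skill ~ Normal(group_mean, true_sd), observed = true + Normal(0, noise_sd).

def expected_true_score(observed, group_mean, true_sd=100.0, noise_sd=30.0):
    """Normal-normal update: E[true score | observed score, group]."""
    w = true_sd**2 / (true_sd**2 + noise_sd**2)  # reliability of the observation
    return w * observed + (1 - w) * group_mean

# A 40-point gap in (invented) group means...
est_a = expected_true_score(800, group_mean=540)
est_b = expected_true_score(800, group_mean=500)

# ...becomes only a few points of difference in the posterior expectation.
print(est_a - est_b)
```

Because the observation gets weight w ≈ 0.92 here, only (1 - w) of the 40-point prior gap survives, about 3.3 points, which is the same order as the 4-point figure quoted above.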

Comment author: Eugine_Nier 10 March 2013 07:44:26PM *  3 points [-]

I'm not sure if this is correct, but given how prominent politicians are surrounded by spin doctors and other image manipulators, I sometimes wonder how much we really know about them, especially when the politician in question is new so you can't look at his record.