Less Wrong is a community blog devoted to refining the art of human rationality. Please visit our About page for more information.

Comment author: Izeinwinter 20 September 2014 07:17:56PM *  0 points [-]

Because an unspoken condition of employment that prospective employees must stay single is a management technique made of win.

Errh... no. Good lord, would you want to manage a team made up of 100% celibate men? This is not a weak spot in the law, because it's not a runaround that anyone sane enough not to already be bankrupt would attempt.

It might, on the margin, inspire people to hire more people in their forties and fifties (people who have had any children they are likely to have), but from the point of view of the government that's not a flaw either; it's more of a "secondary benefit, free with the legislation".

Comment author: fubarobfusco 20 September 2014 11:13:15PM *  1 point [-]

Good lord, would you want to manage a team made up of 100% celibate men?

Erm ... there's this guy in Rome who tried that ... I think they had some problems.

Comment author: Azathoth123 20 September 2014 10:07:55PM 5 points [-]

Have you ever actually worked with people on a coding project? Have you ever worked with idiots on a coding project? It's very important to know whom you can trust and whose code has to be double-checked. Also, when I have a question, I need to know whom I can ask to get an answer and who would simply be a waste of time.

Comment author: fubarobfusco 20 September 2014 10:44:10PM *  4 points [-]

Have you ever actually worked with people on a coding project?

Yep! I've been in the industry for fifteen years, and you've almost certainly benefited from stuff I've worked on. But you're acting hostile, so I don't care to give you any more stalker fodder.

As far as I can tell, some of the worst people I've worked with were ① the judgmental, arrogant, abusive assholes; and ② people who had been victims of said assholes, and so had taken a "heads down gotta look busy" attitude out of fear and shame, instead of a transparent, work-together attitude.

Or to put it another way, ① the people whom you can't ask questions of, because they will call you an idiot and a waste of time; and ② the people who have been called idiots and wastes of time so much that they don't ask questions when they should.

The technical incompetents are straightforward to filter out. Tests like FizzBuzz weed out the people who claim that they can code but actually cannot. It's the attitude incompetents, the collaboration incompetents — the ones who harm other people's capability rather than amplifying it — that are more worth worrying about.
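(For anyone unfamiliar with the test: FizzBuzz asks for nothing more than the following. A minimal sketch in Python — the function name is my own choice, not part of any standard:)

```python
# Minimal FizzBuzz: print the numbers 1..n, replacing multiples of 3
# with "Fizz", multiples of 5 with "Buzz", and multiples of both
# with "FizzBuzz".

def fizzbuzz(n):
    lines = []
    for i in range(1, n + 1):
        if i % 15 == 0:          # divisible by both 3 and 5
            lines.append("FizzBuzz")
        elif i % 3 == 0:
            lines.append("Fizz")
        elif i % 5 == 0:
            lines.append("Buzz")
        else:
            lines.append(str(i))
    return lines

print("\n".join(fizzbuzz(15)))
```

The point of the exercise is precisely that it is trivial: anyone who can code at all can write it, so it cheaply filters out those who cannot.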

(Oh, and everyone's code has to be double-checked.)


Also, stop downvoting comments that you also respond to. That's logically inconsistent — downvoting means something doesn't belong on the site, not that you disagree with it. If it doesn't belong on the site, then responding to it and continuing the conversation also doesn't belong.

Comment author: Azathoth123 20 September 2014 09:28:58PM 2 points [-]

Are my aliefs interfering with my agentiness?

A better example: there are women on the team, my aliefs keep identifying them as worse coders, but my beliefs tell me that women are just as good at coding as men.

In this case it very much matters whether the beliefs or aliefs are true.

Comment author: fubarobfusco 20 September 2014 10:04:42PM *  -2 points [-]

In this case it very much matters whether the beliefs or aliefs are true.

On the contrary, if you're in a situation of collaborating with someone, then it's pretty widely recognized as a bad social habit to be constantly trying to judge them. Even worse if you're judging them on their group memberships (or other generalities) rather than their actual individual performance in the collaboration!

It's impractical to build consensual working relationships with people who notice that you're treating them as inferiors. (Oh hey, are we talking about microaggressions again? Maybe ...)

Comment author: Azathoth123 20 September 2014 06:33:03PM 4 points [-]

"Implicit bias" refers to a measurable unconscious tendency to favor one group over another, even when one doesn't have any explicit beliefs justifying that favoritism. For instance, if you ask someone, "Are green weasels scarier, stinkier, or otherwise less pleasant than blue weasels?" and they (honestly) say that they do not believe so ... but when you look at their behavior, on average they choose to sit further away from green weasels on the bus, that could be described as implicit bias. They claim that they are not repelled by green weasels, but they measurably act like they are.

The problem with this definition is that it's very possible for someone's explicit beliefs to be false and "implicit beliefs" to be true. Thus it is problematic to call this a "bias" without establishing that the underlying beliefs are false.

"Microaggression" strikes me as an epicycle attempting to rescue the theory that race and gender don't correlate with anything. Original theory: all these differences are due to differences in the way society treats these people, so we ban treating them differently and even implement laws requiring preferential treatment. However, the achievement gaps remain; they can't be due to innate differences, because that would be racist and sexist; hence they must be because t̶h̶o̶s̶e̶ ̶e̶v̶i̶l̶ ̶w̶i̶t̶c̶h̶e̶s̶ ̶a̶r̶e̶ ̶c̶u̶r̶s̶i̶n̶g̶ ̶t̶h̶e̶m̶ these evil white men are engaging in microaggressions.

Comment author: fubarobfusco 20 September 2014 09:01:33PM *  0 points [-]

The problem with this definition is that it's very possible for someone's explicit beliefs to be false and "implicit beliefs" to be true. Thus it is problematic to call this a "bias" without establishing that the underlying beliefs are false.

A different approach: Are my aliefs interfering with my agentiness? For instance, if I'm trying to get a project done with a team of programmers, and my aliefs keep identifying the women on my team as "mothers" instead of as "coders" (or more generally "workers"), that might interfere with my ability to usefully work with them towards my explicit goal.

In other words, even if it is true that those women could very well be or become mothers, in the context of deliberately pursuing a goal involving writing code, that isn't pertinent. (It's not as if they're choosing to flash their motherliness at me!) The implicit association of "woman" with "mother" and not "worker" might be encumbering me from being as agenty as I would like to be.


I'm having difficulty reconciling your comment about microaggressions with how I hear the term used, to the extent that I don't think we're talking about the same thing at all. I'm reminded of Davidson on beavers, as cited by Eliezer here.

Comment author: ShardPhoenix 20 September 2014 06:16:34AM *  2 points [-]

For example, here is an informal writeup of a PNAS article finding evidence of bias favouring male over female job applicants when everything about the applications was exactly the same apart from the name.

That's not necessarily irrational in general. The other information on the resume does not prevent the name from also providing potentially relevant information.

Comment author: fubarobfusco 20 September 2014 07:38:43AM 7 points [-]

I'd suggest you look up "screening off" in any text on Bayesian inference. The explanation on the wiki is not really the greatest.

But when you have information that is closer and more specific to the property you're trying to predict, you should expect to increasingly disregard information that is further from it. Even if your prior asserts that sex predicts competence, once you have more direct measures of a particular candidate's competence, those measures should screen off the less direct information in your prior.
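As a numerical illustration (my own toy numbers, a deliberately simple model): suppose a work-sample test is conditionally independent of group membership given competence. Then candidates who start with quite different group-based priors end up with similar posteriors once they pass the same reasonably strong test:

```python
# Screening off, illustrated with Bayes' rule. The test result is assumed
# conditionally independent of group membership given competence, so group
# only enters through the prior — and a strong direct measure makes that
# prior matter much less.

def posterior(prior_competent, p_pass_if_competent=0.95, p_pass_if_not=0.05):
    """P(competent | passed the test), by Bayes' rule."""
    num = p_pass_if_competent * prior_competent
    den = num + p_pass_if_not * (1.0 - prior_competent)
    return num / den

# Identical test result, different group-based priors:
low_prior  = posterior(0.30)   # prior disfavors this candidate's group
high_prior = posterior(0.50)
print(round(low_prior, 3), round(high_prior, 3))
```

The priors differ by 0.20, but after one passed test the posteriors differ by only about 0.06; with further direct evidence the gap keeps shrinking. That is what it means for the direct measure to screen off the prior.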

Comment author: shminux 20 September 2014 01:37:36AM 6 points [-]

I think that "microaggression" is a poor term: it adds negative connotation and restricted usage to standard, if subconsciously biased, human behaviors. The article uses another term, "implicit bias", which has the exact same meaning but without the baggage.

Comment author: fubarobfusco 20 September 2014 05:19:20AM *  9 points [-]

In my experience, "implicit bias" and "microaggression" aren't used to refer to the exact same things — although I can see the analogy.

"Implicit bias" refers to a measurable unconscious tendency to favor one group over another, even when one doesn't have any explicit beliefs justifying that favoritism. For instance, if you ask someone, "Are green weasels scarier, stinkier, or otherwise less pleasant than blue weasels?" and they (honestly) say that they do not believe so ... but when you look at their behavior, on average they choose to sit further away from green weasels on the bus, that could be described as implicit bias. They claim that they are not repelled by green weasels, but they measurably act like they are.

We might link implicit bias to Gendler's concept of alief, or to Kahneman's concept of a System 1 response.

"Microaggression" describes a social exchange that — without deliberately attacking or insulting a group — reinforces negative stereotypes about that group, or an assumption that the group is lower-status or beneath consideration. A few examples:

  • Acting surprised that a person you meet does not match a stereotype reinforces the idea that the stereotype is normal or expected.
  • Telling jokes that depend on having a particular perspective reinforces the idea that this perspective is expected and that people in the conversation who lack it are outsiders.
  • Complimenting someone on their deviation from a negative stereotype may sound positive to people who are not targeted by that stereotype, but still often comes across as an insult. ("He's so pretty for a green weasel!" implies that you expect green weasels to not be pretty.)

The thing that "microaggression" and "implicit bias" have in common is that they're unintentional, and even unrecognized, by the person doing them. A microaggression is a specific act, though, whereas implicit bias is a measured aggregate tendency.

Comment author: Lumifer 18 September 2014 03:28:50PM 1 point [-]

Just a nitpick, but it should be y'all's not ya'll's.

Technically speaking, shouldn't it be all y'all's since it's plural? X-D

Comment author: fubarobfusco 18 September 2014 09:57:22PM 1 point [-]
Comment author: fubarobfusco 18 September 2014 01:30:28AM *  9 points [-]

One of boyd's examples is a pretty straightforward feedback loop, recognizable to anyone with the slightest exposure to systems engineering:

Consider, for example, what’s happening with policing practices, especially as computational systems allow precincts to distribute their officers “fairly.” In many jurisdictions, more officers are placed into areas that are deemed “high risk.” This is deemed to be appropriate at a societal level. And yet, people don’t think about the incentive structures of policing, especially in communities where the law is expected to clear so many warrants and do so many arrests per month. When they’re stationed in algorithmically determined “high risk” communities, they arrest in those communities, thereby reinforcing the algorithms’ assumptions.

This system — putting more crime-detecting police officers (who have a nontrivial false-positive rate) in areas that are currently considered "high crime", and shifting them out of areas currently considered "low crime" — diverges under many sets of initial conditions and incentive structures. You don't even have to posit racism or classism to get these effects (although those may contribute to failing to recognize them as a problem); under the right (wrong) conditions, as t → ∞, the noise (that is, the error in the original believed distribution of crime) dominates the signal.

The ninth of Robert Peel's principles of ethical policing is surprisingly relevant: "To recognise always that the test of police efficiency is the absence of crime and disorder, and not the visible evidence of police action in dealing with them." [1]

Comment author: Azathoth123 17 September 2014 12:48:34AM 5 points [-]

Agreed. I would argue that at this point the word "racism" has no coherent meaning; whether it ever had one is open to debate.

Comment author: fubarobfusco 17 September 2014 06:42:43AM *  7 points [-]

As with many other words — such as "liberal" and "set" — it has rather a lot of meanings and if you are either ① unsure of which one someone means, or ② think you know which one someone means but that meaning makes their sentence ridiculously false, then you are better off asking for clarification than guessing.

The problem is not that "racism" has no coherent meaning. No word carries inherent meaning; and many words quite safely carry multiple or ambiguous meanings without causing problems, because hearers don't panic and throw elementary principles of decent communication out the window when they hear them.

When someone says "set" and a hearer isn't sure whether they mean "set" in the Zermelo-Fraenkel sense or the game sense, the hearer typically asks.

But when someone says "racism", many hearers are likely to react incredibly poorly, even exhibiting the physiological responses of a person who is threatened or becoming enraged.

We might better ask, "Why do they respond so badly to this particular word?" I suspect the answer has a lot to do with fear of being accused of something vile. And I suggest that the poor rationality practice is at least as much on the part of hearers who let this reaction run away with them instead of finding out what is meant, as on the part of speakers who use the word without further explanation.

Comment author: fubarobfusco 16 September 2014 07:20:39AM *  3 points [-]

[Please read the OP before voting. Special voting rules apply.]

Improving the typical human's emotional state — e.g. increasing compassion and reducing anxiety — is at least as significant to mitigating existential risks as improving the typical human's rationality.

The same is true for unusually intelligent and capable humans.

For that matter, unusually intelligent and capable humans who hate or fear most of humanity, or simply don't care about others, are unusually likely to break the world.

(Of course, there are cases where failures of rationality and failures of compassion coincide — the fundamental attribution error, for instance. It seems to me that attacking these problems from both System 1 and System 2 will be more effective than either approach alone.)
