I've noticed that even on LessWrong, there is such a thing as knowledge deemed better left unknown. Apparently this is referred to as the basilisk's gaze (I've yet to manage to read anything deemed dangerous here before it was deleted, so I could be wrong about the details).
It seems to me that a lot of the "Don't suggest that there are racial differences in IQ" position is actually based on a hidden belief that looking at the possibility of racial differences is gazing at a basilisk.
Suppose you are an employer hiring for a position, using an examination where performance is correlated with intelligence. It is essentially harmless to take the position, "My prior is that whites have higher IQs on average than blacks, so I expect the average score of the white applicants to be higher than the average score of the black applicants."
What the opponents of acknowledging racial differences are worried about is that the employer will go a step further and say, "This particular black applicant scored exceptionally well on the examination, but since I know that blacks in the aggregate have lower IQs, I'm going to treat my prior and the examination as separate pieces of evidence and scale my assessment of the candidate's intelligence downward from what the exam alone would suggest," instead of letting the prior be swamped by the examination.
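For concreteness, here is a minimal sketch of that "swamping" as a normal-normal conjugate update in Python. Every number in it (the hypothetical group prior, the candidate's score, the test's measurement noise) is an illustrative assumption, not real data:

```python
# Minimal sketch of the normal-normal Bayesian update described above.
# All numbers are illustrative assumptions, not real data.

def posterior(prior_mean, prior_sd, observation, obs_sd):
    """Combine a Gaussian prior with one Gaussian observation,
    weighting each by its precision (1 / variance)."""
    prior_prec = 1.0 / prior_sd**2
    obs_prec = 1.0 / obs_sd**2
    post_var = 1.0 / (prior_prec + obs_prec)
    post_mean = post_var * (prior_prec * prior_mean + obs_prec * observation)
    return post_mean, post_var**0.5

# Hypothetical group prior: mean 95, sd 15.
# The candidate scores 130 on an exam with measurement noise sd 5.
mean, sd = posterior(prior_mean=95.0, prior_sd=15.0,
                     observation=130.0, obs_sd=5.0)
print(f"posterior mean = {mean:.1f}, sd = {sd:.1f}")
# posterior mean = 126.5, sd = 4.7
```

With a reliable exam (small measurement noise), the posterior lands near the raw score, which is what swamping means here: the group prior shifts the estimate only slightly. A noisy exam would let the prior pull much harder, which is exactly the failure mode the paragraph above is worried about.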
This is on top of (legitimately) expecting that the average person won't understand the difference between the layman's concept of "race" and the more scientifically rigorous concepts of "population" and "cohort."
In the wider world, unlike on LessWrong, openly coming out and saying "Considering this idea is like gazing at a basilisk" would end disastrously. So people go with "This idea is false" instead.
I think it's very hard for people to overcome their priors here even after getting contradictory evidence. Does being a Bayesian and treating such beliefs as mere priors really work on all levels of your mind?
Today's post, Why Are Individual IQ Differences OK? was originally published on 26 October 2007. A summary (taken from the LW wiki):
Discuss the post here (rather than in the comments to the original post).
This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was No One Knows What Science Doesn't Know, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.
Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.