Okay, I'm reading the article now. I am no expert in this area, but it seems to just be wrong.
First, it is patently false that "heritability says nothing about how much the over-all level of the trait is under genetic control." Heritability is defined in a way that is designed to tell you how much of the trait is under genetic control. That's its purpose. It's not a perfect measure, but it's wrong to say that it tells you nothing about what it's designed to tell you something about.
I expect the textbook example — the heritability of number of arms — is presented as an example of when heritability measurements go wrong, not as an example of what heritability is supposed to measure.
The author's argument is that heritability is variance associated with different genotypes over total variance; all members of the population have different genes; therefore, everything has 100% heritability. Furthermore, the author goes on to say, there are interactions between genetics and environment, and other factors that are correlated with genetics, and so your heritability measurement isn't meaningful anyway.
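To make that definition concrete — heritability as a ratio of variances across a population, which need not be anywhere near 100% — here is a minimal sketch under an assumed purely additive model with invented variance components (not any particular study's method):

```python
import random

random.seed(0)

# Toy additive model (made-up variances):
#   phenotype = genetic value + environmental value
# Narrow-sense heritability is the variance ratio the author describes:
#   h^2 = Var(genetic) / Var(phenotype)
n = 100_000
genetic = [random.gauss(0, 1.0) for _ in range(n)]      # Var ≈ 1
environment = [random.gauss(0, 2.0) for _ in range(n)]  # Var ≈ 4
phenotype = [g + e for g, e in zip(genetic, environment)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

h2 = var(genetic) / var(phenotype)  # expected: 1 / (1 + 4) = 0.2
print(round(h2, 2))
```

Note that h² comes out near 0.2 here even though every simulated individual has different genes; the ratio depends on how much of the phenotypic variance the genetic variance accounts for, not on the mere fact of genetic differences.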
This is wrong, for several reasons:
- It would require psychologists to sequence the DNA of their subjects.
- If it were correct, psychologists would eventually have noticed that everything had 100% heritability.
- Psychologists design heritability experiments so that some pairs in the population share more genes than other pairs do.
- Psychologists design experiments to control for the other factors correlated with genetics; failing to do so is a design flaw, not a property of heritability itself.
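The third point is the classical twin design: identical (MZ) twins share all their segregating genes, fraternal (DZ) twins share about half on average, and Falconer's estimate is h² = 2(r_MZ − r_DZ). A toy simulation with invented variance components (true h² set to 0.5):

```python
import random

random.seed(1)

def correlation(pairs):
    xs = [a for a, _ in pairs]
    ys = [b for _, b in pairs]
    n = len(pairs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

# Toy model: genetic variance 1, environmental variance 1, so true h^2 = 0.5.
n_pairs = 50_000
mz, dz = [], []
for _ in range(n_pairs):
    g = random.gauss(0, 1)                # MZ twins share all genetic value
    mz.append((g + random.gauss(0, 1), g + random.gauss(0, 1)))
    shared = random.gauss(0, 0.5 ** 0.5)  # DZ twins share half (Var = 0.5)
    dz.append((shared + random.gauss(0, 0.5 ** 0.5) + random.gauss(0, 1),
               shared + random.gauss(0, 0.5 ** 0.5) + random.gauss(0, 1)))

# Falconer's estimate: h^2 = 2 * (r_MZ - r_DZ), which should land near 0.5
h2_est = 2 * (correlation(mz) - correlation(dz))
print(round(h2_est, 2))
```

The point is the design: the two kinds of pairs differ in genetic sharing by construction, so (under the model's assumptions) no DNA sequencing is needed to separate genetic from environmental variance.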
I don't think the author is really saying that people are misunderstanding the technical definition of 'heritability'. He is saying that all of the studies of IQ have been poorly designed, and so didn't measure actual heritability.
The web page linked to seems politically motivated, aiming to show that IQ is not genetic. I also note that I read half of the book he refers to, which was written in response to The Bell Curve, and as science it was a lousy book. My recollection is that it was long on moralizing and on attempts to associate The Bell Curve with Bad Things, but not good at finding errors in the book it condemned so vigorously. It was also motivated by the same politics. It reminded me of what Einstein said when a hundred authors wrote a book against relativity: "If they had been right, one author would have been enough."
Godwin's Law! I win!
I think I can even call "large group of eminent scientists writes a politically-motivated but scientifically weak book refuting another book" a trope, since the same thing happened with the "Against Sociobiology" letter from Gould and others.
Scrutinize claims of scientific fact in support of opinion journalism.
Even with honest intent, it's difficult to apply science correctly, and dishonest uses are rarely punished. Citing a scientific result gives an easy patina of authority, which a casual reader rarely scratches. Without actually lying, an arguer may select, from dozens of studies, only the few with the strongest effect in their favor, even when the overall body of evidence points at no effect or in the opposite direction. The reader sees only "statistically significant evidence for X". In some fields, the majority of published studies claim unjustified significance in order to gain publication, which invites these abuses.
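The select-the-strongest-study trick is easy to demonstrate: even when the true effect is exactly zero, the "best" of twenty studies will usually clear the conventional significance bar. A sketch with made-up study sizes:

```python
import random

random.seed(2)

def null_t_stat(n=30):
    # One "study" in which the true effect is exactly zero:
    # both groups are drawn from the same distribution.
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    return (ma - mb) / ((va + vb) / n) ** 0.5

trials = 1000
hits = 0
for _ in range(trials):
    # The arguer runs 20 null studies and reports only the strongest.
    best = max(abs(null_t_stat()) for _ in range(20))
    if best > 2.0:  # roughly the conventional p < .05 cutoff
        hits += 1

# The cherry-picked study looks "significant" in roughly
# 1 - 0.95**20 ≈ 64% of trials, despite zero true effect everywhere.
print(hits / trials)
```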
Here are two recent examples:
- Susan Pinker, a psychologist, in the NYT's "Do Women Make Better Bosses?"
- Megan McArdle, linked from the LW article The Obesity Myth
Mike, a biologist, gives an exasperated explanation of what heritability actually means:
Susan Pinker's female-boss-brain cheerleading is refuted by Gabriel Arana. A specific scientific claim Pinker makes ("the thicker corpus callosum connecting women's two hemispheres provides a swifter superhighway for processing social messages") is contradicted by a meta-analysis (Sex Differences in the Human Corpus Callosum: Myth or Reality?), and without it you have only a just-so evolutionary-psychology argument.
The Bishop and Wahlsten meta-analysis claims that the only consistent finding is for slightly larger average whole brain size and a very slightly larger corpus callosum in adult males. Here are some highlights:
Obviously, if journals won't publish negative results, this weakens the effective statistical significance of the positive results we do read. The authors don't find this to be a serious problem for this topic (the complaint above isn't typical).
This effect is especially notable in media coverage of health and diet research.
This is disturbing. I suspect that many authors are hesitant to subject themselves to the sort of scrutiny they ought to welcome.
This is either rank incompetence or, even worse, yielding to the temptation to extract some positive result from costly data collection.