Came across this article, published in 1991 but hardly dated:

David T. Lykken, What's Wrong With Psychology, Anyway? (PDF, 39 pages)

Anyone who's interested in psychology as a science might, I think, find it fascinating. Lots of stuff there about rationality-related failures of academic psychology. Several wonderful anecdotes, of which I'll quote one in full that had me laughing out loud --

In the 1940s and ’50s, there was a torrent of interest and research surrounding the debate between the S-R [Stimulus-Response] reinforcement theorists at Yale and Iowa City and the S-S [Stimulus-Stimulus] expectancy theorists headquartered at Berkeley. As is usual in these affairs, the two sides produced not only differing theoretical interpretations but also different empirical findings from their rat laboratories, differences that ultimately led Marshall Jones to wonder if the researchers in Iowa and California might not be working with genetically different animals. Jones obtained samples of rats from the two colonies and tested them in the simple runway situation. Sure enough, when running time was plotted against trial number, the two strains showed little overlap in performance. The Iowa rats put their heads down and streaked for the goal box, while the Berkeley animals dawdled, retraced, investigated, appeared to be making “cognitive maps” just as Tolman always said. But by 1965 the torrent of interest in latent-learning had become a backwater and Jones's paper was published obscurely (Jones & Fennel, 1965).

(I came across the reference to the article in the HN discussion of a project, of independent interest, that aims to replicate a sample of articles from three reputable psychology journals in a given year.)

15 comments

I am confused by part of this. Under "Most Grant Applications are Bad," the primary piece of evidence for that assertion is that only about 25% of grant applications get funded, therefore 75% are bad. That could be because the government splits its pool of money among all the "good" ones, or it could be because there's only enough money to fund 25% of the projects. If the government decided to double its budget for psych research, would it then appear that only 50% of grant proposals are bad?

Furthermore, we should expect grants to go to those projects that show the most promise for publishing. "Publishable" does not mean "good," and publication bias is one of the biggest pathologies of modern science. This is a lousy metric.

And can we even trust the government to choose the best grant proposals?

Same here. I had to downgrade all my beliefs that were based on hard psychology textbooks. I was hardly aware of how prevalent scientific fraud is in the field of psychology. On a side note, what about the psychological studies oft quoted in the Sequences? How much can they be trusted?

TimS:

Why stop at psychology? Lots of scientific papers are retracted.

see:

Cutting away everything else, the important symptom given in the paper is I.F. If you're not doing experiments that replicate, then you aren't finding out anything. All the other symptoms are basically irrelevant, or consequences of I.F. And the central cause of I.F seems to be identified down in III.B. Apparently the only standard for psychological research is that you can mathematically torture at least one correlation of p < .05 out of the data.

Well, if you've got enough factors that you're measuring, and are willing to go through enough orders of analysis, you can almost certainly find a correlation that is "significant". And finding it won't actually teach you anything.
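The claim above is easy to check by simulation. Below is a minimal sketch (my own illustration, not anything from the paper): generate a purely random "outcome" and forty purely random "factors", then count how many factors correlate with the outcome at p < .05. The p-value uses a normal approximation via Fisher's z-transform, which is adequate for this sketch; all sample sizes and factor counts are made up for illustration.

```python
# Multiple-comparisons sketch: with enough measured factors, pure noise
# almost always yields at least one "significant" (p < .05) correlation.
import random
import math

random.seed(0)

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

def approx_p(r, n):
    """Two-sided p-value via Fisher's z-transform (normal approximation)."""
    z = 0.5 * math.log((1 + r) / (1 - r)) * math.sqrt(n - 3)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_subjects, n_factors = 50, 40  # arbitrary illustrative sizes
outcome = [random.gauss(0, 1) for _ in range(n_subjects)]
factors = [[random.gauss(0, 1) for _ in range(n_subjects)]
           for _ in range(n_factors)]

significant = sum(1 for f in factors
                  if approx_p(pearson_r(f, outcome), n_subjects) < 0.05)
print(f"{significant} of {n_factors} pure-noise factors reach p < .05")
```

With a 5% false-positive rate per test, roughly two of the forty noise factors will come out "significant" on average, and the chance of at least one is about 1 - 0.95^40 ≈ 87% — which is the whole problem if only the significant one gets written up.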

So, assuming the paper is correct on those points, the problem with psychology-as-a-science is that it collects random noise and assigns meaning to it, and teaches its students to do the same.

The replication examples (like the schizophrenia one) are pretty interesting.

Within the narrow circles of our particular fields of interest, many of us learn that there are certain investigators who stand out from the herd because their findings can be trusted.

I wonder if it would be possible for psychology to "bootstrap itself" by studying these folks who produce trustworthy findings and figuring out what they're doing right.

On a related note, have there been any psychological studies of bias in researchers who study biases for their careers? I know there have been attempts at "debiasing" interventions, which have had only limited success at knocking out well-known biases. But if even the researchers who study biases fall prey to them then things really are hopeless.

Coming up with interesting ideas for psychology experiments seems pretty easy. Maybe the smart/intellectually curious folk are going into hard science?

The Hacker News link is broken.

Fixed, thank you.

I wonder if it's no coincidence that Feynman also chose rat psychology experiments as examples of bad science...

see:

Well, given the linked paper quotes Feynman's "Cargo Cult Science" (sourcing it to Surely You're Joking, Mr. Feynman), I think it's safe to assume the author was familiar with Feynman's use of rat experiments as an example.

AFAIK psychology doesn't hold the empirical findings of cogsci and econ in any particularly favorable light, so I ignore it.

Do you mean the average psychologist, the average elite academic psychologist, or what? Experimental econ is psychology, and lots of psychologists study it. I have no idea what the average psychologist thinks about supply and demand or eye tracking, though.

I spoke with several average psychologists, became concerned, then read some widely cited psychology papers. I didn't see any evidence of high-quality analysis. All struck me as severe cases of déformation professionnelle.