I am confused by part of this. Under "Most Grant Applications are Bad," the primary piece of evidence for that assertion is that only about 25% of grant applications get funded, therefore 75% are bad. That could be because the government splits its pool of money among all the "good" ones, or it could be because there's only enough money to fund 25% of the projects. If the government decided to double its budget for psych research, would it then appear that only 50% of grant proposals are bad?
Furthermore, we should expect grants to go to those projects that show the most promise for publishing. "Publishable" does not mean "good," and publication bias is one of the biggest pathologies of modern science. This is a lousy metric.
Same here. I had to downgrade all the beliefs I had formed from psychology textbooks. I wasn't aware of how prevalent scientific fraud is in the field of psychology. On a side note, what about the psychological studies often quoted in the Sequences? How much can they be trusted?
Cutting away everything else, the important symptom given in the paper is I.F: if you're not doing experiments that replicate, then you aren't finding out anything. All the other symptoms are basically irrelevant, or are consequences of I.F. And the central cause of I.F seems to be given in III.B: apparently the only standard for psychological research is that you can mathematically torture at least one correlation of p < .05 out of the data.
Well, if you've got enough factors that you're measuring, and are willing to go through enough orders of analysis, you can almost certainly find a correlation that is "significant". And finding it won't actually teach you anything.
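This is easy to see with a quick simulation (my own illustration, not from the paper): generate a bunch of factors that are pure noise, test every pair for correlation, and some pairs will come out "significant" at p < .05 anyway.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# 20 unrelated "measured factors" for 50 subjects: pure noise by construction.
n_subjects, n_factors = 50, 20
data = rng.standard_normal((n_subjects, n_factors))

# Test every pair of factors for a correlation "significant" at p < .05.
significant = []
for i in range(n_factors):
    for j in range(i + 1, n_factors):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        if p < 0.05:
            significant.append((i, j, r, p))

n_pairs = n_factors * (n_factors - 1) // 2  # 190 pairwise tests
print(f"{len(significant)} of {n_pairs} noise pairs are 'significant' at p < .05")
```

With 190 tests at a 5% false-positive rate you expect roughly 9 or 10 "significant" correlations in data that contains no signal at all, which is exactly why a single uncorrected p < .05 proves nothing.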
So, assuming the paper is correct on those points, the problem with psychology-as-a-science is that it collects random noise and assigns meaning to it, and teaches its students to do the same.
Within the narrow circles of our particular fields of interest, many of us learn that there are certain investigators who stand out from the herd because their findings can be trusted.
I wonder if it would be possible for psychology to "bootstrap itself" by studying these folks who produce trustworthy findings and figuring out what they're doing right.
On a related note, have there been any psychological studies of bias in researchers who study biases for their careers? I know there have been attempts at "debiasing" interventions, which have had only limited success at knocking out well-known biases. But if even the researchers who study biases fall prey to them then things really are hopeless.
Coming up with interesting ideas for psychology experiments seems pretty easy. Maybe the smart/intellectually curious folk are going into hard science?
I wonder if it's no coincidence that Feynman also chose rat psychology experiments as examples of bad science...
Well, given the linked paper quotes Feynman's "Cargo Cult Science" (sourcing it to Surely You're Joking, Mr. Feynman), I think it's safe to assume the author was familiar with Feynman's use of rat experiments as an example.
AFAIK Psychology doesn't hold the empirical findings of cogsci and econ in any particularly favorable light, so I ignore it.
Do you mean the average psychologist, the average elite academic psychologist, or what? Experimental econ is psychology, and lots of psychologists study it. I have no idea what the average psychologist thinks about supply and demand or eye tracking, though.
I spoke with several average psychologists, became concerned, then read some widely cited psychology papers. I didn't see any evidence of high-quality analysis. All of it struck me as a severe case of déformation professionnelle.
Came across this article, published in 1991 but hardly dated:
David T. Lykken, What's Wrong With Psychology, Anyway? (PDF, 39 pages)
Anyone who's interested in psychology as a science might, I think, find it fascinating. Lots of stuff there about rationality-related failures of academic psychology. Several wonderful anecdotes, of which I'll quote one in full that had me laughing out loud --
(I came across the reference to the article in the HN discussion of a project, of independent interest, to try to replicate a sample of articles published in a given year in three reputable psychology journals.)