Jason Mitchell is [edit: has been] the John L. Loeb Associate Professor of the Social Sciences at Harvard. He has won the National Academy of Sciences' Troland Award as well as the Association for Psychological Science's Janet Taylor Spence Award for Transformative Early Career Contribution.
Here, he argues against the principle of replicability of experiments in science. Apparently, it's disrespectful and presumptively wrong.
Recent hand-wringing over failed replications in social psychology is largely pointless, because unsuccessful experiments have no meaningful scientific value.
Because experiments can be undermined by a vast number of practical mistakes, the likeliest explanation for any failed replication will always be that the replicator bungled something along the way. Unless direct replications are conducted by flawless experimenters, nothing interesting can be learned from them.
Three standard rejoinders to this critique are considered and rejected. Despite claims to the contrary, failed replications do not provide meaningful information if they closely follow original methodology; they do not necessarily identify effects that may be too small or flimsy to be worth studying; and they cannot contribute to a cumulative understanding of scientific phenomena.
Replication efforts appear to reflect strong prior expectations that published findings are not reliable, and as such, do not constitute scientific output.
The field of social psychology can be improved, but not by the publication of negative findings. Experimenters should be encouraged to restrict their “degrees of freedom,” for example, by specifying designs in advance.
Whether they mean to or not, authors and editors of failed replications are publicly impugning the scientific integrity of their colleagues. Targets of failed replications are justifiably upset, particularly given the inadequate basis for replicators’ extraordinary claims.
This is why we can't have social science. Not because the subject is not amenable to the scientific method -- it obviously is. People are conducting controlled experiments and other people are attempting to replicate the results. So far, so good. Rather, the problem is that at least one celebrated authority in the field hates that, and would prefer much, much more deference to authority.
When natural scientists attempt to replicate famous experiments where the original result was clearly correct, with what probability do they tend to succeed? Is it closer to 1 than, say, .7?
I've suggested on LW before that most attempts at physics experiments go wrong, if one counts physics students' attempts. The standard reaction to a student getting a counterintuitive result is, "well, obviously they messed up the experiment". I notice I feel OK with that response in the case of physics but don't like Mitchell trying it for psychology.
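The question above turns on exactly this: how often does a competent replication of a true effect succeed? A toy Bayes calculation makes the dependence explicit. All the numbers below are illustrative assumptions I've made up for the sketch, not data about any field:

```python
def posterior_real_given_failure(p_real, p_success_if_real, p_success_if_null):
    """P(effect is real | replication failed), by Bayes' rule.

    p_real:            prior probability the published effect is real
    p_success_if_real: chance a replication "succeeds" when the effect is real
    p_success_if_null: chance it spuriously "succeeds" when there is no effect
    """
    p_fail_if_real = 1 - p_success_if_real
    p_fail_if_null = 1 - p_success_if_null
    p_fail = p_real * p_fail_if_real + (1 - p_real) * p_fail_if_null
    return p_real * p_fail_if_real / p_fail

# Mitchell-flavored assumption: replicators bungle often, so even real
# effects fail to replicate 30% of the time. A failure then barely moves us:
print(posterior_real_given_failure(0.8, 0.70, 0.05))  # -> ~0.56

# If competent replications of real effects succeed 95% of the time
# (closer to the physics intuition), the same failure is far more damning:
print(posterior_real_given_failure(0.8, 0.95, 0.05))  # -> ~0.17
```

The point of the sketch is that "nothing can be learned from a failed replication" is itself a quantitative claim: it holds only if you assume replicator error is so common that failure is nearly as likely for real effects as for null ones.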
(I wonder whether biology students have to count chromosomes.)