Simonsohn, a social scientist, investigates the misuse of statistics in his field.

A few good quotes:

The three social psychologists set up a test experiment, then played by current academic methodologies and widely permissible statistical rules. By going on what amounted to a fishing expedition (that is, by recording many, many variables but reporting only the results that came out to their liking); by failing to establish in advance the number of human subjects in an experiment; and by analyzing the data as they went, so they could end the experiment when the results suited them, they produced a howler of a result, a truly absurd finding. They then ran a series of computer simulations using other experimental data to show that these methods could increase the odds of a false-positive result—a statistical fluke, basically—to nearly two-thirds.
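The kind of simulation described here is easy to reproduce in spirit. Below is a minimal sketch in Python, not the authors' actual code, of how just two of the flexibilities in the quote, extra outcome measures and data-dependent stopping, inflate the false-positive rate when there is no real effect at all. The sample sizes, number of measures, and the correlation between them are illustrative assumptions; the published demonstrations stacked more flexibilities and pushed the rate to nearly two-thirds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_null_experiment(n_start=20, n_max=50, step=10, n_dvs=2, alpha=0.05):
    """One simulated experiment with NO true effect, run with two common
    flexibilities: several correlated outcome measures, and optional
    stopping (test, add subjects if not significant, test again)."""
    cov = np.full((n_dvs, n_dvs), 0.5) + 0.5 * np.eye(n_dvs)  # correlated DVs
    a = rng.multivariate_normal(np.zeros(n_dvs), cov, size=n_max)
    b = rng.multivariate_normal(np.zeros(n_dvs), cov, size=n_max)
    n = n_start
    while n <= n_max:
        # Test each outcome measure separately, and also their average.
        treated = [a[:n, j] for j in range(n_dvs)] + [a[:n].mean(axis=1)]
        control = [b[:n, j] for j in range(n_dvs)] + [b[:n].mean(axis=1)]
        for x, y in zip(treated, control):
            if stats.ttest_ind(x, y).pvalue < alpha:
                return True   # report the "significant" result and stop
        n += step             # otherwise collect more subjects and re-test
    return False

n_sims = 5000
rate = sum(one_null_experiment() for _ in range(n_sims)) / n_sims
print(f"false-positive rate: {rate:.1%}")  # nominal 5%; actual is far higher
```

Even this stripped-down version typically lands well above the nominal 5%, which is the whole point: each flexibility looks innocent on its own, and together they manufacture "findings" out of noise.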

Laugh or cry?: "He prefers psychology’s close-up focus on the quirks of actual human minds to the sweeping theory and deduction involved in economics."

Last summer, not long after Sanna and Smeesters left their respective universities, Simonsohn laid out his approach to fraud-busting in an online article called “Just Post It: The Lesson From Two Cases of Fabricated Data Detected by Statistics Alone”. Afterward, his inbox was flooded with tips from strangers. People wanted him to investigate election results, drug trials, the work of colleagues they’d long doubted. He has not replied to these messages. Making a couple of busts is one thing. Assuming the mantle of the social sciences’ full-time Grand Inquisitor would be quite another.

This looks like a clue that there's work available for anyone who knows statistics. Eventually, there will also be a line of work in telling whether a forensic statistician is competent.

 

TimS:

Dear Journal of Negative Results,

Why don't you exist in every field? Why aren't you more prestigious?

Sincerely,

Everyone who thinks science is not attire.

P.S. Props to the Journal of Negative Results in BioMedicine

"there's work available for anyone who knows statistics."

Work that will make many influential editors, scientists, and institutions hate you. And it won't advance you on the normal career track of publications and research.

Fair enough. Do you know whether there are so many people who know statistics that some of them are unlikely to have access to a normal career track?

There may be several problems here. These are my guesses, without special/private data to back them up. I'll use psychology as the name of the field, but this isn't specific to psychology.

  1. Lots of people know statistics, but to criticize the statistics in psychology articles, one is well served to also be familiar with the subject matter of the field. A pure statistician criticizing psychologists from the outside would probably come across as arrogant or offensive, and psychologists wouldn't listen much. Whereas someone who is primarily a psychologist but also a good statistician doesn't really want to make a career out of criticizing and disproving other psychologists.

  2. When you point out a problem with someone else's article, your criticism is judged by third parties in the field. If the problems relate to psychology itself, then presumably other psychologists will correctly evaluate your criticism. But if the problems relate to something external like statistics - and we stipulate here that most psychologists don't understand these problems themselves, because we're talking about common problems - then they won't be able to judge your criticism on its own terms.

    Because people must still reply to officially published criticism, they will be forced to say explicitly that they disagree. And because they will be unable to justify their disagreement on technical statistical grounds, they will rationalize it as politics or a status game. So the likely result would be that people dislike you personally, and by extension your theories, for producing criticism that they don't understand, or that they can't disprove but also can't admit as valid.

    Ideal scientists accept valid criticism, change their minds, and say they were wrong. But even ideal scientists can't do so if they don't really understand the criticism. They are using statistics incorrectly, because they were taught to do so by authority figures or by imitating authoritative works, and only another authoritative figure can effectively tell them they are wrong.

  3. Once this situation exists, it is self-perpetuating. People who understand and care about statistics will observe the state of the field and tend not to become psychologists. People who want to cite widely accepted articles and theories in order to advance their careers will thereby absorb some of the wrong statistics and replicate it in their own work.

Long ago I read an essay by a psychology grad student claiming that it was routine for statisticians to publish articles identifying a particular statistical error and listing a hundred psychology papers in which it occurred. Whenever this happened, the psychologists would immediately check the bibliography to see whether they had been tagged. They accepted the statisticians' authority and were quite embarrassed to be caught, but this did not appear to change how they designed experiments or wrote papers, except perhaps to avoid the errors that had already been identified.

Can an academic psychologist comment on what actually happens?
Whether statisticians routinely publish criticism of psychologists that gets noticed and accepted seems a fairly concrete fact that everyone should agree on, unlike the dynamics Dan speculates about, which could be opaque to those engaging in them.

Maybe this isn't a job for anyone who knows statistics; it's a job for research psychologists who know statistics and have found they're too pugnacious to be happy in a conventional academic career.

That would still not help them get criticism based on technical statistical grounds understood or accepted by other psychologists who are poorly trained in statistics.

Note the last paragraph I quoted: Simonsohn's inbox is full of tips about iffy research of many kinds.

There are two issues: having influence within a profession (difficult) and getting paid (not obvious, but possibly easier than having influence). The path isn't as easy as I thought. Perhaps the best route is looking for a job teaching good statistical practice.

If psychologists want and accept being taught good statistical practice, as Douglas_Knight suggested, then that seems likely to work.

What's the way out?

"Assuming the mantle of the social sciences’ full-time Grand Inquisitor would be quite another."

Maybe we need a nonprofit that takes on the role of Grand Inquisitor?

There are, sadly, too many such cases in the field of psychology.