You're looking at Less Wrong's discussion board. This includes all posts, including those that haven't been promoted to the front page yet.

[Link] Why Science Is Not Necessarily Self-Correcting

6 Post author: ChristianKl 13 October 2014 01:51PM

Why Science Is Not Necessarily Self-Correcting
by John P. A. Ioannidis

The ability to self-correct is considered a hallmark of science. However, self-correction does not always happen to scientific evidence by default. The trajectory of scientific credibility can fluctuate over time, both for defined scientific fields and for science at large. History suggests that major catastrophes in scientific credibility are unfortunately possible and the argument that “it is obvious that progress is made” is weak. Careful evaluation of the current status of credibility of various scientific fields is important in order to understand any credibility deficits and how one could obtain and establish more trustworthy results. Efficient and unbiased replication mechanisms are essential for maintaining high levels of scientific credibility. Depending on the types of results obtained in the discovery and replication phases, there are different paradigms of research: optimal, self-correcting, false nonreplication, and perpetuated fallacy. In the absence of replication efforts, one is left with unconfirmed (genuine) discoveries and unchallenged fallacies. In several fields of investigation, including many areas of psychological science, perpetuated and unchallenged fallacies may comprise the majority of the circulating evidence. I catalogue a number of impediments to self-correction that have been empirically studied in psychological science. Finally, I discuss some proposed solutions to promote sound replication practices enhancing the credibility of scientific results as well as some potential disadvantages of each of them. Any deviation from the principle that seeking the truth has priority over any other goals may be seriously damaging to the self-correcting functions of science.


Comments (17)

Comment author: buybuydandavis 13 October 2014 06:54:56PM 15 points

Don't know if it was apparent to everyone else, but it wasn't apparent to me that the bolded title was also a link.

Comment author: shminux 13 October 2014 10:44:31PM 2 points

The article seems to be heavily biased towards psychology. I wonder if the "harder" sciences like physics, chemistry and biology suffer from the same issues to a similar degree.

Comment author: Salemicus 14 October 2014 01:11:50PM 5 points

The author of the article, Ioannidis, has published extensively on the unreliability of reported medical and biochemical results over a period of more than ten years. The article is not so much "biased" toward psychology as focused on that one area.

Comment author: shminux 14 October 2014 05:07:25PM 2 points

Right, "focusing" is a better description. But I wonder whether this focus leads to a generalization that is a bit too sweeping. The "publish or perish" race is certainly everywhere in academia, but its side effects might be better mitigated in some areas than in others.

Comment author: RichardKennaway 14 October 2014 12:05:28PM 5 points

I think of the work on blue LEDs that recently got the physics Nobel.

Blue LEDs work. You can buy them off the shelf. Each one works pretty much every time.

Is there anything in sociology or psychology of which the same can be said?

Comment author: satt 15 October 2014 12:20:31AM 2 points

Blue LEDs work. You can buy them off the shelf. Each one works pretty much every time.

Is there anything in sociology or psychology of which the same can be said?

Depends on whether "Each one works pretty much every time" means a phenomenon which works on pretty much every individual on pretty much every occasion, or a phenomenon which can simply be replicated reliably given a big enough sample.

I can think of nothing in sociology or psychology satisfying the former criterion. But the latter, weaker criterion seems to be satisfied by anchoring bias, which was replicated by 36 sites out of 36 in the Many Labs project, as indicated by its table of summary statistics.
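As a side note on what "36 sites out of 36" buys you statistically, here is a minimal sketch (Python, standard library only; the function name and the choice of a one-sided Clopper-Pearson bound are my own illustration, not something from the thread or the Many Labs report). When all n trials succeed, the one-sided (1 − α) Clopper-Pearson lower bound on the success probability p solves p^n = α, i.e. p = α^(1/n):

```python
def lower_bound_all_successes(n: int, alpha: float = 0.05) -> float:
    """One-sided (1 - alpha) lower confidence bound on the per-trial
    success probability when all n independent trials succeed.

    For k = n successes the Clopper-Pearson bound reduces to solving
    p**n = alpha, so the bound is alpha**(1/n)."""
    return alpha ** (1.0 / n)

if __name__ == "__main__":
    # 36 successful replications out of 36 sites, as in Many Labs:
    lb = lower_bound_all_successes(36)
    print(f"95% lower bound on per-site replication probability: {lb:.3f}")
```

For n = 36 this comes out around 0.92, which is one way to make "can be replicated reliably given a big enough sample" precise: even a perfect 36-for-36 record only licenses about 92% confidence per site at the 95% level.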

Comment author: Desrtopa 15 October 2014 03:11:04PM 3 points

Whether one counts anything in psychology as satisfying the former criterion depends, I think, on where one draws the line between psychology and neurology. There are certainly things we've discovered about how the brain works that tell us something about the thought processes of every human, but one might argue that these fall under the purview of neurology rather than psychology.

Comment author: RichardKennaway 15 October 2014 06:15:49AM 1 point

Depends on whether "Each one works pretty much every time" means a phenomenon which works on pretty much every individual on pretty much every occasion, or a phenomenon which can simply be replicated reliably given a big enough sample.

Definitely the former. Each one, every time. The world around us is filled with such things, yet when it comes to the study of anything to do with living organisms, people dismiss the idea as "physics envy", a concept which makes no more sense than "separate magisteria", and serves the same function.

Comment author: [deleted] 14 October 2014 05:50:09PM 1 point

CBT?

Comment author: ChristianKl 15 October 2014 03:02:49PM 1 point

CBT has a proven chance of helping, but it doesn't have a 100% success rate for anything.

Comment author: CronoDAS 14 October 2014 01:30:04AM 4 points

If you count medicine as a subfield of biology, people are already well aware of problems there...

Comment author: ChristianKl 13 October 2014 10:49:49PM 2 points

In psychology you have the controversial replication initiative. In physics, nobody complains about people attempting replications.

Comment author: RichardKennaway 14 October 2014 12:06:29PM 6 points

In physics, nobody complains about people attempting replications.

Perhaps that is because the stuff actually replicates.

Comment author: buybuydandavis 13 October 2014 07:05:55PM 2 points

Wasn't there a recent thread on exactly this brouhaha in psychology over replication? Maybe even linking to this article?

Comment author: ChristianKl 13 October 2014 08:53:06PM 3 points

There was a thread about some psychologists stating that the replication initiative does more harm than good.

Comment author: TheMajor 13 October 2014 04:12:45PM 0 points

Finally, I discuss some proposed solutions to promote sound replication practices enhancing the credibility of scientific results

Which would these be? I skimmed through the article and found nothing beyond the standard 'truth must become more important', and I doubt if that should even be called a solution.

Comment author: satt 13 October 2014 11:02:42PM 4 points

Which would these be? I skimmed through the article and found nothing beyond the standard 'truth must become more important', and I doubt if that should even be called a solution.

I guess it's these, from the last section of the main text:

Some suggestions for potential amendments that can be tested have been made in previous articles (Ioannidis, 2005; Young, Ioannidis, & Al-Ubaydli, 2008) and additional suggestions are made also by authors in this issue of Perspectives. Nosek et al. (2012) provide the most explicit and extensive list of recommended changes, including promoting paradigm-driven research; use of author, reviewer, editor checklists; challenging the focus on the number of publications and journal impact factor; developing metrics to identify what is worth replicating; crowdsourcing replication efforts; raising the status of journals with peer review standards focused on soundness and not on the perceived significance of research; lowering or removing the standards for publication; and, finally, provision of open data, materials, and workflow. Other authors are struggling with who will perform these much-desired, but seldom performed, independent replications. Frank and Saxe (2012) and Grahe et al. (2012) suggest that students in training could populate the ranks of replicators. Finally, Wagenmakers et al. (2012) repeat the plea for separating exploratory and confirmatory research and demand rigorous a priori registration of the analysis plans for confirmatory research.