One Study, Many Results (Matt Clancy)
I didn't see this post, its author, or the study involved elsewhere on LW, so I'm crossposting the content. Let me know if this is redundant, and I'll take it down.

Summary

This post looks at cases where teams of researchers all began with the same data, then used it to answer the same question, and got a range of different answers depending on their approaches to statistical testing, "judgment calls", etc. This shows the difficulty of doing good replication work even in the absence of publication bias; none of the teams here had any special incentive to come up with a particular result, and they all seemed to be doing their best to really answer the question.

I'll also copy the post's conclusion here:

> More broadly, I take away three things from this literature:
>
> 1. Failures to replicate are to be expected, given the state of our methodological technology, even in the best circumstances, even if there's no publication bias.
> 2. Form your ideas based on suites of papers, or entire literatures, not primarily on individual studies.
> 3. There is plenty of randomness in the research process for publication bias to exploit. More on that in the future.

The post

Science is commonly understood as being a lot more certain than it is. In popular science books and articles, an extremely common approach is to pair a deep dive into one study with an illustrative anecdote. The implication is that's enough: the study discovered something deep, and the anecdote made the discovery accessible. Or take the coverage of science in the popular press (and even the academic press): most coverage revolves around highlighting the results of a single new (cool) study. Again, the implication is that one study is enough to know something new. This isn't universal, and I think coverage has become more cautious and nuanced in some outlets during the era of covid-19, but it's common enough that for many people "believe science" is a sincere mantra, as if science m
I've been enjoying the Sold a Story podcast, which explains how many schools stopped teaching kids to read over the last few decades, replacing phonics with an unscientific theory that taught kids to pretend to read (cargo cult vibes). It features a lot of teachers and education scholars who come face-to-face with evidence that they've been failing kids, and respond in many different ways — from pro-phonics advocacy and outright apology to complete refusal to engage. I especially liked one teacher musing on how disconcerting it was to realize her colleagues were "refuse to engage" types.
The relatable topic and straightforward reporting make the podcast very accessible. It's a good way to share, with people outside the LessWrong bubble, a story that may make them angry in a way that supports rationalist virtues.