Have journals approve study designs for publication in advance, including all statistical tools to be used; then run the study, perform the preselected analysis, and publish the results, regardless of whether they come out positive or negative.
Brilliant.
Maybe a notary service for such plans would become popular from the ground up. Of course, to get voluntary adoption, you'd have to guarantee secrecy for a desired time period (even though the interests of science would be best served by early publicity, scientists want to protect their priority).
Let's see, just the right protocol for signing/encrypting, and ... never mind, it will never be used until some high-status scientists want to show off ;)
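The protocol could be as simple as a hash commitment: publish a digest of the preregistered plan today (preserving secrecy and establishing priority), then reveal the plan itself after the study runs. A minimal sketch, assuming nothing beyond SHA-256; the function names are illustrative, not any existing notary service:

```python
import hashlib
import secrets

def commit(plan: bytes) -> tuple[str, bytes]:
    """Commit to a study plan: returns a public digest and a secret nonce.
    The random nonce prevents brute-forcing short or guessable plans."""
    nonce = secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + plan).hexdigest()
    return digest, nonce

def reveal_ok(digest: str, nonce: bytes, plan: bytes) -> bool:
    """Anyone can later check that the revealed plan matches the commitment."""
    return hashlib.sha256(nonce + plan).hexdigest() == digest

plan = b"Preregistered analysis: two-sided t-test, alpha = 0.05, n = 120"
digest, nonce = commit(plan)           # publish only the digest now
assert reveal_ok(digest, nonce, plan)  # later: reveal plan + nonce
assert not reveal_ok(digest, nonce, b"a quietly altered plan")
```

The digest reveals nothing about the plan's contents, but once published (or timestamped by a third party), the plan can't be changed without detection.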
Scrutinize claims of scientific fact in support of opinion journalism.
Even with honest intent, it's difficult to apply science correctly, and dishonest uses are rarely punished. Citing a scientific result lends an easy patina of authority, which a casual reader rarely scratches. Without actually lying, the arguer may select from dozens of studies only the few with the strongest effect in their favor, when the overall body of evidence points to no effect, or even in the opposite direction. The reader sees only "statistically significant evidence for X". In some fields, the majority of published studies claim unjustified significance in order to gain publication, which encourages these abuses.
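How easy is this selection trick? A minimal simulation, assuming fifty independent studies of an effect that is actually zero: by chance alone, the "best" study will typically look significant, and that's the one the arguer cites.

```python
import math
import random

random.seed(0)

def study_p_value(n: int = 100) -> float:
    """One null study: n draws from N(0, 1), two-sided z-test on the mean.
    The true effect is exactly zero, so any 'significance' is noise."""
    mean = sum(random.gauss(0, 1) for _ in range(n)) / n
    z = mean * math.sqrt(n)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# An arguer with 50 null studies to pick from cites only the strongest one.
p_values = [study_p_value() for _ in range(50)]
print(f"smallest p among 50 null studies: {min(p_values):.4f}")
print(f"fraction 'significant' at 0.05:  {sum(p < 0.05 for p in p_values) / 50:.2f}")
```

With a 5% false-positive rate per study, the chance that at least one of fifty null studies crosses p < 0.05 is about 1 - 0.95^50 ≈ 92% — cherry-picking manufactures significance from nothing.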
Here are two recent examples:
- Susan Pinker, a psychologist, in the NYT's "Do Women Make Better Bosses?"
- Megan McArdle, linked from the LW article The Obesity Myth
Mike, a biologist, gives an exasperated explanation of what heritability actually means:
Susan Pinker's female-boss-brain cheerleading is refuted by Gabriel Arana. A specific scientific claim Pinker makes ("the thicker corpus callosum connecting women's two hemispheres provides a swifter superhighway for processing social messages") is contradicted by a meta-analysis (Sex Differences in the Human Corpus Callosum: Myth or Reality?), and without it, you're left with only a just-so evolutionary-psychology argument.
The Bishop and Wahlsten meta-analysis finds that the only consistent result is a slightly larger average whole-brain size and a very slightly larger corpus callosum in adult males. Here are some highlights:
Obviously, if journals won't publish negative results, the effective statistical significance of the positive results we do read is weakened. The authors don't find this to be a significant problem for their topic (the complaint above isn't typical).
This effect is especially notable in media coverage of health and diet research.
This is disturbing. I suspect that many authors are hesitant to subject themselves to the sort of scrutiny they ought to welcome.
This is either rank incompetence or, even worse, yielding to the temptation to extract some positive result from the costly data collection.