That's the obvious brute-force solution, but a possibly more elegant route is just to have an international trials register. This suggestion has been around for a while, and it should be significantly less costly (and less controversial) than the pre-commit-to-publishing route, while still giving some useful tools for checking on things like publication bias, double publication, etc.
Scrutinize claims of scientific fact in support of opinion journalism.
Even with honest intent, it is difficult to apply science correctly, and dishonest uses are rarely punished. Citing a scientific result gives an easy patina of authority, one a casual reader rarely scratches. Without actually lying, the arguer may select, from dozens of studies, only the few with the strongest effect in their favor, even when the overall body of evidence points to no effect or in the opposite direction. The reader sees only "statistically significant evidence for X". In some fields, the majority of published studies claim unjustified significance in order to gain publication, which invites exactly these abuses.
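To make the cherry-picking point concrete, here is a toy simulation (my own illustration, not from any of the pieces discussed): forty studies of an effect whose true size is zero, from which an arguer cites only the three most favorable. The full set of studies averages out to nothing, but the cited subset looks like solid evidence.

```python
# Hypothetical illustration: 40 studies of a true-null effect, each with 50
# observations. The cherry-picker reports only the three largest estimates.
import numpy as np

rng = np.random.default_rng(0)
n_studies, n_per_study = 40, 50

# Each study's estimated effect; the true effect is exactly 0.
estimates = rng.normal(loc=0.0, scale=1.0, size=(n_studies, n_per_study)).mean(axis=1)
std_err = 1.0 / np.sqrt(n_per_study)
z_scores = estimates / std_err

print("mean effect across all 40 studies:", round(estimates.mean(), 3))   # ~0
print("studies with z > 1.96 by chance:", int((z_scores > 1.96).sum()))

# The arguer cites only the three most favorable studies.
top3 = np.sort(estimates)[-3:]
print("mean effect in the three cited studies:", round(top3.mean(), 3))   # clearly positive
```

Nothing here requires dishonesty about any individual study; the distortion comes entirely from which studies get mentioned.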
Here are two recent examples:
- Susan Pinker, a psychologist, in NYT's "Do Women Make Better Bosses?"
- Megan McArdle, linked from the LW article The Obesity Myth
On the second example, Mike, a biologist, gives an exasperated explanation of what heritability actually means.
Susan Pinker's female-boss-brain cheerleading is refuted by Gabriel Arana. A specific scientific claim Pinker makes ("the thicker corpus callosum connecting women's two hemispheres provides a swifter superhighway for processing social messages") is contradicted by a meta-analysis ("Sex Differences in the Human Corpus Callosum: Myth or Reality?"), and without it, all that remains is a just-so evolutionary-psychology argument.
The Bishop and Wahlsten meta-analysis finds that the only consistent result is a slightly larger average whole-brain size and a very slightly larger corpus callosum in adult males. Here are some highlights:
Obviously, if journals won't publish negative results, this weakens the effective statistical significance of the positive results we do read. The authors don't consider this a serious problem for this particular literature (the above complaint isn't typical).
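A rough back-of-envelope sketch of why the file drawer matters (my numbers, not the meta-analysis'): if K independent groups test a true null at alpha = 0.05 and only significant results get written up, the chance that at least one spurious "positive" study reaches print is 1 - (1 - alpha)^K, so a lone published positive result carries much less evidence than its nominal p-value suggests.

```python
# Rough illustration, assuming each hidden attempt is an independent test at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - alpha) ** k
    print(f"{k:2d} hidden attempts -> P(>=1 published false positive) = {p_at_least_one:.2f}")
```

With twenty quiet attempts, a "significant" publication becomes more likely than not even when there is nothing to find.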
This effect is especially notable in media coverage of health and diet research.
This is disturbing. I suspect that many authors are hesitant to subject themselves to the sort of scrutiny they ought to welcome.
This is either rank incompetence or, even worse, a sign of the temptation to extract some positive result from costly data collection.