I've only ever seen publication bias taught with made-up or near-miss examples.  Has anyone got a really well-documented case in which:

* (About) nine people independently get the idea for the same experiment because it seems like the effect should be there; each checks the literature, sees that nothing has been published on it, runs the experiment, and gets a (true) null result.

* The tenth experiment is eventually published, reporting an NHST effect at about p = 0.10 (see the sketch after this list for how likely that is by chance alone).

* The slow (g)rumbling of science eventually surfaces the nine earlier, unpublished versions of the experiment, and someone catches it and writes it all up, with citations, dates, and the specifics of the effect these ten people were chasing.
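
For intuition on how likely this scenario is by chance alone: under a true null, p-values are uniform on [0, 1], so the chance that at least one of ten independent experiments comes in at p ≤ 0.10 is 1 − 0.9^10 ≈ 0.65. Here's a minimal sketch that checks the arithmetic by simulation; the ten-lab setup and the 0.10 cutoff come from the scenario above, and everything else (lab count variable names, simulation size) is illustrative:

```python
import random

# Assumed setup (from the scenario above): ten labs each run the same
# experiment on a true null effect, and "success" means p <= 0.10.
ALPHA = 0.10    # the "about p = 0.10" cutoff from the scenario
N_LABS = 10
N_SIMS = 100_000

# Analytically: P(at least one lab clears the cutoff by chance)
#   = 1 - (1 - alpha)^n = 1 - 0.9^10 ≈ 0.651
analytic = 1 - (1 - ALPHA) ** N_LABS

# Simulation: under a true null, p-values are uniform on [0, 1],
# so each lab "succeeds" independently with probability ALPHA.
hits = sum(
    any(random.random() <= ALPHA for _ in range(N_LABS))
    for _ in range(N_SIMS)
)

print(f"analytic:  {analytic:.3f}")       # ~0.651
print(f"simulated: {hits / N_SIMS:.3f}")  # ~0.65
```

In other words, if ten true-null versions of the experiment get run, better-than-even odds say one of them clears a p = 0.10 bar and is the one that gets published.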

 

The most representative real-world example I've seen lately is Bem/psi, but as a pedagogical example I find it too distracting. The ideal example would involve a more sympathetic effect, one a sharp student or outsider would look at and say, "Yeah, I'd also have thought that effect would come through."

 

Thanks.

2 comments:

The extremely well-cited Perry Preschool Project is probably a close example.

Science in cases like this is seldom clean; there's often a lot of conflicting information. If you look at the debate over priming research, there are suggestions that false effects have been found, but it's far from clean.

> (About) nine people independently get the idea for the same experiment because it seems like the effect should be there

People seldom do the exact same experiment. In Bem's case, his focus on pornographic images may very well have been unique, and not something nine other groups also tried.