
Comment author: LauraABJ 06 December 2009 02:03:10AM *  4 points [-]

If we assume that a fraction p of tests of a false hypothesis come out as false positives, and that only positive results are published, then the question becomes how many scientists are trying to prove (or disprove) the same hypothesis. If 1000 scientists test whether Drug Y slows the progression of Alzheimer's disease, and p < 0.01 is required for publication, then even if the drug does nothing we expect about 1000 × 0.01 = 10 false-positive publications, so we should want to see substantially more than 10 independent publications supporting this result before we believe it. Things would be so much easier if negative results were given as much weight as positive ones... Can anyone think of a good way of calibrating for the publication bias towards positives?
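[A minimal sketch of the arithmetic in the comment above, assuming the hypothetical scenario it describes (1000 independent labs, a true null, and a p < 0.01 publication threshold). The function name and numbers are illustrative, not from any real study.]

```python
from math import comb

def prob_at_least_k_positives(n_labs, alpha, k):
    """Probability that at least k of n_labs independent tests reach
    p < alpha when the drug actually does nothing, i.e. the upper tail
    of a Binomial(n_labs, alpha) count of false positives."""
    return sum(comb(n_labs, i) * alpha**i * (1 - alpha)**(n_labs - i)
               for i in range(k, n_labs + 1))

n_labs, alpha = 1000, 0.01
# Expected false-positive publications under the null:
print(n_labs * alpha)  # 10.0
# Even "more than 10" positive reports are quite likely by chance alone:
print(prob_at_least_k_positives(n_labs, alpha, 11))
```

The point of the second number: with 1000 labs and a 0.01 threshold, seeing 11 or more positive results is still close to a coin flip under the null, which is why 10 supporting publications alone should not be persuasive.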

Comment author: rps 07 December 2009 04:05:37PM 3 points [-]

This is what they do in the wretched hive of scum and villainy that is medical research: http://www.cochrane-net.org/openlearning/HTML/mod15-3.htm