I have had the following situation happen several times during my research career: I write code to analyze data; there is some expectation about what the results will be; after running the program, the results are not what was expected; I go back and carefully check the code to make sure there are no errors; sometimes I find an error.
No matter how careful you are when it comes to writing computer code, I think you are more likely to find a mistake if you think there is one. Unexpected results lead one to suspect a coding error more than expected results do.
Researchers usually have at least general expectations about what they will find (e.g., the drug will not increase the risk of the disease; the toxin will not decrease the risk of cancer).
Consider the following graphic:
Here, the green region is consistent with our expectations. For example, if we expect a relative risk (RR) of about 1.5, we might not be too surprised if the estimated RR is between (say) 0.9 and 2.0. Anything above 2.0 or below 0.9 might make us highly suspicious of an error -- that's the red region. Estimates in the red region are likely to trigger a serious investigation for coding errors. Obviously, if no coding error is found, the paper will get submitted with the surprising results.
Error scenarios
Let's assume there is a coding error that causes the estimated effect to differ from the true effect (and assume the sample size is large enough that sampling variability can be ignored).
Consider the following scenario:
Type A. Here, the estimated value is biased, but it's within the expected range. In this scenario, error checking is probably more casual and less likely to be successful.
Next, consider this scenario:
Type B. In this case, the estimated value is in the red zone. This triggers aggressive error checking of the type that has a higher success rate.
Finally:
Type C. In this case, it's the true value that differs from our expectations. However, the estimated value is about what we would expect. This triggers casual error checking of the less-likely-to-be-successful variety.
If this line of reasoning holds, we should expect journal articles to contain errors at a higher rate when the results are consistent with the authors' prior expectations. This could be viewed as a type of confirmation bias.
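This mechanism is easy to make concrete with a toy Monte Carlo simulation. Everything below is a made-up assumption chosen for illustration (the bug rate, the "green" region, and the catch probabilities are invented numbers, not estimates); the point is only to show how surprise-conditional error checking skews the bug rate of the surviving results.

```python
import random

random.seed(1)

# Made-up assumptions, for illustration only.
P_BUG = 0.20              # chance an analysis contains a coding error
GREEN = (0.9, 2.0)        # "expected" region for the estimated relative risk
P_CATCH_CASUAL = 0.3      # chance casual checking finds a bug
P_CATCH_AGGRESSIVE = 0.9  # chance aggressive checking finds a bug

counts = {False: [0, 0], True: [0, 0]}  # surprising? -> [published, buggy]

for _ in range(100_000):
    true_rr = random.lognormvariate(0.4, 0.3)  # true effect, median ~1.5
    bug = random.random() < P_BUG
    # A bug multiplies the estimate by a random distortion factor.
    estimate = true_rr * random.lognormvariate(0.0, 0.5) if bug else true_rr

    surprising = not (GREEN[0] <= estimate <= GREEN[1])
    p_catch = P_CATCH_AGGRESSIVE if surprising else P_CATCH_CASUAL
    if bug and random.random() < p_catch:
        continue  # bug caught; drop the paper from the pool for simplicity

    counts[surprising][0] += 1
    counts[surprising][1] += bug

for surprising, (published, buggy) in counts.items():
    label = "surprising" if surprising else "expected  "
    print(f"bug rate among {label} results: {buggy / published:.3f}")
```

With these invented numbers, the bug rate among results landing in the expected region comes out noticeably higher (roughly double) than among surprising results -- exactly the confirmation-bias pattern described above.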
How common are programming errors in research?
There are many opportunities for hard-to-detect errors to occur. For large studies, there might be hundreds of lines of code related to database creation, data cleaning, etc., plus many more lines of code for data analysis. Studies also typically involve multiple programmers. I would not be surprised if at least 20% of published studies include results that were affected by at least one coding error. Many of these errors probably had a trivial effect, but I am sure others did not.
So, before you even run your code on the actual data, choose several test values in each region, and for each test value generate the data you would expect if that test value were true and validate that your analysis code recovers the test value from the generated data. When the analysis code passes all tests, run it against the real data.
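Here is a minimal sketch of that validation loop, assuming the analysis of interest is a simple two-group relative-risk estimate (the function names and the specific test values are hypothetical, chosen to span the green region, the red region, and the boundaries):

```python
import random

def estimated_rr(exposed, unexposed):
    """The analysis code under test: relative risk from two lists of 0/1 outcomes."""
    risk_exposed = sum(exposed) / len(exposed)
    risk_unexposed = sum(unexposed) / len(unexposed)
    return risk_exposed / risk_unexposed

def simulate_outcomes(n, risk):
    """Generate n Bernoulli outcomes with the given event risk."""
    return [1 if random.random() < risk else 0 for _ in range(n)]

random.seed(0)
baseline_risk = 0.10
n = 200_000  # large n so sampling noise is negligible

# Test values spanning the expected range, the boundaries, and beyond.
for true_rr in (0.5, 0.9, 1.5, 2.0, 3.0):
    unexposed = simulate_outcomes(n, baseline_risk)
    exposed = simulate_outcomes(n, baseline_risk * true_rr)
    est = estimated_rr(exposed, unexposed)
    assert abs(est - true_rr) < 0.05, f"failed to recover RR={true_rr}: got {est:.3f}"
    print(f"true RR {true_rr:.2f} -> estimated {est:.3f}")
```

If the analysis code silently mangles the data (say, swaps the exposure groups or drops events), one of these assertions should fail before any real data is ever touched.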
I applaud this idea, though with one addition: ideally, you should choose test values and expected results before you even write any of the simulation code.
This is part of a really helpful bug-preempting technique called Test-Driven Development (or Behavior-Driven Development, depending on who you ask and how fond they are of particular parts of this idea). Before you add anything to your main code, you first write a test for that code's functionality and make sure the test fails in the expected way. Once it does, you can start writing code to make it pass.
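As a concrete illustration, here is a single-file sketch of that red-green loop using pytest and a hypothetical data-cleaning helper `clean_rr` (invented for the example, not from the post). In real TDD the test functions come first and are run to confirm they fail; only then is the implementation added:

```python
# Run with: pytest tdd_sketch.py
import math
import pytest

def clean_rr(raw: str) -> float:
    """Written only after the tests below were seen to fail: parse a
    relative-risk string; '' becomes NaN, nonpositive values are rejected."""
    if raw == "":
        return math.nan
    value = float(raw)
    if value <= 0:
        raise ValueError(f"relative risk must be positive, got {value}")
    return value

# These tests were written first (and failed, since clean_rr did not yet exist).
def test_valid_rr_passes_through():
    assert clean_rr("1.5") == 1.5

def test_nonpositive_rr_rejected():
    with pytest.raises(ValueError):
        clean_rr("-0.2")

def test_missing_value_becomes_nan():
    assert math.isnan(clean_rr(""))
```

Seeing each test fail first confirms that the test is actually capable of detecting the missing behavior, rather than passing vacuously.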