DSimon comments on Error detection bias in research - Less Wrong

54 Post author: neq1 22 September 2010 03:00AM




Comment author: JGWeissman 22 September 2010 05:21:26PM 3 points [-]

So, before you even run your code on the actual data, choose several test values in each region, and for each test value generate the data you would expect if that test value were true and validate that your analysis code recovers the test value from the generated data. When the analysis code passes all tests, run it against the real data.
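A minimal sketch of this validation loop, in Python. The names (`simulate_data`, `analyze`) and the toy analysis (estimating a mean) are hypothetical stand-ins for whatever the real analysis does:

```python
import random

def simulate_data(true_mean, n=10000, seed=0):
    # Generate synthetic data under a known "true" test value.
    rng = random.Random(seed)
    return [rng.gauss(true_mean, 1.0) for _ in range(n)]

def analyze(data):
    # The analysis code under test; here, a simple mean estimate.
    return sum(data) / len(data)

# Choose several test values in each region of interest, and check
# that the analysis recovers each one from its generated data,
# before ever touching the real data.
for true_mean in [-5.0, 0.0, 3.0]:
    estimate = analyze(simulate_data(true_mean))
    assert abs(estimate - true_mean) < 0.05, (true_mean, estimate)
```

Only once every test value round-trips like this would you point `analyze` at the actual data.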

Comment author: DSimon 23 September 2010 05:41:19PM 0 points [-]

I applaud this idea, though with one addition: ideally you should choose test values and expected results before you even write any of the simulation code.

This is part of a really helpful bug-preempting technique called Test-Driven Development (or Behavior-Driven Development, depending on who you ask and how fond they are of particular parts of this idea). Before you add anything to your main code, you first write a test for that code's functionality and make sure the test fails in the expected way. Once it does, you can start writing code to make it pass... but you aren't allowed to write code unrelated to that specific goal.
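The red-green cycle described above can be sketched in a few lines of Python. The function name `slope` and the test are made up for illustration:

```python
# Step 1: write the test first, and confirm it fails in the expected
# way; here a NameError, because `slope` doesn't exist yet.
def test_slope():
    assert slope((0, 0), (2, 4)) == 2.0

try:
    test_slope()
    failed_as_expected = False
except NameError:
    failed_as_expected = True
assert failed_as_expected  # the test must fail before we write code

# Step 2: write only the code needed to make that specific test pass.
def slope(p1, p2):
    (x1, y1), (x2, y2) = p1, p2
    return (y2 - y1) / (x2 - x1)

test_slope()  # now passes
```

In practice you'd use a test runner like `unittest` or `pytest` rather than bare asserts, but the discipline is the same: red first, then just enough code for green.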

This technique makes sure not only that your code is thoroughly tested, but also that your tests do what you think they do, since they must fail in the expected way before you attempt to make them pass. I've also found that it helps a great deal in scoping and designing, since you must think thoroughly about how a piece of code will be used and what output it must produce before you write it.