AstroCJ comments on Error detection bias in research - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Ah, medium to strong disagree. I'm not far into my scientific career in $_DISCIPLINE, but any paper introducing a new "standard code" (i.e. one intended to be used more than once) has an extensive section explaining how the code has accurately reproduced analytic results, or agreed with previous simulations, in a case simpler than the one currently being analysed. Most codes also seem to be open-source, since it's good for your cred when people write papers saying "Using x's y code, we analyse..." - which means the codes need to be clearly written and commented. That's not a guarantee against pernicious bugs, but it certainly helps. This error-checking setup is also convenient for the people generating analytic solutions, since they can find something pretty and say "Oh, people can use this to test their code."
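As a minimal sketch of what such a verification test can look like (the heat-equation test case, solver, and tolerance here are my own illustrative choices, not anything from a specific paper): evolve a simple finite-difference scheme and check it against a known analytic solution.

```python
import numpy as np

# Illustrative verification test: an explicit finite-difference solver for the
# 1-D heat equation u_t = u_xx on [0, 1] with u = 0 at both ends, checked
# against the analytic solution u(x, t) = sin(pi x) * exp(-pi^2 t).

def solve_heat(nx=101, t_end=0.1):
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    dt = 0.4 * dx**2          # respect the explicit stability limit dt <= dx^2 / 2
    u = np.sin(np.pi * x)     # initial condition matching the analytic mode
    t = 0.0
    while t < t_end:
        u[1:-1] += dt / dx**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])
        t += dt
    return x, u, t

x, u_num, t = solve_heat()
u_exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * t)
err = np.max(np.abs(u_num - u_exact))
assert err < 1e-3, f"solver disagrees with analytic solution: max error {err:.2e}"
print(f"max error vs analytic solution at t = {t:.3f}: {err:.2e}")
```

The same pattern scales up: whatever the production runs look like, the paper's verification section reports tests like this against whatever analytic or previously simulated cases are available.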
Of course, this isn't infallible, but sometimes you have to do 10 bad simulations before you can do 1 good one.
Except for those damned lazy biologists, of course.