mfb comments on Diseased disciplines: the strange case of the inverted chart - Less Wrong
Comments (150)
See this paper (PDF), page 54, for one study that did this type of analysis, the only such study I'm currently aware of.
If you're interested, you might want to graph the numerical results found there: you'll find that they totally fail to match up with the standard exponential curve.
And, again, it's worth pausing to consider the processes that generate the data: you generally don't know that a bug has been introduced at the moment it's introduced (otherwise you'd fix it straightaway), so there is a lot of opportunity for measurement bias there.
Similarly, you don't always know how much a bug has cost, because many activities make up the economic cost of defects: lost customers, support calls, figuring out the bug, figuring out the fix, changing the code, documenting things, training developers to avoid the bug in future... Which of these you count and don't count is rarely reported in the literature.
It's not even clear that you can always tell unambiguously what counts as a "bug". The language in the industry is woefully imprecise.
Thanks for the link. The table shows another problem: bugs introduced in different phases are different in kind. How do you compare "1 bug" in the preliminary design with "1 bug" of the style "if(a=b)"? Plotting a column as a graph can be interesting (which is what the original paper did?), but plotting a row looks nearly pointless.