DSimon comments on Diseased disciplines: the strange case of the inverted chart - Less Wrong
ISTM that you're making a great argument that the defects claim belongs in the same category as the "10% of the brain" myth. Let me explain.
To a layman not well versed in neuroanatomy, the 10% claim has surface plausibility because of the association between brain size and intelligence (smaller-brained animals are, in general, dumber), and because of the observed fact that some humans are massively smarter than others (Einstein being the paradigmatic case). Therefore, someone with a brain the same size as Einstein's but only a "normal" IQ must not be using all of that grey matter.
Of course, as soon as you learn more about what we actually know of how the brain works - for instance the results on modularity, or the way simulated neural networks perform their functions - the claim loses its plausibility, as you start asking which 90% we're supposedly not using, and so on.
Similarly, someone with a poor understanding of "defects" assumes that they are essentially physical in nature: they are like a crack in cement, and software seems like layer upon layer of cement, so that if you need to reach back to repair a crack after it's been laid over, that's obviously harder to fix.
But software defects are nothing like defects in physical materials. The layers of which software is built are all equally accessible, and software doesn't crack or wear out. The problem is a lot more like writing a novel in which a heroine is dark-haired, complete with lots of subtle allusions or maybe puns referencing that hair color, and then deciding that she is blonde after all.
As you observe, the cost of fixing a defect is not a single category, but in fact decomposes into many costs with fuzzy boundaries:
These costs are going to vary greatly according to the particular context. The cost of testing depends on the type of testing, and each type of testing catches different types of bugs. The cost of releasing new versions is very high for embedded software, very low for Web sites. The cost of poor quality is generally low in things like games, because nobody's going to ask for their money back if Lara Croft's guns pass through a wall or two; but it can be very high in automated trading software (I've personally touched software that had cost its owners millions in bug-caused bad trades). Some huge security defects go undetected for a long time, causing zero damage until they are found (look up the 2008 Debian OpenSSL bug).
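The decomposition above can be sketched as a toy cost model. The category names and all the numbers here are purely illustrative assumptions, not measured data; the point is only that context determines which component dominates:

```python
def total_defect_cost(detection, diagnosis, fix, retest, release, residual_damage):
    """Sum the (fuzzy-boundaried) component costs of fixing one defect.

    All arguments are in arbitrary cost units; the categories are a
    hypothetical decomposition, not an established taxonomy.
    """
    return detection + diagnosis + fix + retest + release + residual_damage

# A website bug: shipping a fix is nearly free, so release cost barely matters.
web_bug = total_defect_cost(detection=2, diagnosis=3, fix=1,
                            retest=2, release=0.1, residual_damage=5)

# The same defect in embedded firmware: release cost dominates everything else.
embedded_bug = total_defect_cost(detection=2, diagnosis=3, fix=1,
                                 retest=2, release=50, residual_damage=5)
```

With identical detection, diagnosis, and fix effort, the embedded defect comes out several times more expensive purely because of the release component; the single-number "cost to fix a defect" hides which term is doing the work.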
The one thing that we know (or strongly suspect) from experience always increases monotonically as we add more code is "the cost of changing the rest of the software to reflect the modification". This increase applies whatever change is being made, which is why the "cost of change curve" is plausible. (The funny part of the story is that there never was a "cost of change curve"; it's all a misunderstanding - the ebook tells the whole story.)
Of course, someone who is a) sophisticated enough to understand the decomposition and b) educated enough to have read about the claim is likely to be a programmer, which means that by the availability heuristic they're likely to think that the cost they know best is what dominates the entire economic impact of defects.
In fact, this is very unlikely to be the case in general.
And in the one case where I have seen a somewhat credible study with detailed data (the Hughes Aircraft study), the data ran counter to the standard exponential curve: fixing a defect was expensive during the coding phase, but the average cost per defect then went down.
This is the second time in this thread that the analogy of software design as fiction writing has appeared, and I really quite like it. If it's not already popular, maybe it should be.
In my experience most user requirements documents are works of fantasy. It's our job as programmers to drag the genre closer to science fiction. Software testing is more like gritty hard-boiled detective fiction.
It's at least somewhat popular. I know that Paul Graham has often drawn comparisons between the two ideas.