All of stefanhendriks's Comments + Replies

I've read the paper you refer to; very interesting data indeed. The quote is one of five possible explanations of why the results differ so much, but it certainly is a good possibility.

This post has dialed up my interest/doubt knob for now. I will question more 'facts' in the SE world from now on.

About Sommerville: his website is http://www.comp.lancs.ac.uk/computing/resources/IanS/

The book I refer to: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE8/index.html

You can download presentations of his chapters here: http://www.comp.lancs.ac.uk/computi... (read more)

You make me curious about your book; perhaps I'll read it. Thanks for the extensive answer. Couldn't agree more with what you're saying. I can see why this 'cost of change curve' might actually not exist at all.

It made me wonder: I recently found a graph by Sommerville telling exactly this story about the cost of change. I wonder what his source for that graph is... ;)

0 Morendil
I'm interested in your source for that graph. Googling a bit for stuff by Sommerville, I come across a pie chart for "distribution of maintenance effort" which has all the hallmarks of a software engineering meme: old study, derived from a survey (such self-reports are often unreliable owing to selection bias and measurement bias), but still held to be current and generally applicable and cited in many books even though more recent research casts doubt on it.

Here's a neat quote from the linked paper (LST is the old study):

I love it that 10% of managers can provide a survey response based on "no data". :)

An interesting read, to begin with. Nice analogy. I support the idea that claims made (in any field) should have data to back them up.

Still, at this point I wonder: even though there is no 'hard scientific data' to support it, don't we have enough experience to know that once software is in operation, bugs found there cost more to fix than they would have initially?

(In my opinion, bugs are also features that do not meet expectations.)

Even though the chart may be taken out of context, and stretched a bit too far, I don't think it belongs with infamous quotes like ... (read more)

8 Morendil
ISTM that you're making a great argument that the defects claim is in the same category as the "10% of the brain" claim. Let me explain.

To a layman not well versed in neuroanatomy, the 10% thing has surface plausibility because of the association between brain size and intelligence (smaller-brained animals are dumber, in general), and because of the observed fact that some humans are massively smarter than others (e.g. Einstein, the paradigmatic case). Therefore, someone with the same size brain who's only "normal" in IQ compared to Einstein must not be using all of that grey matter. Of course, as soon as you learn more of what we actually know about how the brain works, for instance the results on modularity, the way simulated neural networks perform their functions, and so on, the claim loses its plausibility, as you start asking which 90% we're supposed not to be using, and so on.

Similarly, someone with a poor understanding of "defects" assumes that they are essentially physical in nature: they are like a crack in cement, and software seems like layer upon layer of cement, so that if you need to reach back to repair a crack after it's been laid over, that's obviously harder to fix. But software defects are nothing like defects in physical materials. The layers of which software is built are all equally accessible, and software doesn't crack or wear out. The problem is a lot more like writing a novel in which a heroine is dark-haired, complete with lots of subtle allusions or maybe puns referencing that hair color, and then deciding that she is blonde after all.

As you observe, the cost of fixing a defect is not a single category, but in fact decomposes into many costs with fuzzy boundaries:

* the cost of observing the erroneous behaviour in the first place (i.e. testing, whether a tester does it or a user)
* the cost of locating the mistake in the code
* the cost of devising an appropriate modification
* the cost of changing the rest of the