I've read the paper you refer to; very interesting data indeed. The quote is one of five possible explanations of why the results differ so much, but it certainly is a good possibility.
This post has turned up my interest/doubt knob. I will question more 'facts' in the SE world from now on.
About Sommerville, his website: http://www.comp.lancs.ac.uk/computing/resources/IanS/
The book I refer to: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE8/index.html
You can download presentations of his chapters here: http://www.comp.lancs.ac.uk/computi...
You make me curious about your book; perhaps I'll read it. Thanks for the extensive answer. Couldn't agree more with what you're saying. I can see why this 'cost of change curve' might actually not exist at all.
It made me wonder: I recently found a graph by Sommerville telling exactly this story about the cost of change. I wonder what his source for that graph is .. ;)
Interesting read to begin with. Nice analogy. I do support the idea that claims made (in any field) should have data to back them up.
Still, even though there is no 'hard scientific data' to back it up, don't we have enough experience to know that bugs found once software is in operation cost more to fix than bugs found early on?
(In my opinion, bugs are also features that do not meet expectations.)
Even though the chart may be taken out of context and pushed a bit too far, I don't think it belongs with infamous quotes like &...
Since I think more people should know about this, I have posted a question about it on Stack Overflow: http://stackoverflow.com/questions/9182715/is-it-significantly-costlier-to-fix-a-bug-at-the-end-of-the-project