stefanhendriks

I've read the paper you refer to; very interesting data indeed. The quote is one of five possible explanations of why the results differ so much, but it certainly is a good possibility.

This post has turned up my interest/doubt knob for now. I will question more 'facts' in the SE world from now on.

About Sommerville: his website is http://www.comp.lancs.ac.uk/computing/resources/IanS/

The book I refer to: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE8/index.html

You can download presentations of his chapters here: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE8/Presentations/index.html

For now I have based my findings on the presentations, since I don't have the book nearby. You can look them up yourself (download the chapters from the link above).

Chapter 7 says:

"Requirements error costs are high, so validation is very important. Fixing a requirements error after delivery may cost up to 100 times the cost of fixing an implementation error."

Chapter 21, on Software Maintenance, claims (might need to verify this as well? ;)):

"[Maintenance costs are] usually greater than development costs (2 to 100 times, depending on the application)."

Since I don't have the book nearby, I can't say for certain where it was stated, but I'm fairly sure it was in that book.
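
Just to make the magnitude of those claimed multipliers concrete, here is a back-of-envelope sketch. Only the multipliers (100x, and 2x to 100x) come from the slides; every base figure below is a number I made up purely for illustration:

```python
# Back-of-envelope sketch of what the quoted multipliers would imply.
# Only the multipliers come from the slides; the base figures are
# purely hypothetical assumptions for illustration.

IMPLEMENTATION_FIX_COST = 100   # assumed cost (euros) of fixing an error during implementation
DEVELOPMENT_COST = 50_000       # assumed total development cost of a small project

# Chapter 7: a requirements error found after delivery may cost up to
# 100 times an implementation error.
requirements_fix_after_delivery = 100 * IMPLEMENTATION_FIX_COST

# Chapter 21: maintenance costs are 2 to 100 times development costs,
# depending on the application.
maintenance_low = 2 * DEVELOPMENT_COST
maintenance_high = 100 * DEVELOPMENT_COST

print(f"One late requirements error: up to {requirements_fix_after_delivery:,} euros")
print(f"Claimed maintenance range: {maintenance_low:,} to {maintenance_high:,} euros")
```

Even if the multipliers were right, a 2x to 100x range is so wide that it hardly constrains anything.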

You make me curious about your book; perhaps I'll read it. Thanks for the extensive answer. Couldn't agree more with what you're saying. I can see why this 'cost of change curve' might not actually exist at all.

It made me wonder: I recently found a graph by Sommerville telling exactly this cost-of-change story. I wonder what his source for that graph is .. ;)

An interesting read, to begin with. Nice analogy. I support the idea that claims made (in any field) should have data to back them up.

At this point, even though there is no 'hard scientific data' to back the claim, don't we have enough experience to know that once software is in operation, bugs found there cost more to fix than they would have initially?

(In my opinion, bugs also include features that do not meet expectations.)

Even though the chart may be taken out of context and pushed a bit too far, I don't think it belongs with infamous quotes like "you only use 10% of your brain". That claim, by the way, is easier to "prove" wrong: you could measure brain activity and calculate what percentage of the whole is actually used. Software, however, is much more complex.

It is much harder to prove whether defects actually cost more to fix later than to fix early. I don't think the bugs themselves are more costly. Sure, some bugs will be more expensive to fix because of the increased complexity (compared to the not-yet-released version), but most of the cost will come from missed opportunities. A concrete example would be an e-commerce website that only supports Visa cards, while the customer expected it to also support Mastercard and other credit card vendors. Clearly the website will miss income, and the cost of this missed opportunity will be much greater than the cost of actually implementing the support. (Yes, you need to back this up with numbers, but you get the point; see the rough sketch below.)
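
To put some (entirely made-up) numbers on that example, here is a minimal sketch; every figure is an assumption I picked for illustration, not real data:

```python
# Hypothetical back-of-envelope numbers for the payment-support example above.
# None of these figures are real; they only illustrate the shape of the argument.

monthly_orders = 10_000   # assumed order attempts per month
avg_order_value = 75.0    # assumed average order value in euros
non_visa_share = 0.40     # assumed fraction of customers without a Visa card
months_until_fixed = 3    # assumed time before Mastercard support ships

lost_revenue = monthly_orders * avg_order_value * non_visa_share * months_until_fixed

implementation_cost = 2 * 20 * 8 * 80  # assumed: 2 devs, 20 days, 8 h/day, 80 euros/h

print(f"Missed revenue from the 'defect': {lost_revenue:,.0f} euros")
print(f"Cost of implementing the missing support: {implementation_cost:,.0f} euros")
# Under these assumptions the missed opportunity (900,000 euros) dwarfs
# the implementation cost (25,600 euros) -- which is the point of the example.
```

Of course, different numbers could flip the conclusion; the point is only that the opportunity cost, not the fix itself, is what dominates.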

Kudos for pointing out this 'flaw'; it takes some balls to do so ;)