Morendil comments on Diseased disciplines: the strange case of the inverted chart - Less Wrong

47 Post author: Morendil 07 February 2012 09:45AM



Comment author: Morendil 07 February 2012 08:54:28AM 0 points [-]

I'm interested in your source for that graph.

Googling a bit for material by Sommerville, I come across a pie chart for "distribution of maintenance effort" which has all the hallmarks of a software engineering meme: an old study, derived from a survey (such self-reports are often unreliable owing to selection bias and measurement bias), yet still held to be current and generally applicable, and cited in many books even though more recent research casts doubt on it.

Here's a neat quote from the linked paper (LST is the old study):

(Possibly) participants in the survey from which LST was derived simply did not have adequate data to respond to the survey. The participating software maintenance managers were asked whether their response to each question was based on reasonably accurate data, minimal data, or no data. In the case of the LST question, 49.3% stated that their answer was based on reasonably accurate data, 37.7% on minimal data, and 8.7% on no data. In fact, we seriously question whether any respondents had ‘‘reasonably accurate data’’ regarding the percentage of effort devoted to the categories of maintenance included in the survey, and most of them may not have had even ‘‘minimal data.’’

I love it that 10% of managers can provide a survey response based on "no data". :)
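Incidentally, the three reported percentages don't sum to 100. A quick sanity check (assuming the remaining respondents simply didn't answer that question, which the paper doesn't state):

```python
# Reported LST survey responses on data quality, from the quoted passage.
accurate = 49.3  # "reasonably accurate data"
minimal = 37.7   # "minimal data"
no_data = 8.7    # "no data"

total = round(accurate + minimal + no_data, 1)
unaccounted = round(100 - total, 1)
print(total)        # 95.7
print(unaccounted)  # 4.3
```

So about 4.3% of responses are unaccounted for in the quoted breakdown.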

Comment author: TheOtherDave 08 February 2012 04:28:30AM 1 point [-]

I love it that 10% of managers can provide a survey response based on "no data"

Far more than 10% of managers do that routinely. The interesting thing is that as many as 10% admitted it.

Comment author: stefanhendriks 07 February 2012 07:22:43PM 1 point [-]

I've read the paper you refer to; very interesting data indeed. The quote is one of five possible explanations the paper offers for why the results differ so much, but it certainly is a plausible one.

This post has turned up my interest/doubt knob for now. I will question more 'facts' in the SE world from now on.

About Sommerville: his website is http://www.comp.lancs.ac.uk/computing/resources/IanS/

The book I refer to: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE8/index.html

You can download presentations of his chapters here: http://www.comp.lancs.ac.uk/computing/resources/IanS/SE8/Presentations/index.html

I have based my findings on the presentations for now, since I haven't got the book nearby. You can look them up yourself (download the presentations from the link above).

Chapter 7 says:

• Requirements error costs are high, so validation is very important.
• Fixing a requirements error after delivery may cost up to 100 times the cost of fixing an implementation error.

Chapter 21, on Software Maintenance, claims (might need to verify this as well? ;)):

[Maintenance costs are] usually greater than development costs (2× to 100×, depending on the application).

Because I don't have the book nearby, I can't tell for certain where it was stated, but I'm pretty sure it was in that book.