Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software] & [THE DARK NIGHT OF THE SOUL]

From: Steve Tockey < >
Date: Tue, 25 Jun 2013 19:22:00 +0000

Les,

"Can you cite your source for the "study of 18 production applications"? That would be interesting."

Yes: Mark Schroeder, "A Practical Guide to Object-Oriented Metrics", IT Pro, Nov/Dec 1999

"Back in the day I provided a McCabe complexity analysis service. The worst offenders were real-time operating systems."

The company I work for provides a "code quality survey" service. We've been doing these kinds of surveys on the order of one a week for the last 5-7 years. So we've seen millions and millions of lines of code in a variety of application domains. Let's just say that we've seen our fair share of good code, and we've also seen our fair share of completely crap code. Unfortunately due to Non-Disclosure Agreement stuff, I can't reveal the source of the worst offender I've seen to date.

Consider a single C++ class that's over 3400 lines of code all by itself. At an average of 55 lines of code per page, we're looking at a class whose source code listing is over 60 pages long. Already I'm sensing a problem here. But wait: that single class had only one method. And it gets even worse: the cyclomatic complexity of that one method was over 2400. This means that in a single chunk of code spread over 60+ pages of source listing, roughly two out of every three lines of code are a decision of some sort.
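
To put that in perspective: McCabe's measure for a single-entry, single-exit routine is essentially the number of binary decision points plus one, so the arithmetic is easy to check. A back-of-the-envelope sketch (figures from the paragraph above; this is my own illustration, not output from our survey tooling):

    # Sanity check on the "two out of three lines" claim, using
    # v(G) ~= number of binary decision points + 1.
    lines_of_code = 3400          # size of the one method
    cyclomatic_complexity = 2400  # measured v(G) of that method

    decisions = cyclomatic_complexity - 1
    print(f"decisions per line: {decisions / lines_of_code:.2f}")
    # -> about 0.71, i.e. roughly two decisions for every three lines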

I don't even remember the application domain that code came from (it was several years ago), but I'm pretty sure it would be a serious contender for the "butt-ugliest-code-ever" award, if there were one. I think people would be hard-pressed to find worse code than that.

Now, it's probably a safe bet that a significant portion of that code was really dead (unreachable) code. But still, one would have to work pretty hard to figure out which was the dead code and which wasn't.

"I gave up hope of ever providing value to a client with a McCabe complexity analysis. I'd be handing out speeding tickets at the Indy 500. There are bigger fish to fry ... for example evasive action on the side effects of the non-declarative, weakly typed PHP language."

Agreed. Sadly so, but still agreed. As I've stated elsewhere (not here), "We are an industry of highly paid amateurs". But that doesn't mean we have to give up entirely. There are ways to deal with things like this (e.g., "design-by-contract" is a personal favorite). I'm hoping that by being persistent, and by showing the benefits of a more professional, deliberate approach to software development on real-world projects, we can start to bring some sanity to an otherwise pretty darned insane industry.
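
For anyone here who hasn't run into design-by-contract, a minimal sketch of the idea in Python (the function and its contract are made up for illustration, not taken from any client code):

    def binary_search(items: list[int], target: int) -> int:
        """Return the index of target in items, or -1 if absent.

        Contract:
          precondition:  items is sorted in ascending order
          postcondition: result == -1 or items[result] == target
        """
        assert all(a <= b for a, b in zip(items, items[1:])), \
            "precondition violated: items must be sorted"

        result = -1
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if items[mid] == target:
                result = mid
                break
            if items[mid] < target:
                lo = mid + 1
            else:
                hi = mid - 1

        assert result == -1 or items[result] == target, \
            "postcondition violated"
        return result

The point is that the code states and checks its own obligations, so a violation shows up right at the contract boundary instead of as a mystery failure three modules away.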

"Wasn't it Dostoyevsky who said that, "man is a creature who can get used to anything". I guess that's our fate."

Maybe, but then again maybe not. If people like us consistently remind others that there really is a better way, then maybe we can start to make a difference?

What I find so utterly ironic in all of this is that developing "safe" software is not only easier than the way the vast majority of coders work today, it's also significantly cheaper and quicker to develop, and a factor of 4x to 8x cheaper to maintain in the long run. Sigh...

Best,

-----Original Message-----
From: Les Chambers <les_at_xxxxxx
Date: Monday, June 24, 2013 9:53 PM
Subject: RE: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software] & [THE DARK NIGHT OF THE SOUL]

Steve
Can you cite your source for the "study of 18 production applications"? That
would be interesting.
Back in the day I provided a McCabe complexity analysis service. The worst offenders were real-time operating systems. Today the problems caused by module-level complexity are exacerbated by complex and fragmented architectures. Take the bog-standard dynamic web application as an example. Assuming we implement an allegedly simple model-view-controller architecture, there is:
- PHP controller code for each web page.

Cheers
Les

-----Original Message-----
From: Steve Tockey
Sent: Tuesday, June 25, 2013 4:21 AM
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Actually, getting the evidence isn't that tricky; it's just a lot of work. Essentially all one needs to do is run a correlation analysis (compute a correlation coefficient) between the proposed quality measure on the one hand and defect tracking data on the other.

For example, the code quality measure "cyclomatic complexity" (reference: Tom McCabe, "A Complexity Measure", IEEE Transactions on Software Engineering, December 1976) was validated many years ago by simply finding a strong positive correlation between the cyclomatic complexity of functions and the number of defects that were logged against those same functions (i.e., code in that function needed to be changed in order to repair the defect).
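
As a sketch of what that analysis looks like in practice (made-up numbers, and note that Python's statistics.correlation needs 3.10 or later):

    from statistics import correlation

    # One entry per function: its cyclomatic complexity, and the number
    # of logged defects whose fixes touched that function (made-up data).
    complexity = [3, 5, 8, 12, 15, 22, 27, 40]
    defects    = [0, 1, 1,  2,  4,  5,  9, 14]

    r = correlation(complexity, defects)  # Pearson's r
    print(f"r = {r:.2f}")
    # A strong positive r over a large code base is exactly the kind of
    # evidence that validates a measure against defect-tracking data.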

According to one study of 18 production applications, code in functions with cyclomatic complexity <=5 was about 45% of the total code base but this code was responsible for only 12% of the defects logged against the total code base. On the other hand, code in functions with cyclomatic complexity of >=15 was only 11% of the code base but this same code was responsible for 43% of the total defects. On a per-line-of-code basis, functions with cyclomatic complexity >=15 have more than an order of magnitude increase in defect density over functions measuring <=5.
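
The order-of-magnitude figure falls straight out of those percentages, if anyone wants to check:

    # Defect density = (share of defects) / (share of code).
    low_density  = 0.12 / 0.45   # CC <= 5:  12% of defects, 45% of code
    high_density = 0.43 / 0.11   # CC >= 15: 43% of defects, 11% of code

    print(f"density ratio: {high_density / low_density:.1f}x")
    # -> about 14.7x, i.e. "more than an order of magnitude"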

What I find interesting, personally, is that complexity metrics for object-oriented software have been around for about 20 years, and yet nobody (to my knowledge) has done any correlation analysis at all (or, at a minimum, they have not published their results).

The other thing to remember is that such measures consider only the "syntax" (structure) of the code. I consider this to be *necessary* for code quality, but far from *sufficient*. One also needs to consider the "semantics" (meaning) of that same code. For example, to what extent is the code based on reasonable abstractions? To what extent does the code exhibit good encapsulation? What are the cohesion and coupling of the code? Has the code used "design-to-invariants / design-for-change"? One can have code that's perfectly structured in the syntactic sense and yet is garbage from the semantic perspective. Unfortunately, there isn't a way (that I'm aware of, anyway) to do the necessary semantic analysis in an automated fashion. Instead, competent software professionals need to look at the code and assess it from the semantic perspective.
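
A contrived illustration of the syntax-versus-semantics point (my own example, not from any surveyed code base): both versions below have rock-bottom cyclomatic complexity, but only one of them is defensible code.

    from dataclasses import dataclass

    # Syntactically spotless (cyclomatic complexity 1), semantically poor:
    # every caller must simply know that index 0 is dollars, index 1 cents.
    def upd(d):
        return (d[0] + d[1] // 100, d[1] % 100)

    # Same structure, same complexity, but a real abstraction: the meaning
    # of the data is encapsulated behind named fields and a named operation.
    @dataclass
    class Money:
        dollars: int
        cents: int

        def normalized(self) -> "Money":
            """Carry overflow in the cents field into whole dollars."""
            return Money(self.dollars + self.cents // 100, self.cents % 100)

No structural metric will flag the first version; a human reviewer will, immediately.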

So while I applaud efforts like SQALE and others like it, one needs to be careful: they're only a part of the whole story. More work--a lot more--needs to be done before someone can reasonably say that some particular code is "high quality".

Regards,

-----Original Message-----
From: Peter Bishop
Date: Friday, June 21, 2013 6:04 AM
To: <systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

I agree with Derek.

Code quality is only a means to an end.
We need evidence to show the means actually helps to achieve the ends.

Getting this evidence is pretty tricky, as parallel developments for the same project won't happen.
But you might be able to infer something on average over multiple projects.

Derek M Jones wrote:

> Thierry,
> 
>> To answer your questions:
>> 1) Yes, there is some objective evidence that there is a correlation
>> between a low SQALE index and quality code.
> 
> How is the quality of code measured?
> 
> Below you say that SQALE DEFINES what is "good quality" code.
> In this case it is to be expected that a strong correlation will exist
> between a low SQALE index and its own definition of quality.
> 
>> For example ITRIS has conducted a study where the "good quality" code
>> is statistically linked to a lower SQALE index, for industrial
>> software actually used in operations.
> 
> Again how is quality measured?
> 
>> No, there is not enough evidence, we wish there would be more people
>> working on getting the evidence.
> 
> Is there any evidence apart from SQALE correlating with its own
> measures?
> 
> This is a general problem, lots of researchers create their own
> definition of quality and don't show a causal connection to external
> attributes such as faults or subsequent costs.
> 
> Without running parallel development efforts that
> follow/don't follow the guidelines it is difficult to see how
> reliable data can be obtained.
> 

--

Peter Bishop
Chief Scientist
Adelard LLP
Exmouth House, 3-11 Pine Street, London, EC1R 0JH
http://www.adelard.com
Recep: +44-(0)20-7832 5850
Direct: +44-(0)20-7832 5855



The System Safety Mailing List
systemsafety_at_xxxxxx

Received on Tue Jun 25 2013 - 21:22:12 CEST