Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

From: Les Chambers < >
Date: Wed, 3 Jul 2013 10:27:08 +1000


Steve

Code evaluation is sometimes possible in unit-level testing when you're dealing with smaller code bodies, especially if you have automated tools. Of course it should be done in the development environment. I agree with you that it's impossible in integration and system-level testing. I heard an extreme case the other day: a Google tester related that, in his development career, he was making a small change to some server software when he noticed there were more than 100,000 artefacts in the build. Companies like Google face extreme real-world conditions. The same man went to the managers of Google's server farms and laid out his projected machine requirements for testing. He said, "These guys were pretty blunt. They said, fine, you can have all that machine time, but we'll have to shut Google down to give it to you." So the test team came up with an innovative solution: they tested every second mod. I hope nobody's life is on the line over a Google query.
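Coming back to my first point about automated tools at unit level, here's the flavour of check I have in mind - a toy sketch of my own, assuming a Python code base; the branch-node list and the threshold are invented for the example, so treat it as an illustration rather than a real tool:

    # Toy sketch only: approximate per-function cyclomatic complexity for one
    # Python source file using the standard library's ast module. The branch
    # node list and the threshold of 10 are invented for illustration.
    import ast
    import sys

    BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

    def approx_complexity(func_node):
        # 1 for the entry path, +1 per branch node, +1 per extra and/or operand.
        # (Crude: branches in nested functions get counted against the parent.)
        score = 1
        for node in ast.walk(func_node):
            if isinstance(node, BRANCH_NODES):
                score += 1
            elif isinstance(node, ast.BoolOp):
                score += len(node.values) - 1
        return score

    def check_file(path, threshold=10):
        with open(path) as src:
            tree = ast.parse(src.read(), filename=path)
        offenders = []
        for node in ast.walk(tree):
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                score = approx_complexity(node)
                if score > threshold:
                    offenders.append((node.name, score))
        return offenders

    if __name__ == "__main__":
        offenders = check_file(sys.argv[1])
        for name, score in offenders:
            print(f"{name}: approximate cyclomatic complexity {score}")
        sys.exit(1 if offenders else 0)

Hang a check like that off the build and the reviewer at least knows where the decision-dense code is before anyone starts writing test cases for it.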

You just can't get away from it: high-quality software is expensive.

Les  

From: Steve Tockey [mailto:Steve.Tockey_at_xxxxxx
Sent: Tuesday, July 2, 2013 8:47 AM
To: Les Chambers
Cc: martyn_at_xxxxxx
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Les,

I think that sounds good in theory but it may not work effectively in practice. The issue is that almost all of the test teams I know don't have inside (aka "white box") knowledge of the software they are testing. They are approaching it purely from an external ("black box") perspective. They can't tell if the code has high cyclomatic complexity or not.  

In principle, testers should be given the authority to reject a piece-of-crap product. That's their job in all other industries. But in software (non-critical software, mind you), testing is usually window dressing that gets overridden if it means threatening promised ship dates.

Sent from my iPad

On Jul 1, 2013, at 2:36 PM, "Les Chambers" <les_at_xxxxxx> wrote:

Steve

One way to achieve this is to empower test teams. Management issues an encyclical that SOFTWARE IS NOT A GIVEN. That is: "If it's too complex to test effectively, reject it. Don't waste your time composing feckless tests for crap software. Send it back to its heathen authors."

Kill it before it gets into production.

Les

On 02/07/2013, at 3:16 AM, Steve Tockey <Steve.Tockey_at_xxxxxx> wrote:

Martyn,

My preference would be that things like low cyclomatic complexity be considered basic standards of professional practice, well before one even starts talking about a safety case. Software with ridiculously high complexity shouldn't even be allowed to start making a safety case in the first place.

From: Martyn Thomas <martyn_at_xxxxxx
Reply-To: "martyn_at_xxxxxx
Date: Monday, July 1, 2013 10:04 AM
Cc: "systemsafety_at_xxxxxx <systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

Steve

It would indeed be hard to make a strong safety case for a system whose software was "full of defects".

High cyclomatic complexity may make this more likely, and if a regulator wanted to insist on low complexity as a certification criterion, I doubt that many would complain. Simple is good - it reduces costs, in my experience.

But if a regulator allowed low complexity as evidence for an acceptably low defect density, as part of a safety case, then I'd have strong reservations. Let me put it this way: if there's serious money to be made by developing a tool that inputs arbitrary software and outputs software with low cyclomatic complexity, there won't be a shortage of candidate tools - but safety won't improve. And if you have a way to prove, reliably, that the output from such a tool is functionally equivalent to the input, then that's a major breakthrough and I'd like to discuss it further.
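To make that concrete with a toy example (mine, invented for illustration, written in Python): a restructuring tool can mechanically split one decision-heavy function into several trivially simple ones. Every function then scores comfortably under any plausible per-function threshold, but the decision logic - and any defect hiding in it - is exactly what it was before:

    # Original: one function, three decision points, McCabe number about 4.
    def set_valve(pressure, temperature, interlock_ok):
        if not interlock_ok:
            return "CLOSED"
        if pressure > 8.0:
            return "CLOSED"
        if temperature > 120.0:
            return "VENT"
        return "OPEN"

    # After a hypothetical "complexity-reducing" tool: each helper scores 1 or 2,
    # the dispatcher about 3, yet the behaviour is identical to the original.
    def _interlock_check(interlock_ok):
        return None if interlock_ok else "CLOSED"

    def _pressure_check(pressure):
        return "CLOSED" if pressure > 8.0 else None

    def _temperature_check(temperature):
        return "VENT" if temperature > 120.0 else None

    def set_valve_restructured(pressure, temperature, interlock_ok):
        for verdict in (_interlock_check(interlock_ok),
                        _pressure_check(pressure),
                        _temperature_check(temperature)):
            if verdict is not None:
                return verdict
        return "OPEN"

The per-function numbers look exemplary; nothing about the system has become any safer.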

Martyn

On 01/07/2013 17:18, Steve Tockey wrote:

Martyn,  

"The safety goal is to have sufficient evidence to justify high confidence that the software has specific properties that have been determined to be critical for the safety of a particular system in a particular operating environment."  

Agreed, but my fundamental issue is (ignoring the obviously contrived cases where the defects are in non-safety related functionality) how could software--or the larger system it's embedded in--be considered "safe" if the software is full of defects? Surely there are many elements that go into making safe software. But just as surely, IMHO, the quality of that software is one of those elements. And if we can't get the software quality right, then the others might be somewhat moot?  



The System Safety Mailing List
systemsafety_at_xxxxxx


Received on Wed Jul 03 2013 - 02:27:30 CEST
