Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

From: Steve Tockey < >
Date: Mon, 1 Jul 2013 22:47:24 +0000

I think that sounds good in theory but it may not work effectively in practice. The issue is that almost all of the test teams I know don't have inside (aka "white box") knowledge of the software they are testing. They are approaching it purely from an external ("black box") perspective. They can't tell if the code has high cyclomatic complexity or not.
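[A note on measurement: the complexity Steve refers to here is cheap to compute once you have the source, which is exactly what black-box testers lack. Below is a minimal sketch of McCabe-style cyclomatic complexity for Python source, counting 1 plus the number of decision points. The node set and the `cyclomatic_complexity` helper are illustrative simplifications, not any particular tool's rule set; production tools such as radon or lizard refine the counting rules.]

```python
import ast

# Decision-point node types that each add one to the complexity count.
# This is a simplified rule set for illustration; real analyzers also
# handle comprehensions, match statements, etc.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
                  ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, DECISION_NODES)
                   for node in ast.walk(tree))

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0:\n"
    "        for i in range(x):\n"
    "            if i % 2:\n"
    "                x += i\n"
    "    return x\n"
)
print(cyclomatic_complexity(simple))   # straight-line code: 1
print(cyclomatic_complexity(branchy))  # three decision points: 4
```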

In principle, testers should be given the authority to reject a piece of crap product. That's their job in all other industries. But in software (non-critical, mind you), testing is usually window dressing that gets overridden whenever it threatens promised ship dates.

Sent from my iPad

One way to achieve this is to empower test teams. Management issues an encyclical: SOFTWARE IS NOT A GIVEN. That is, "If it's too complex to test effectively, reject it. Don't waste your time composing feckless tests for crap software. Send it back to its heathen authors. Kill it before it gets into production." Les

My preference would be that things like low cyclomatic complexity be considered basic standards of professional practice, well before one even started talking about a safety case. Software with ridiculous complexities shouldn't even be allowed to start making a safety case in the first place.

From: Martyn Thomas <martyn_at_xxxxxx
Date: Monday, July 1, 2013 10:04 AM
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]


It would indeed be hard to make a strong safety case for a system whose software was "full of defects".

High cyclomatic complexity may well make this more likely, and if a regulator wanted to insist on low complexity as a certification criterion, I doubt that many would complain. Simple is good - it reduces costs, in my experience.

But if a regulator accepted low complexity as evidence for an acceptably low defect density, as part of a safety case, then I'd have strong reservations. Let me put it this way: if there's serious money to be made by developing a tool that inputs arbitrary software and outputs software with low cyclomatic complexity, there won't be a shortage of candidate tools - but safety won't improve. And if you have a way to prove, reliably, that the output from such a tool is functionally equivalent to the input, then that's a major breakthrough and I'd like to discuss it further.


On 01/07/2013 17:18, Steve Tockey wrote:


"The safety goal is to have sufficient evidence to justify high confidence that the software has specific properties that have been determined to be critical for the safety of a particular system in a particular operating environment."

Agreed, but my fundamental issue is (ignoring the obviously contrived cases where the defects are in non-safety related functionality) how could software--or the larger system it's embedded in--be considered "safe" if the software is full of defects? Surely there are many elements that go into making safe software. But just as surely, IMHO, the quality of that software is one of those elements. And if we can't get the software quality right, then the others might be somewhat moot?

The System Safety Mailing List

systemsafety_at_xxxxxx
Received on Tue Jul 02 2013 - 00:47:38 CEST

This archive was generated by hypermail 2.3.0 : Sun Feb 17 2019 - 16:17:05 CET