Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

From: Martyn Thomas < >
Date: Mon, 01 Jul 2013 18:04:58 +0100


It would indeed be hard to make a strong safety case for a system whose software was "full of defects".

High cyclomatic complexity may make this more likely, and if a regulator wanted to insist on low complexity as a certification criterion I doubt that many would complain. Simple is good - it reduces costs, in my experience.

But if a regulator allowed low complexity as /evidence/ of an acceptably low defect density, as part of a safety case, then I'd have strong reservations. Let me put it this way: if there's serious money to be made by developing a tool that inputs arbitrary software and outputs software with low cyclomatic complexity, there won't be a shortage of candidate tools - but safety won't improve. And if you have a way to prove, reliably, that the output from such a tool is functionally equivalent to the input, then that's a major breakthrough and I'd like to discuss it further.
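[A concrete illustration of the point above - a hypothetical sketch, not part of the original message. McCabe's cyclomatic complexity counts independent paths through a function, so a purely mechanical rewrite (here, a branch ladder replaced by a table lookup) can drive the metric down while preserving behaviour exactly - including any defect in the mapping itself.]

```python
# Hypothetical example: two behaviourally equivalent functions.
# classify_v1 has cyclomatic complexity 4 (three decision points + 1);
# classify_v2 has complexity 1. The metric improved; the software did not.

def classify_v1(code: int) -> str:
    # Branch ladder: each `if` adds one to the McCabe count.
    if code == 1:
        return "warning"
    if code == 2:
        return "error"
    if code == 3:
        return "fatal"
    return "ok"

# Mechanical rewrite: a single path through the code via table lookup.
_LEVELS = {1: "warning", 2: "error", 3: "fatal"}

def classify_v2(code: int) -> str:
    return _LEVELS.get(code, "ok")

# The two agree on every input, so a wrong entry in the mapping would
# survive the "complexity-reducing" transformation untouched.
assert all(classify_v1(c) == classify_v2(c) for c in range(-2, 6))
```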


On 01/07/2013 17:18, Steve Tockey wrote:
> Martyn,
> "The safety goal is to have sufficient evidence to justify high
> confidence that the software has specific properties that have been
> determined to be critical for the safety of a particular system in a
> particular operating environment."
> Agreed, but my fundamental issue is (ignoring the obviously contrived
> cases where the defects are in non-safety related functionality) how could
> software--or the larger system it's embedded in--be considered "safe" if
> the software is full of defects? Surely there are many elements that go
> into making safe software. But just as surely, IMHO, the quality of that
> software is one of those elements. And if we can't get the software
> quality right, then the others might be somewhat moot?

The System Safety Mailing List
systemsafety_at_xxxxxx
Received on Mon Jul 01 2013 - 19:05:14 CEST

This archive was generated by hypermail 2.3.0 : Tue Jun 04 2019 - 21:17:05 CEST