Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

From: Les Chambers < >
Date: Tue, 2 Jul 2013 07:36:30 +1000


Steve
One way to achieve this is to empower test teams. Management issues an encyclical that: SOFTWARE IS NOT A GIVEN. That is: "If it's too complex to test effectively, reject it. Don't waste your time composing feckless tests for crap software. Send it back to its heathen authors. Kill it before it gets into production."
Les

On 02/07/2013, at 3:16 AM, Steve Tockey <Steve.Tockey_at_xxxxxx

>
> Martyn,
> My preference would be that things like low cyclomatic complexity be considered basic standards of professional practice, well before one even started talking about a safety case. Software with ridiculous complexities shouldn't even be allowed to start making a safety case in the first place.
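[For readers unfamiliar with the metric: cyclomatic complexity is McCabe's count of linearly independent paths through a routine, usually computed as one plus the number of decision points. The sketch below is an editorial illustration, not part of the thread, and is a simplified approximation of the metric rather than a production analyzer.]

```python
import ast

# Decision-point node types counted by this simplified sketch of
# McCabe's metric: each adds one independent path.
DECISION_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    complexity = 1  # the single straight-line path
    for node in ast.walk(tree):
        if isinstance(node, DECISION_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # each additional and/or operand adds a branch
            complexity += len(node.values) - 1
    return complexity

simple = "def f(x):\n    return x + 1\n"
branchy = (
    "def g(x):\n"
    "    if x > 0 and x < 10:\n"
    "        for i in range(x):\n"
    "            if i % 2:\n"
    "                x += i\n"
    "    return x\n"
)
print(cyclomatic_complexity(simple))   # 1: straight-line code
print(cyclomatic_complexity(branchy))  # 5: if + and + for + inner if
```

A threshold-based standard of the kind discussed here would simply reject any routine whose count exceeds an agreed limit (10 is the figure commonly attributed to McCabe).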
>
>
> -- steve
>
>
> From: Martyn Thomas <martyn_at_xxxxxx >
> Reply-To: "martyn_at_xxxxxx >
> Date: Monday, July 1, 2013 10:04 AM
> Cc: "systemsafety_at_xxxxxx >
> Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]
>
> Steve
>
> It would indeed be hard to make a strong safety case for a system whose software was "full of defects".
>
> High cyclomatic complexity may make this more likely, and if a regulator wanted to insist on low complexity as a certification criterion, I doubt that many would complain. Simple is good - it reduces costs, in my experience.
>
> But if a regulator allowed low complexity as evidence for an acceptably low defect density, as part of a safety case, then I'd have strong reservations. Let me put it this way: if there's serious money to be made by developing a tool that inputs arbitrary software and outputs software with low cyclomatic complexity, there won't be a shortage of candidate tools - but safety won't improve. And if you have a way to prove, reliably, that the output from such a tool is functionally equivalent to the input, then that's a major breakthrough and I'd like to discuss it further.
>
> Martyn
>
> On 01/07/2013 17:18, Steve Tockey wrote:

>> Martyn,
>> 
>> "The safety goal is to have sufficient evidence to justify high
>> confidence that the software has specific properties that have been
>> determined to be critical for the safety of a particular system in a
>> particular operating environment."
>> 
>> Agreed, but my fundamental issue is (ignoring the obviously contrived
>> cases where the defects are in non-safety related functionality) how could
>> software--or the larger system it's embedded in--be considered "safe" if
>> the software is full of defects? Surely there are many elements that go
>> into making safe software. But just as surely, IMHO, the quality of that
>> software is one of those elements. And if we can't get the software
>> quality right, then the others might be somewhat moot?

>
> _______________________________________________
> The System Safety Mailing List
> systemsafety_at_xxxxxx


Received on Mon Jul 01 2013 - 23:36:58 CEST

This archive was generated by hypermail 2.3.0 : Sun Feb 17 2019 - 16:17:05 CET