Re: [SystemSafety] Software reliability (or whatever you would prefer to call it)

From: Peter Bernard Ladkin < >
Date: Tue, 10 Mar 2015 07:13:22 +0100

On 2015-03-09 17:54 , RICQUE Bertrand (SAGEM DEFENSE SECURITE) wrote:
> If a system implementing software fails on inputs from the real world, it is:
> · either because these inputs were foreseen by the specification, but the software
> does not properly implement the specification and this was not detected by the tests, so the software
> is WRONG;
> · or because these inputs were not foreseen by the specification, although the
> software properly implements the specification, so the specification is WRONG;
> · or because something happens in the hardware, and the software does not operate as
> planned.

That seems to be right for a uniprocessor, whose internal communications are regarded as part of the HW.

> Any probabilistic assessment of a system implementing software will merge all of the above.

Not necessarily.

If you have a reliable means of telling when an input (including all causally relevant environmental parameters) is out of the range of those foreseen by the specification, then you can distinguish the first two cases. This is often done.
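As a minimal illustration of that point (not from the original post; the range, names and values are hypothetical), a runtime guard can make the specification's input domain explicit, so that a later failure can be attributed either to an unforeseen input (specification WRONG) or to in-domain processing (software WRONG):

```python
# Hypothetical sketch: an explicit specification boundary on inputs.
# SPEC_TEMP_RANGE stands in for "the inputs foreseen by the specification".

SPEC_TEMP_RANGE = (-40.0, 85.0)  # assumed specified input domain (deg C)


class OutOfSpecificationInput(Exception):
    """Raised when an input lies outside the domain the specification foresees."""


def check_in_spec(temp_c: float) -> None:
    # Reliable out-of-range detection: the first two failure cases
    # in the quoted trichotomy become distinguishable.
    lo, hi = SPEC_TEMP_RANGE
    if not (lo <= temp_c <= hi):
        raise OutOfSpecificationInput(
            f"{temp_c} outside specified range {SPEC_TEMP_RANGE}"
        )


def process(temp_c: float) -> str:
    check_in_spec(temp_c)  # specification boundary made explicit
    # Any failure past this point is attributable to the software
    # (or the hardware), not to an unforeseen input.
    return "overheat" if temp_c > 80.0 else "ok"
```

With such a guard in place, operational failure data can be partitioned before any probabilistic assessment, rather than merging the two cases.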

Similarly, in critical failure cases, a causal investigation will often determine the contributions of HW and other components to the failure. In cases of rare failure, the effort to analyse in depth is often made. In civil aerospace, for example, where failures of certain sorts prima facie contravene the certification requirements, the analysis is always made.

PBL
Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319

The System Safety Mailing List
systemsafety_at_xxxxxx Received on Tue Mar 10 2015 - 07:13:29 CET

This archive was generated by hypermail 2.3.0 : Tue Jun 04 2019 - 21:17:07 CEST