Re: [SystemSafety] Software reliability (or whatever you would prefer to call it)

Date: Tue, 10 Mar 2015 11:43:51 +0100

Hi Peter,

On the last point I obviously agree and this is somewhat where I wanted to go in the discussion.

Is it the objective of Annex D to try to sort out the origin of the failure (hardware / specification / software that was wrong from the beginning and not adequately tested), which would mean opening the black box?

Or, on the contrary, and perhaps because opening it would not be realistic, is the objective to keep the black box closed, with the consequence of not being able to distinguish "software alone" from "system"?

In the latter case, shouldn't Annex D be in Part 2?

Bertrand Ricque
Program Manager
Optronics and Defence Division
Sights Program
Mob : +33 6 87 47 84 64
Tel : +33 1 58 11 96 82

-----Original Message-----
Sent: Tuesday, March 10, 2015 7:13 AM
To: systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] Software reliability (or whatever you would prefer to call it)

On 2015-03-09 17:54 , RICQUE Bertrand (SAGEM DEFENSE SECURITE) wrote:
> If a system implementing software fails when confronted with inputs from the real world, it is:
> · either because these inputs had been foreseen by the specification, but the software
> does not implement the specification properly and this was not detected
> by the tests — so the software is WRONG;
> · or because these inputs had not been foreseen by the specification, although the
> software implements the specification properly — so the specification is WRONG;
> · or because something happens in the hardware, and the software does not operate as
> planned.

That seems to be right for a uniprocessor, whose internal communications are regarded as part of the HW.

> Any probabilistic assessment of a system implementing a software will merge all of above.

Not necessarily.

If you have a reliable means of telling when an input (including all causally relevant environmental parameters) is out of the range of those foreseen by the specification, then you can distinguish the first two cases. This is often done, and has been done in practice.
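A minimal sketch of the idea, assuming (purely for illustration) a specification whose foreseen input domain can be checked programmatically; the function names and the numeric range are hypothetical, not taken from any standard:

```python
# Sketch: if the specified input domain is checkable, an observed failure can
# be attributed either to the specification (the input was never foreseen) or
# to the implementation (a foreseen input was mishandled). All names and the
# range below are illustrative assumptions.

def in_specified_domain(x: float) -> bool:
    """Assumed specification: inputs are foreseen iff 0.0 <= x <= 100.0."""
    return 0.0 <= x <= 100.0

def classify_failure(x: float) -> str:
    """Attribute an observed failure on input x to spec or implementation."""
    if not in_specified_domain(x):
        return "specification wrong (input not foreseen)"
    return "software wrong (foreseen input mishandled)"

print(classify_failure(150.0))  # input outside the foreseen range
print(classify_failure(42.0))   # foreseen input, so the fault is in the code
```

The same check can of course be run as a monitor alongside the operational system, logging out-of-domain inputs so that later failure investigations can separate the two cases.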

Similarly, in critical failure cases, a causal investigation will often determine the contributions of hardware and other components to the failure. In cases of rare failure, the effort to analyse in depth is often made. In civil aerospace, for example, where failures of certain sorts prima facie contravene the certification requirements, the analysis is always made.

PBL
Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319

The System Safety Mailing List
systemsafety_at_xxxxxx

" This e-mail and any attached documents may contain confidential or proprietary information and may be subject to export control laws and regulations. If you are not the intended recipient, you are notified that any dissemination, copying of this e-mail and any attachments thereto or use of their contents by any means whatsoever is strictly prohibited. Unauthorized export or re-export is prohibited. If you have received this e-mail in error, please advise the sender immediately and delete this e-mail and all attached documents from your computer system."

Received on Tue Mar 10 2015 - 11:43:59 CET

This archive was generated by hypermail 2.3.0 : Tue Jun 04 2019 - 21:17:07 CEST