Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

From: Ben Bradshaw < >
Date: Tue, 2 Jul 2013 09:50:43 +0000

Regarding the values quoted for 90% confidence and the number of hours of fault-free testing: I believe that if you have 3 x 10^X hours of failure-free operation, you can be 95% confident that the average failure rate is no worse than one failure every 10^X hours.
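[For readers following the arithmetic: under the usual constant-failure-rate (exponential) assumption, zero failures in T hours gives confidence 1 - e^(-T/MTBF) that the true MTBF is at least the target. With T = 3 x 10^X against a target MTBF of 10^X, that is 1 - e^-3, roughly 95%; 90% would need only about 2.3 x 10^X hours. A minimal sketch, with an illustrative function name:]

```python
import math

def confidence_from_failure_free_hours(hours, target_mtbf_hours):
    """Confidence that the true failure rate is no worse than
    1 / target_mtbf_hours, given `hours` of failure-free operation
    under a constant-failure-rate (exponential) model."""
    lam = 1.0 / target_mtbf_hours          # claimed failure rate
    return 1.0 - math.exp(-lam * hours)    # P(at least one failure seen if rate were lam)

# 3 x 10^4 failure-free hours against a target MTBF of 10^4 hours:
print(round(confidence_from_failure_free_hours(3e4, 1e4), 3))   # -> 0.95
```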

Ben Bradshaw BSc PhD MSc CEng MIMechE
Principal Engineer, Systems and Safety
TRW Conekt
Stratford Road
West Midlands B90 4GW

E-mail:     ben.bradshaw_at_xxxxxx
Tel:	+44 (0)121 627 3556
Fax:	+44 (0)121 627 3584 



TRW Limited, Registered in England, No. 872948, Registered Office Address: Stratford Road, Solihull B90 4AX

-----Original Message-----
Sent: 02 July 2013 04:43
To: Steve Tockey
Cc: systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] Qualifying SW as "proven in use" [Measuring Software]

On 2 Jul 2013, at 00:47, Steve Tockey <Steve.Tockey_at_xxxxxx> wrote:

> I think that sounds good in theory but it may not work effectively in practice. The issue is that almost all of the test teams I know don't have inside (aka "white box") knowledge of the software they are testing. They are approaching it purely from an external ("black box") perspective. They can't tell if the code has high cyclomatic complexity or not.

That sounds like the wrong way to assure SW. If you want to be assured that the SW is reliable to an average frequency of one SW-caused failure in 10^X operational hours, you need to observe 3 x 10^X hours of failure-free operation to be 90% confident of it, assuming perfect failure detection.

I guess it's OK if you don't mind if your SW croaks every hundred or thousand hours or so. But that is hardly what one might term quality assurance.

> In principle, testers should be given the authority to reject a piece of crap product. That's their job in all other industries. But in software (non-critical, mind you), testing is usually window dressing that's mostly overridden if it means threatening promised ship dates.

By which I take it you mean failures are seen. In which case not even the above applies.

I had thought that the main point of testing a product which you hope to be of moderate quality was to make sure you have the requirements right and haven't forgotten some obvious things about the operating environment.


Prof. Peter Bernard Ladkin, University of Bielefeld and Causalis Limited
_______________________________________________
The System Safety Mailing List
systemsafety_at_xxxxxx
Received on Tue Jul 02 2013 - 11:50:59 CEST

This archive was generated by hypermail 2.3.0 : Tue Jun 04 2019 - 21:17:05 CEST