Re: [SystemSafety] Statistical Assessment of SW ......

From: Peter Bernard Ladkin < >
Date: Mon, 26 Jan 2015 14:21:41 +0100


On 2015-01-26 12:41, DREW Rae wrote:
> A simple thought experiment. Let's say someone claims to have a suitable method of predicting
> combined hardware/software reliability.
> On what basis could they ever support that claim?

Um, using well-tried statistical methods associated with the Bernoulli, Poisson and exponential distributions, as taught in most basic statistics courses. (Such as http://www.math.uah.edu/stat/bernoulli/ and http://www.math.uah.edu/stat/poisson/ . Bev put me on to these. They are pretty good! I used to refer to Feller, but Bev thought that was "ancient". It's not that ancient; it came out the year before I was born.)
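To put rough numbers on it, here is a minimal sketch (Python; the figures are purely illustrative, not from any real assessment) of the continuous-time case: under a Poisson/exponential failure model, T hours of failure-free operation support a one-sided upper bound of -ln(1-C)/T on the failure rate, at confidence C.

    import math

    def rate_upper_bound(hours_no_failure, confidence):
        """One-sided upper confidence bound on the failure rate (per hour),
        under a Poisson/exponential model, given failure-free operation."""
        return -math.log(1.0 - confidence) / hours_no_failure

    # Illustrative only: 10,000 failure-free hours bound the rate at about
    # 4.6e-4 failures/hour with 99% confidence.
    print(rate_upper_bound(10_000, 0.99))   # ~4.6e-4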

> I would argue that such a claim about a method
> intended for real-world use is empirical in nature, and can only be validated empirically.
> Unfortunately this requires an independent mechanism for counting the failures, and that there be
> enough failures to perform a statistical comparison of the prediction with reality.

Methods of assessing the reliability of SW are normally predicated on *no failures having occurred over a certain number of trials*. Provided that no failures have been observed, the conclusion that failures occur at no more than a specified low rate may be drawn with a specified level of confidence, dependent on the number of trials observed. I mean, this is just basic statistical methodology, is it not?
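Concretely, a minimal sketch (Python; illustrative numbers only) of the zero-failure, demand-based calculation: after n independent failure-free demands, the claim that the probability of failure per demand is at most p holds with confidence 1 - (1-p)^n.

    import math

    def confidence_no_failures(n_trials, pfd_bound):
        """Confidence that the per-demand failure probability is <= pfd_bound,
        given n_trials independent demands with no observed failure."""
        return 1.0 - (1.0 - pfd_bound) ** n_trials

    def trials_needed(pfd_bound, confidence):
        """Number of failure-free demands needed to support a claim of
        pfd <= pfd_bound at the given confidence level."""
        return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - pfd_bound))

    # Illustrative only: about 4,603 failure-free demands support a claim
    # of pfd <= 1e-3 at 99% confidence.
    print(trials_needed(1e-3, 0.99))            # 4603
    print(confidence_no_failures(4603, 1e-3))   # ~0.990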

That you are talking about failures, and counting failures, suggests to me that you're not au fait with the general approach to statistical assessment of SW reliability.

> Conclusion: No method for predicting hardware/software reliability can actually be shown to
> accurately predict hardware/software reliability. All claims about hardware/software reliability are
> constructed using methods that themselves haven't been adequately validated.

Dear me! <PBL restrains himself for fear of being voted off his own list :-) >

Any method of assessing the reliability of a system is going to rest on a considerable amount of uncertainty. Say you want to *prove* your system satisfies its spec. You say you have listed all the proof obligations of your SW? How reliable is the listing process? You say you've discharged all the proof obligations? How reliable is your proof checker? And so on. How can you deal with that uncertainty without using statistical methods at some point? I don't think you can.

And then there is practicality. Even if everything you say were to be true (note this is a counterfactual conditional!), a large amount of safety-relevant SW is now sold in the marketplace on the basis that the user may rely on it doing its job ("yes, problems have arisen but these are the measures used to fix them and the problems haven't occurred since"). The validity of such assurances is low to marginal. Much of the thrust of our approach to the statistics is to try to encourage people to keep better records and to pay attention to appropriate inference, rather than saying "these hundred clients have been using it and previous versions for a decade and a half and only one has been unhappy enough with the product to go to arbitration about it". And for the clients of such vendors to demand appropriate statistics rather than be content with such claims.

PBL
Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs.uni-bielefeld.de



