Re: [SystemSafety] Software reliability (or whatever you would prefer to call it)

Date: Mon, 9 Mar 2015 17:54:28 +0100

Well, can anybody tell me where I am wrong?

If a system implementing software fails when faced with real-world inputs, it is:

· Either because these inputs were foreseen by the specification, but the software does not properly implement the specification and this was not detected by the tests. So the software is WRONG.

· Or because these inputs were not foreseen by the specification, even though the software properly implements the specification. So the specification is WRONG.

· Or because something happens in the hardware, and the software does not operate as planned.

Any probabilistic assessment of a system implementing software will merge all of the above.

Bertrand Ricque
Program Manager
Optronics and Defence Division
Sights Program
Mob : +33 6 87 47 84 64
Tel : +33 1 58 11 96 82

Sent: Sunday, March 08, 2015 3:03 PM
To: systemsafety_at_xxxxxx
Cc: ladkin Ladkin
Subject: [SystemSafety] Software reliability (or whatever you would prefer to call it)

As I am the other half of the authorial duo that has prompted this tsunami of postings on our list, my friends may be wondering why I’ve kept my head down. Rather mundane reason, actually - I’ve been snowed under with things happening in my day job (and I’m supposed to be retired…).

So I’d like to apologise to my friend and co-author of the offending paper, Peter Ladkin, for leaving him to face all this stuff alone. And I would like to express my admiration for his tenacity and patience in dealing with it over the last few days. I hope others on this list appreciate it too!

I can’t respond here to everything that has been said, but I would like to put a few things straight.

First of all, the paper in question was not intended to be at all controversial - and indeed I don't think it is. It has a simple purpose: to clean up the currently messy and incoherent Annex D of 61508. Our aim was not to innovate in any way, but to take the premises of the original annex and make clear the assumptions underlying the (very simple) mathematics/statistics for any practitioners who wished to use it. The technical content of the annex, such as it is, concerns very simple Bernoulli and Poisson process models for (respectively) on-demand (discrete-time) and continuous-time software-based systems. Our paper sets out the practical concerns that a potential user of the annex needs to address - in order, for example, to use the tables there. Thus there is an extensive discussion of the issue of state, and how this affects the plausibility of the assumptions needed to justify claims for Bernoulli or Poisson behaviour.
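For anyone who has not looked at the annex, the flavour of the arithmetic is this (my own back-of-envelope sketch, in my notation rather than the annex's). Under the Bernoulli assumption, if a system survives n statistically representative demands without failure, then at confidence 1-a its probability of failure on demand satisfies

    (1 - pfd)^n >= a,   i.e.   pfd <= 1 - a^(1/n) ~= -ln(a)/n   (for small pfd)

With a = 0.05 this gives the familiar "rule of three": about 3/pfd failure-free demands are needed, e.g. some 3000 of them to support a 10^-3 claim at 95% confidence. The continuous-time Poisson case is analogous, with failure-free operating hours t in place of n: lambda <= -ln(a)/t.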

Note that there is no advocacy here. We do not say “Systems necessarily fail in Bernoulli/Poisson processes, so you must assess their reliability in this way”. Whilst these are, we think, plausible models for many systems, they are clearly not applicable to all systems. Our concern was to set down what conditions a user would need to assure in order to justify the use of the results of the annex. If his system did not satisfy these requirements, then so be it.

So why has our innocuous little offering generated so much steam?

Search me. But reading some of the postings took me back forty years. "There's no such thing as software reliability." "Software is deterministic (or its failures are systematic), therefore probabilistic treatments are inappropriate." Even, God help us, "Software does not fail." (Do these people not use MS products?) "Don't bother me with the science, I'm an engineer and I know what's what" (is that an unfair caricature of a couple of the postings?). "A lot of this stuff came from academics, and we know how useless and out-of-touch with the real world they are (scientific peer-review? do me a favour - just academics talking to one another)". Sigh.

Here are a few comments on a couple of the topics of recent discussions. Some of you may wish to stop reading here!

1. Deterministic, systematic… and stochastic.

Here is some text I first used thirty years ago (only slightly modified). This is not the first time I’ve had to reuse it in the intervening years.
"It used to be said – in fact sometimes still is – that 'software failures are systematic and therefore it does not make sense to talk of software reliability'. It is true, of course, that software fails systematically, in the sense that if a program fails in certain circumstances, it will always fail when those circumstances are exactly repeated. Where then, it is asked, lies the uncertainty that requires the use of probabilistic measures of reliability?
"The main source of uncertainty lies in software’s interaction with the world outside. There is inherent uncertainty about the inputs it will receive in the future, and in particular about when it will receive an input that will cause it to fail. Execution of software is thus a stochastic (random) process. It follows that many of the classic measures of reliability that have been used for decades in hardware reliability are also appropriate for software: examples include failure rate (for continuously operating systems, such as reactor control systems); probability of failure on demand (pfd) (for demand-based systems, such as reactor protection systems); mean time to failure; and so on.
"This commonality of measures of reliability between software and hardware is important, since practical interest will centre upon the reliability of systems comprising both. However, the mechanism of failure of software differs from that of hardware, and we need to understand this in order to carry out reliability evaluation.” (it goes on to discuss this - no room to do it here)

At the risk of being repetitive: The point here is that uncertainty - "aleatory uncertainty" in the jargon - is an inevitable property of the failure process. You cannot eliminate such uncertainty (although you may be able to reduce it). The only candidate for a quantitative calculus of uncertainty is probability. Thus the failure process is a stochastic process.

Similar comments to the above can be made about “deterministic” as used in the postings. Whilst this is, of course, an important and useful concept, it has nothing to do with this particular discourse.

2. Terminology, etc.

Serious people have thought long and hard about this. The Avizienis-Laprie-Randell-Neumann paper is the result of this thinking. You may not agree with it (I have a few problems myself), but it cannot be dismissed after a few moments' thought, as it seems to have been in a couple of postings. If you have problems with it, you need to engage in serious debate. It's called science.

3. You can’t measure it, etc.

Of course you can. Annex D of 61508, in its inept way, shows how - in those special circumstances that our note addresses in some detail.

Society asks "How reliable?", "How safe?", "Is it safe enough?", even "How confident are you (and should we be) in your claims?" Answers to the first three are claims about the stochastic process of failures. If you don't accept that, how else would you answer? I might accept that you are a good engineer, working for a good company, using best practices of all kinds - but I still would not have answers to the first three questions.

The last question above raises the interesting issue of epistemic uncertainty about claims for systems. No space to discuss that here - but members of the list will have seen Martyn Thomas’ numerous questions about how confidence will be handled (and his rightful insistence that it must be handled).

4. But I'll never be able to claim 10^-9…

That’s probably true.

Whether 10^-9 (probability of failure per hour) is actually needed in aerospace is endlessly debated. But you clearly need some dramatic number. Years ago, talking to Mike deWalt about these things, he said that the important point was that aircraft safety needed to improve continuously. Otherwise, with the growth of traffic, we would see more and more frequent accidents, and this would be socially unacceptable. The current generation of airplanes is impressively safe, so new ones face a very high hurdle. Boeing annually provide a fascinating summary of detailed statistics on world-wide airplane safety. From this you can infer that current critical computer systems have demonstrated, in hundreds of millions of hours of operation, something like 10^-8 pfh (e.g. for the Airbus A320 and its ilk). To satisfy Mike's criterion, new systems need to demonstrate that they are better than this. This needs to be done before they are certified. Can it?

Probably not. See Butler and Finelli (IEEE Trans Software Engineering, 1993), or Littlewood and Strigini (Comm ACM, 1993) for details.
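To get a feel for why not (my own arithmetic, but it is essentially the Butler-Finelli point): the same zero-failure reasoning sketched earlier says that demonstrating lambda <= 10^-9 per hour at 95% confidence requires roughly

    t >= -ln(0.05) / 10^-9 ~= 3 x 10^9 failure-free hours,

that is, several hundred thousand years of continuous failure-free testing - and any failure during the campaign only increases the required exposure. An operational fleet can accumulate that kind of exposure over decades; a pre-certification test programme cannot.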

Michael Holloway’s quotes from 178B and 178C address this issue, and have always intrigued me. The key phrase is "...currently available methods do not provide results in which confidence can be placed at the level required for this purpose…” Um. This could be taken to mean: “Yes, we could measure it, but for reasons of practical feasibility, we know the results would fall far short of what’s needed (say 10^-8ish). So we are not going to do it.” This feels a little uncomfortable to me. Perhaps best not to fly on a new aircraft type until it has got a few million failure-free hours under its belt (as I have heard a regulator say).

By the way, my comments here are not meant to be critical of the industry’s safety achievements, which I think are hugely impressive (see the Boeing statsum data).

5. Engineers, scientists…academics...and statisticians...

…a descending hierarchy of intellectual respectability?

With very great effort I’m going to resist jokes about alpha-male engineers. But I did think Michael’s dig at academics was a bit below the belt. Not to mention a couple of postings that appear to question the relevance of science to engineering. Sure, science varies in quality and relevance. As do academics. But if you are engineering critical systems it seems to me you have a responsibility to be aware of, and to use, the best relevant science. Even if it comes from academics. Even if it is statistical.

My apologies for the length of this. A tentative excuse: if I’d spread it over several postings, it might have been even longer…



Bev Littlewood
Professor of Software Engineering
Centre for Software Reliability
City University London EC1V 0HB

Phone: +44 (0)20 7040 8420 Fax: +44 (0)20 7040 8585

" Ce courriel et les documents qui lui sont joints peuvent contenir des informations confidentielles, être soumis aux règlementations relatives au contrôle des exportations ou ayant un caractère privé. S'ils ne vous sont pas destinés, nous vous signalons qu'il est strictement interdit de les divulguer, de les reproduire ou d'en utiliser de quelque manière que ce soit le contenu. Toute exportation ou réexportation non autorisée est interdite Si ce message vous a été transmis par erreur, merci d'en informer l'expéditeur et de supprimer immédiatement de votre système informatique ce courriel ainsi que tous les documents qui y sont attachés."

" This e-mail and any attached documents may contain confidential or proprietary information and may be subject to export control laws and regulations. If you are not the intended recipient, you are notified that any dissemination, copying of this e-mail and any attachments thereto or use of their contents by any means whatsoever is strictly prohibited. Unauthorized export or re-export is prohibited. If you have received this e-mail in error, please advise the sender immediately and delete this e-mail and all attached documents from your computer system."

The System Safety Mailing List
systemsafety_at_xxxxxx

Received on Mon Mar 09 2015 - 17:54:45 CET
