Re: [SystemSafety] Standards and Reform [was: degraded software performance...]

From: Peter Bernard Ladkin < >
Date: Fri, 06 Mar 2015 09:35:27 +0100

Les Chambers suggests that procedures should only make it into standards if they have been empirically verified. Good idea, one might think.

So let's take risk analysis and risk assessment out of all safety standards, because that process has never been shown to result in a system less prone to dangerous failures. In fact, I can't think of a single instance in which comparable systems were developed with and, respectively, without risk analysis and the results compared.

Let's also take these "extremely improbable"/10^(-9) criteria out of the aerospace certification material and other standards for PE kit, because, as Littlewood and Strigini showed 23 years ago, you can't accumulate enough operational evidence to show that these criteria are ever fulfilled.
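The Littlewood and Strigini point can be made with a back-of-the-envelope sketch (mine, not theirs). Under the standard simplifying assumption of statistically independent failure-free operating hours, the classical upper confidence bound on the failure rate after t failure-free hours is -ln(1 - confidence)/t, and inverting that shows how much failure-free operation a 10^(-9)-per-hour claim would require:

```python
import math

def hours_needed(target_rate, confidence):
    """Failure-free operating hours needed before the classical upper
    confidence bound on the hourly failure rate drops to target_rate.
    Uses the zero-failure bound: lambda_upper = -ln(1 - confidence) / t."""
    return -math.log(1.0 - confidence) / target_rate

# To support a 1e-9 per-hour claim at 99% confidence:
t = hours_needed(1e-9, 0.99)
print(f"{t:.2e} hours, about {t / 8766:.0f} years of continuous operation")
```

On those assumptions the answer is on the order of 4.6 x 10^9 failure-free hours, i.e. hundreds of thousands of years, which is the sense in which the evidence cannot be accumulated.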

And let's take code inspection out of development procedures, because there is mostly no evidence that the semantics of the source-code statements, which are what inspections examine, are preserved by the installed kit.

And let's take out all this talk about whether software degradation is really hardware or software. And whether "software reliability" is a meaningful phrase. And whether "software failure" is a meaningful phrase. The phenomena exist, but the claimed distinctions can't be decided empirically, only conceptually.

But we can, thankfully, leave in the interpretation of software execution as a Bernoulli process, because there is in fact quite a lot of evidence that this works, albeit at lower values of TTNF (time to next failure) than are typically needed for Part 25.1309(b) or, respectively, the higher SIL levels.
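To make the Bernoulli-process interpretation concrete (a sketch of the standard statistical reasoning, not a method the post itself sets out): treat each demand on the software as an independent trial with some fixed failure probability p, observe n failure-free demands, and solve (1 - p)^n = 1 - confidence for the upper confidence bound on p.

```python
def pfd_upper_bound(n_demands, confidence=0.95):
    """Classical upper confidence bound on the per-demand failure
    probability after n_demands independent, failure-free Bernoulli
    trials: solve (1 - p)^n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n_demands)

# A few thousand failure-free demands already support a bound around 1e-3,
# which is why the approach works at modest reliability levels but runs out
# of steam long before 1e-9-class targets.
print(pfd_upper_bound(3000))
```

This is the sense in which the interpretation is empirically workable at lower reliability targets: the test effort needed grows inversely with the target probability, so modest targets are reachable and extreme ones are not.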

Be careful what you wish for! As observers have known for a very long time, observations are theory-laden. Your take on what observations mean depends decisively on your existing conceptual baggage.

On the reform of standards processes, I'm all for it. See my White Paper. So is John Knight, and so is Martyn Thomas. Indeed, we've been discussing this for years. One of my themes is the lack of adequate peer review in the standardisation process.

Small steps first. My papers are two of the four papers on statistical evaluation recently presented to the German safe-software committee. The other two were written by academics and in my view (and not just mine) contain significant mistakes. I have encouraged the authors to publish their writings, or at least to open them to broader peer review by people competent in the subject matter, so far without effect.

The broader matter getting in the way is the issue of peer review and publishing versus the confidentiality presumption in standards work. The confidentiality presumption is there to protect intellectual property. It is necessary because market rivals sit in the same room. However, most if not all methods standards, such as IEC 61508, involve no intellectual property issues whatever. Les is quite right that many of them are put together by small cliques. I would add: sometimes without the necessary expertise in the subject matter; see below.

Motivated people could get involved in reform. It's OK to chatter with me at the back of a Safety-Critical Systems Symposium room while breaking wind. It's OK to natter on a mailing list. But this has negligible effect. Consider instead, as I have done, forming an interest group with a view to promoting certain selected reforms, proposing them, and keeping on about them, for decades if necessary, until the reforms are taken on board in some form. That's a level of commitment which almost all engineers decline. Indeed, people such as Bertrand have been pointing out for a long time that one can influence standards processes in at least one straightforward way: turn up to standards committee meetings and argue for your view!

For a year and a half, I was on an IEC committee devising a new standard for root cause analysis. I invited half a dozen people to advise, all of them recognised inventors of or distinguished proponents of particular methods for root cause analysis.

On the committee there was no one besides myself with experience of using RCA for accident analysis. Those on the committee with RCA experience, and there were committee members without any, were quality-control people: fishbone diagrams and "Five Whys". Indeed, only one person besides myself knew anything about accident analysis methods.

I asked my advisors for a two-page write-up on their respective methods according to a specific format. I had one reply, or rather multiple replies, from a colleague who had trouble understanding the concept of "two", and another reply sending me swathes of tech reports. Otherwise, nothing. (I resigned from the committee a year later because I strongly disagreed with its procedures.)

There is now an international standard IEC 62740 on Root Cause Analysis. Take a look. It could have been so much better had most of my advisors responded with the requested two pages. That was a clear opportunity for genuine peer review and peer contribution, and it was missed, despite my best efforts.

What I got from the process were the write-ups on the methods, which I ended up doing myself. Oh, and a complaint from IEC TC 56 to the DKE that I was not a "team player" (read: clique member), asking it to consider whether I was an appropriate delegate.

PBL
Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319

The System Safety Mailing List
systemsafety_at_xxxxxx Received on Fri Mar 06 2015 - 09:35:37 CET
