Re: [SystemSafety] Functional hazard analysis, does it work?

From: DREW Rae < >
Date: Wed, 20 Jan 2016 17:11:33 +1000

For thirty years advocates of formal methods have shown a total lack of understanding of their inherent limitations. This has nothing to do with "maths, that's hard" or "we don't have all the information". It is the fundamental problem of requirements engineering that strengthening internal validity weakens external validity. The more effort you put into creating an analysable model of the real world, the less that model looks like the real world and the greater the chance that the safety problems will exist outside the analysis altogether.

Now that _isn't_ an insurmountable problem, but the rest of us have lost patience with formal gurus who don't even admit it's a problem. If you come to me and say, "I've got a technique that allows rapid, reliable and intuitive modelling of systems in a way that captures those properties important to safety", I'll say "that sounds interesting, I'd like to try it". But when you show willful ignorance of what the _problem_ is, why should I believe that your amazing solution is any different from the hundreds of other solutions whose advocates made exactly the same claim, only to turn out to be wrong because of the problems you're refusing to admit are problems?

You sound exactly like the homeopaths who protest that scientists won't fund clinical trials of their latest preparation. Every one of them claims that their preparation is a good one, and that if only people would try it they'd realise the benefits. They are willfully blind to the fact that the likelihood of their being right is so low that it isn't worth the opportunity cost even to investigate their claims. If you have a better technique for hazard analysis, the burden of proof is squarely on you to show it. Not to demonstrate narrow properties, but to show that if a company is designing a system, it is more likely to make a better system by using your technique. At the very least, stop being patronising to those who demand a little more rigour than "people of our choosing applied the technique to a problem of our choosing, and our interpretation is that the results were great".

My safety podcast:
My mobile (from October 6th): 0450 161 361

On 20 January 2016 at 16:35, Peter Bernard Ladkin <ladkin_at_xxxxxx > wrote:

> On 2016-01-19 13:31, paul_e.bennett_at_xxxxxx wrote:
> > Completeness may be a difficult objective to prove in a mathematical sense, but fortunately we
> > are only required to utilise our best efforts in this regard, using the most up-to-date and
> > emergent industry-accepted techniques that are relevant to the development.
> On 2016-01-20 03:20, DREW Rae wrote:
> > ...There's the internal completeness question discussed by Peter, which is in theory
> > deterministic if and only if you can create a complete causal model of your system for the
> > properties of interest, and keep it accurate.
> Interesting. I suggest an obviously desirable criterion for an effective
> HazAn (judging by the
> amount of time it takes up in hazard analysis meetings), and that there
> are techniques available
> for (often, sometimes, occasionally) ensuring it.
> There are two reactions. One is "that's *math*, thank goodness we don't
> have to do it!" The other
> is "it'll work if [you have all this information that no one ever has]."
> Neither of these
> gentlemen has, to my knowledge, ever performed an OHA.
> For thirty years, I've heard the refrain that "formal methods don't work",
> or, in a subtler
> version "formal methods may work if you can do all this stuff which math
> geeks can do, but none of
> us can so they're impractical". Meanwhile, it has become possible to
> deliver critical software
> which is demonstrably free of run-time error, which thirty years ago was
> not possible.
> Multiple firms offer that capability. It has become feasible, indeed
> routine, through the use of
> those methods which "don't work", that are "stuff which math geeks can do"
> and are "impractical."
> Isn't it time we started embracing straightforward methods which enable us
> to do things which we
> otherwise cannot do? To say, rather, "that sounds interesting - I'd like
> to try it!"
> Prof. Peter Bernard Ladkin, Faculty of Technology, University of
> Bielefeld, 33594 Bielefeld, Germany
> Je suis Charlie
> Tel+msg +49 (0)521 880 7319

The System Safety Mailing List
systemsafety_at_xxxxxx
Received on Wed Jan 20 2016 - 08:12:00 CET

This archive was generated by hypermail 2.3.0 : Tue Jun 04 2019 - 21:17:07 CEST