Re: [SystemSafety] Functional hazard analysis, does it work?

From: Peter Bernard Ladkin < >
Date: Tue, 19 Jan 2016 08:49:33 +0100



On 2016-01-19 01:42 , Matthew Squair wrote:
> Does the process of functional hazard analysis 'work' in terms of identifying all functional
> hazards that we are, or should be, interested in?

Ah, the question of completeness! Some people think it is by its nature not answerable in the positive. Yet hazard analysis teams spend much of their time discussing whether the analysis is complete, and any decent HazAn method comes with a (usually informal) relative completeness test (many of them are not "decent"!). So it seems to me very odd that people also claim that "completeness is impossible".

I think it is necessary during a HazAn to formulate an objective criterion of relative completeness and show you have identified all possible hazards according to that criterion. Then ask yourselves what phenomena there are which are not covered by the criterion and attempt to characterise those.

One way to formulate the criterion is to develop an ontology. The system consists of a collection of objects with properties and relations between them. List them. All. Then you can argue that the functional hazards are exactly those hazards expressible in that vocabulary, and with a bit of luck and a lot of rigour you can list them all and show that you have done so.

This is what we do and it works. We have a name for it: Ontological Hazard Analysis (OHA).
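To make the enumeration idea concrete, here is a minimal illustrative sketch (mine, not the published OHA method, and with an entirely hypothetical toy vocabulary): once the ontology's objects, properties and relations are listed, every atomic statement expressible in that vocabulary can be generated mechanically, so "we reviewed them all" becomes a check over a finite list rather than a matter of opinion.

```python
# Illustrative sketch only, not the published OHA procedure.
# Toy ontology (hypothetical): objects, unary properties, binary relations.
# In a real analysis these would come from the system description.
from itertools import product

objects = ["tank", "valve", "sensor"]
properties = ["overfull", "stuck", "failed"]
relations = ["controls", "measures"]

# Every atomic statement expressible in the vocabulary.
atoms = [f"{p}({o})" for p in properties for o in objects]
atoms += [f"{r}({a},{b})" for r in relations
          for a, b in product(objects, objects) if a != b]

# The team classifies each atom; the classification below is invented
# purely for illustration.
classified = {a: "hazard" if a in {"overfull(tank)", "stuck(valve)"}
              else "benign" for a in atoms}

# The relative completeness criterion: no expressible statement is
# left unexamined.
unreviewed = [a for a in atoms if a not in classified]
assert not unreviewed

hazards = sorted(a for a, v in classified.items() if v == "hazard")
print(hazards)
```

The point of the sketch is only that completeness *relative to the vocabulary* is mechanically checkable; the residual question, what phenomena the vocabulary does not cover, remains the analysts' job.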

You might like to look at Daniel Jackson's talk "How to Prevent Disasters" from November 2010 at http://people.csail.mit.edu/dnj/talks/ It, and the ensuing discussion on the York list, arose out of Daniel's observation, through use of formal analysis, that an example in Nancy Leveson's book did not yield a complete hazard analysis. Jan Sanders had a go at the example with OHA and found some features which Daniel's analysis had not identified either. The discussion on the York list is archived at https://www.cs.york.ac.uk/hise/safety-critical-archive/2010/ and starts with Daniel's message of October 10, 2010 entitled "software hazard analysis not useful?". I should probably write a summary at some point, since this issue recurs.

PBL

Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs.uni-bielefeld.de




The System Safety Mailing List
systemsafety_at_xxxxxx

Received on Tue Jan 19 2016 - 08:49:41 CET
