Re: [SystemSafety] NYTimes: The Next Accident Awaits

From: Mike Rothon < >
Date: Mon, 03 Feb 2014 11:31:52 +0000


I also feel that some of the problem is caused by the names "Safety Case" and "Safety Argument".

Could it be that these names are subconsciously associated with legal cases / legal arguments and evoke images in the mind's eye of a courtroom drama, but one where a sharp-suited, sharp-witted safety professional convinces an emotional regulator that the guilty are innocent (or the innocent are guilty)? The reality, of course, is very different.

Anyway, sometimes the lines between 'prescriptive' and goal-based / safety case regulation are blurred. For example, prescriptive regulation can include an Acceptable Means of Compliance (AMC), but this is just one way of complying and alternative means may be available. The onus is on the person / organisation to demonstrate that their selected method is satisfactory. Goal-based regulation is frequently supplemented with 'guidance', and this guidance can come pretty close to being an AMC. Furthermore, the so-called 'goals' can be prescriptive in themselves.

Personally, I think the danger comes more from under-regulation or poor regulation than from a shift from prescription to goals. I work mostly in aerospace, where a lot of international oversight combined with the fear of horrendous consequences and publicity seems to keep things at a reasonable level. However, I can see that in other areas a move to goals / arguments could be seen as a relaxation and a cause for concern (especially with the word association that I mentioned above).

I believe some answers lie in ensuring that responsibility and accountability rest in the 'just' places. If we follow prescriptive regulation mechanically and something goes wrong, it must be down to the regulator? If we state that we have done enough to meet the goals of the regulation but have in fact made substandard preparations, then woe betide us. Shouldn't the latter make us more focussed on safety?

For what it's worth, I would prefer that we used "Safety Strategy" rather than "Safety Argument". As for safety cases, I would call them "Safety Expositions", as in them we are duty-bound to lay all our cards on the table for the regulator to take a critical look at.

In general, I don't believe we will find a "one size fits all" answer. In industries where all involved are stakeholders in safety and the participants have by and large bought into the concepts, goal-based regulation can (and does!) work. In situations where individuals can use 'arguments' to turn the basic principles of safety on their head - please stick to prescription. I have heard too many 'safety arguments' that it is safer to drive at 100mph rather than stick to the prescriptive limit to believe that goal-based speed regulation could ever work!

Mike

On 03/02/14 08:07, Patrick Graydon wrote:
> On 3 Feb 2014, at 02:36, Matthew Squair <mattsquair_at_xxxxxx >
>> There is for example experimental evidence going back to Slovic and Fischhoff's work in the 70s and Silvera's follow-up work in the 00s on how the structuring of fault trees can lead to an effect known as omission neglect; see here (http://wp.me/px0Kp-1YN) for further discussion of the effect. I see no reason why such graphical techniques as GSN should be immune to the same problem, or safety cases in the broader sense.
> I don’t see how those experiments (either the original or the follow-up work) are particularly relevant. In all of them, the subjects were given the fault trees and told to use them as an aid to a subsequent task; the experimenters were measuring how presentation in them biased their performance in that task. But in none of them was anyone explicitly tasked with checking the given fault trees, as an ISA or a regulator would a safety case. Because no-one took on the role of a skeptical critic, I don’t see the experimental context as particularly analogous to safety-case regulatory regimes.
>
> Moreover, if this work were really to weigh in on the question of whether a safety case regime systematically accepts more shoddy systems after regulator/ISA review than a so-called ‘prescriptive’ system would, the experimental context would have to be clearly more analogous to the context of one of those than the other. But in *both* we have people presenting information (that might be framed one way or another) to regulators/assessors.
>
> Don’t get me wrong, I am not claiming to have the answer here. But I find the evidence that has been offered to date so weak as to be useless. I second Drew’s call for serious, systematic study of this.
>
> As to arguments that a system is unsafe, could you explain how that would work? Trying to discover all of the ways that a system is dangerous is a good way to find them, as trying to discover all of the ways that an argument is flawed is how we find flaws in arguments (safety and otherwise). But what are the criteria on which we decide whether something is good enough?
>
> This approach seems to be a case of demonstrating a negative. In an inductive argument, you do this by showing how many possibilities you have examined and discarded. E.g., if I wanted to claim that there are no Ferraris in my bedroom, I could back that up by claiming that I have looked into every space in that room big enough to hold one, in such a way that I would expect to see one if it were there, and that my search revealed nothing. In the case of safety, wouldn’t you have to argue over how you’d gone about looking for hazards (and dealt with all you’d found), how you’d gone about looking for causes of those (and dealt with all of those), how you’d gone about verifying that your system as deployed did what your analysis (and the resulting safety requirements) required, etc.? This sounds an awful lot to me like the standard guidance for safety case structure. Or do you have something else in mind?
>
> — Patrick
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety_at_xxxxxx



The System Safety Mailing List
systemsafety_at_xxxxxx
Received on Mon Feb 03 2014 - 12:32:07 CET
