Re: [SystemSafety] power plant user interfaces

From: Peter Bernard Ladkin < >
Date: Tue, 14 Jul 2015 07:49:32 +0200



On 2015-07-14 00:55 , Les Chambers wrote:
> A 'good' HMI therefore supports: observe-ability, understand-ability and control-ability. If
> you like, the devil is in the 'ilities' .... Observe-ability: A groovy HMI (with all the right
> contrast ratios and menu hierarchies) is useless if you don't have the instrumentation to
> observe what's going on in the process. Case study: QF32 would not have had an engine
> explosion if Rolls-Royce had done a mass balance around the lubricating oil flow in their jet
> engines.

One of the tropes in failure analysis is that there is always one indicator, or a handful of them, that would have told you what was going on. Another trope is that you can't show everything reliably.

There are, for example, cognitive constraints (see below). Even if you had some way of knowing you had covered every eventuality (the elusive goal of completeness), there are simply too many parameters to display them all effectively. And you have to consider the reliability of the sensors. If you include everything and the kitchen sink, then the chances are that in any one incident some of the sensors will be displaying false or misleading values. Gergely's original post about TMI contains an example (judged by the authors he quoted to be a "root" cause). There is a trade-off between the quantity of displays and the reliability of the total displayed information.
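As a rough, back-of-the-envelope illustration of that trade-off (the per-display failure probability and the display counts below are invented for the example, not taken from any real plant): if each displayed parameter independently has a small chance of showing a false or misleading value during an incident, the probability that at least one item on the panel is misleading grows quickly with the number of displays. A minimal Python sketch:

# Illustration only: assumes each displayed parameter independently has
# probability p_false of showing a false or misleading value during an
# incident. All numbers are invented for the example.

def p_some_display_misleading(n_displays: int, p_false: float) -> float:
    """Probability that at least one of n independent displays is misleading."""
    return 1.0 - (1.0 - p_false) ** n_displays

if __name__ == "__main__":
    p_false = 0.01  # assumed per-display chance of a misleading reading
    for n in (10, 50, 200, 1000):
        print(f"{n:5d} displays -> P(at least one misleading) = "
              f"{p_some_display_misleading(n, p_false):.2f}")

With those made-up numbers, ten displays give roughly a 10% chance that something on the panel is lying to you during an incident; a thousand displays make it a near certainty. The independence assumption is generous to the designer; common-cause sensor failures make matters worse.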

When one parameter is common to a number of failure incidents, it usually wins its place on the panel. The problem is that a number of failure incidents must occur for this to happen, and in safety-critical systems such incidents are (relatively) rare. It follows that figuring out which parameters get surveyed is more in the nature of a thought experiment at design time, with experience playing a role only insofar as it can.

In the case of QF32, the manufacturer already gathers massive amounts of real-time data on running engines. It's part of their business model and one of their selling points. I would guess that the reason they didn't gather data on the particular parameter you mentioned is that such eventualities with their new engines were considered adequately mitigated by manufacturing quality-control measures. Indeed, a concrete failure of manufacturing quality control was rapidly identified as the cause of the incident.

For yet another example, there has been considerable public debate over decades about the value of including an explicit angle-of-attack display in commercial transport aircraft.

Experience with a system tells you whether your design-time thought experiments about key parameters for rare future events were right. Even then, you don't know that the satisfactory outcome wasn't due to serendipity, namely that none of the things you had missed actually occurred.

> Understand-ability In response to those who might say, "Aw shucks these systems are highly
> complex these days and operators can easily get confused. Gees look at what happened at
> Chernobyl and Three Mile Island. " ... I say, "squeeze out the tears you sorry bugger."

I think that is giving inappropriately short shrift to a major issue which has been at the centre of HMI research for many decades.

Cognitive synthesis of display information was known to be an issue thirty years ago. The phenomenon of the control panel "lighting up like a Christmas tree" was common across a variety of incidents and well known amongst line pilots who kept up with their professional development. There was an article on the synthesis of warning sounds in the Philosophical Transactions of the Royal Society which pointed out the cognitive limitations in discriminating simultaneous sounds. The research of David Woods and Nadine Sarter on the HMI of the A320 automation, which amongst other things identified the phenomenon they called "mode confusion", is a quarter of a century old.

> Control-ability

[I have nothing to say.]

> A note on engineers not understanding user needs. In my experience this was solved by chemical
> plant engineers actually writing the control software after appropriate training in control
> theory and the target computer control systems.

I doubt that it was "solved", just as it would not be solved if you had software engineers designing the chemical plant after appropriate training in chemistry.

PBL
Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Je suis Charlie
Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de



