Re: [SystemSafety] NTSB report on Boeing 787 APU battery fire at Boston Logan

From: Les Chambers < >
Date: Sun, 7 Dec 2014 16:31:04 +1000


Matthew  

Two relevant quotes:  

  1. We can know more than we can tell

Philosopher Michael Polanyi argued that computers will never be fully programmable to perform all human tasks because humans do not know how they achieve their higher cognitive skills. We can know more than we can tell. There will never be an algorithm for judgement and common sense.

Conclusion: although there were plenty of experts in the world who could have told Boeing they were going to have a problem, none were employed on the 787. And even if they had been, expounding common sense and pushing it up the line to the people who allocate the capital for proper manufacturing and testing is always a problem.

  2. Maybe they were all just sleep deprived

Artificial intelligence researchers have been tinkering with reinforcement learning for decades. But until DeepMind’s Atari demo (it taught itself to play Space Invaders at superhuman expert level), no one had built a system capable of learning anything nearly as complex as how to play a computer game, says Hassabis. One reason it was possible was a trick borrowed from his favourite area of the brain. Part of the Atari-playing software’s learning process involved replaying its past experiences over and over to try to extract the most accurate hints on what it should do in the future. “That’s something that we know the brain does,” says Hassabis. “When you go to sleep your hippocampus replays the memory of the day back to your cortex.”
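
For what it's worth, that replay trick is simple enough to sketch in code. The following is a minimal illustration in Python - the names, capacity and batch size are my own assumptions, not DeepMind's implementation - of an agent storing each transition it experiences in a buffer and later re-sampling random batches from it to drive its learning updates:

import random
from collections import deque

# A minimal experience-replay buffer, for illustration only.
class ReplayBuffer:
    def __init__(self, capacity=100000):
        self.memory = deque(maxlen=capacity)  # oldest experiences drop off first

    def add(self, state, action, reward, next_state, done):
        # Record one transition observed while playing.
        self.memory.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Replay past experiences in random order; this breaks the correlation
        # between consecutive frames and lets the learner revisit old
        # situations many times over.
        return random.sample(self.memory, min(batch_size, len(self.memory)))

The agent calls add() after every action it takes, and between frames or episodes calls sample() and feeds the batch to whatever value-function update it is using.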

Cheers

Les  

From: systemsafety-bounces_at_xxxxxx
Sent: Sunday, December 7, 2014 4:01 PM
To: Mike Ellims
Cc: systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] NTSB report on Boeing 787 APU battery fire at Boston Logan

John Downer (who's on the list, I think) coined the phrase 'epistemic accident' to cover accidents which are due to our knowledge being contingent as much on theories and assumptions as on facts, theories which may then prove to be not quite good enough. Apologies if I mangle the definition a bit.

I also noticed the NTSB homed in on the need to surface assumptions and make them explicit. Assumptions, of course, being the epistemic equivalent of whoopee cushions in engineering. :)

Matthew Squair  

MIEAust, CPEng

Mob: +61 488770655

Email: Mattsquair_at_xxxxxx

Web: http://criticaluncertainties.com  

On 6 Dec 2014, at 2:30 am, Mike Ellims <michael.ellims_at_xxxxxx> wrote:

In the Guardian, Gawande states:

" There was an essay that I read two decades ago that I think has influenced almost every bit of writing and research I've done ever since. It was by two philosophers - Samuel Gorovitz and Alasdair MacIntyre - and their subject was the nature of human fallibility. They wondered why human beings fail at anything that we set out to do. Why, for example, would a meteorologist fail to correctly predict where a hurricane was going to make landfall, or why might a doctor fail to figure out what was going on inside my son and fix it? They argued that there are two primary reasons why we might fail. The first is ignorance: we have only a limited understanding of all of the relevant physical laws and conditions that apply to any given problem or circumstance. The second reason, however, they called "ineptitude", meaning that the knowledge exists but an individual or a group of individuals fail to apply that knowledge correctly."

However, I think that Gorovitz and MacIntyre argue something very different; the following is, I believe, the essence of their argument. I have edited it because the paper is very long and not the easiest of reads.

  {First they discuss where our traditional views of error come from, i.e. the natural sciences}

For on this view all scientific error will arise either from the limitations of the present state of natural science - that is, from ignorance - or from the willfulness or negligence of the natural scientist - that is, from ineptitude. This classification is treated as exhaustive.

<snip>

This view of ignorance and ineptitude as the only sources of error has been transmitted from the pure to the applied sciences, and hence, more specifically, from medical science to medical practice viewed as the application of what is learned by medical science.

<snip>

   {They then go on to look at the issue that doctors - and engineers - face in dealing with particular situations. EMPHASIS ADDED below}

Precisely because our understanding and expectations of particulars cannot be fully spelled out merely in terms of lawlike generalizations and initial conditions, the best possible judgment may always turn out to be erroneous, and erroneous not merely because our science has not yet progressed far enough or because the scientist has been either willful or negligent, but because of the necessary fallibility of our knowledge of particulars.

<snip>

The recognition of this element of necessary fallibility IMMEDIATELY DISPOSES OF THAT TWOFOLD CLASSIFICATION of the sources of error which we have seen both to inform natural scientists' understanding of their own practices and to be rooted in the epistemology that underlies that understanding. Error may indeed arise from the present state of scientific ignorance or from willfulness or negligence. But it may also arise precisely from this third factor, which we have called necessary fallibility in respect to particulars.

-----Original Message-----
From: systemsafety-bounces_at_xxxxxx [mailto:systemsafety-bounces_at_xxxxxx] On Behalf Of Peter Bernard Ladkin
Sent: 05 December 2014 11:58
To: systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] NTSB report on Boeing 787 APU battery fire at Boston Logan

On 2014-12-05 12:36, Martin Lloyd wrote:

On 05/12/2014 10:52, Mike Ellims wrote:

Interestingly research suggests surgeons who expect things to go wrong and plan for failure have much higher success rates.

Does anyone have a reference to these research results?

Atul Gawande is giving the Reith Lectures at the moment on a closely related topic, namely how to improve the success rate of/avoid avoidable failures in medicine. A summary of the first is
http://www.theguardian.com/news/2014/dec/02/-sp-why-doctors-fail-reith-lecture-atul-gawande
The BBC page is
http://www.bbc.co.uk/programmes/articles/6F2X8TpsxrJpnsq82hggHW/dr-atul-gawande-2014-reith-lectures

PBL
Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Tel+msg +49 (0)521 880 7319 www.rvs.uni-bielefeld.de










_______________________________________________
The System Safety Mailing List
systemsafety_at_xxxxxx