Re: [SystemSafety] a discursion stimulated by recent discussions of alleged safety-critical software faults in automobile software

From: Steve Tockey < >
Date: Wed, 13 Nov 2013 02:25:45 +0000

Les Chambers wrote:
"I just read Chris Hadfield's book: An Astronaut's Guide to Life on Earth (highly recommended). He gives the three criteria for selecting an astronaut:

1. The ability to understand complex things
2. A healthy body
3. The ability to make right decisions when it really matters
It seems to me NASA selection panels are lazy. They go straight for test pilots. By virtue of the fact that the guy standing in front of you is alive and kicking he has ticked all three boxes. I wish there were some analogue of this in software development."

Maybe I'm sounding like the proverbial broken record here, but this is precisely what the Software Engineering Body of Knowledge (SWEBOK, see www.swebok.org) is trying to accomplish. It's a consensus-based catalog of the skills and knowledge that differentiate what I often call "highly paid amateur programmers" from true, professional-grade Software Engineers.

From: Les Chambers <les_at_xxxxxx
Date: Monday, November 11, 2013 6:07 PM
Subject: RE: [SystemSafety] a discursion stimulated by recent discussions of alleged safety-critical software faults in automobile software

What bothers me is the alarming repeat performances we have of these disasters, and the eye-watering sums of money spent on forensics and retribution. These events are typically passed over to the legal profession, who proceed to dine out on the assignation of blame. The Queensland government just spent $5 million on a blame-oriented judicial enquiry into the failure of a payroll system. The public now knows who is to blame. But how does this help the people who didn't get paid, and the other unfortunate innocents who will suffer from crap code in the future? Blame is a delicious emotion, but it has no remedial effect. It depresses me that 5 million bucks could have gone a long way to fixing the problem in fairly short order.
"I have a dream that one day ..." The orders of magnitude of money spent on "I got a rope .. Let's hang em high" might be devoted to educating software developers on safe practices and informing the people who allocate the capital to activities like code review so they at least know what they don't know.
NASA gets a mention in this scenario. I don't understand how they would possibly get involved without having complete access to all the source code. Are they that hungry for consulting income? They are in so many other respects such a wonderful organisation. I just read Chris Hadfield's book: An Astronaut's Guide to Life on Earth (highly recommended). He gives the three criteria for selecting an astronaut:

1. The ability to understand complex things
2. A healthy body
3. The ability to make right decisions when it really matters
It seems to me NASA selection panels are lazy. They go straight for test pilots. By virtue of the fact that the guy standing in front of you is alive and kicking he has ticked all three boxes. I wish there were some analogue of this in software development.

Cheers
Les

From: Matthew Squair
Sent: Tuesday, November 12, 2013 7:27 AM
To: Steve Tockey
Cc: systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] a discursion stimulated by recent discussions of alleged safety-critical software faults in automobile software

I prefer the analogy of trying to push an elephant with a piece of (cooked) spaghetti.

The thing is, taking the plaintiffs' allegation as true, Company X had to really 'work' at making their software as bad as it was. There were, on paper at least, a series of safety mechanisms, each of which needed to be negated in some fashion. How a company, departmental, or team culture arose that could foster such normalised deviance is the question for me.

Rules, processes, and practices are great, but they need a culture of compliance. That in turn needs a corporate understanding of the difference between governance and management. Most managers, in my experience, struggle with that difference, and quite a few boards do as well.

Matthew Squair

MIEAust, CPEng
Mob: +61 488770656
Email: Mattsquair_at_xxxxxx
Web: http://criticaluncertainties.com

On 12 Nov 2013, at 6:49 am, Steve Tockey <Steve.Tockey_at_xxxxxx wrote:

Martyn wrote:
"But a man can dream and, if such a set of circumstances were ever to
arise, why would I care whether the bad software did actually cause the accident?"

I, for one, would care if the damage of said accident happened to *me*...

My concern is that it's the sorry state of software development practices that leads to these safety vulnerabilities (and the vast majority of those other irritating defects) in the first place. As I've said before, the practices needed to develop safety/mission-critical software can--for the most part--deliver high-quality software at lower cost and on a shorter schedule than 'standard practice'. These problems are a direct result of the sloppy, immature, UNPROFESSIONAL approach that most dev groups take. Doing the job right is not only easier, it's better, faster, and cheaper. But, as we say in the US, 'it's like trying to push a rope uphill'. Day-to-day amateur practitioners aren't going to care about doing a good job until some obvious, high-profile disaster can be pinned directly on the crappy level of standard practice.

Cheers,

-----Original Message-----
From: Martyn Thomas <martyn_at_xxxxxx
Date: Monday, November 11, 2013 6:39 AM
To: "systemsafety_at_xxxxxx <systemsafety_at_xxxxxx
Subject: [SystemSafety] a discursion stimulated by recent discussions of alleged safety-critical software faults in automobile software

(I'm writing this in England. We don't have a constitution that guarantees freedom of expression. Indeed, we have become a favourite destination for libel tourists.)

Let's suppose that in a purely fictional sequence of events, a manufacturer that develops and sells safety-related consumer products installs some very badly written software in one of their products: software that could lead to injury or death. Let's further suppose that an accident happens that, when investigated, turns out to be of the sort that the bad software could have caused.

Let's speculate that in this fictional case, the manufacturer suffers serious penalties and as a result vows to write much better software in future, changes their development methods, significantly reduces the likelihood of safety-related errors in their future products, and (by acting as a warning to others of the consequences) influences other companies to make similar improvements.

That would be a lot of good things resulting from the discovery of the badly-written software, and most or all of them might not have happened if the bad software had been discovered without an accident and a finding of liability.

Of course, this is fiction and the good outcomes described above are hypothetical.

But a man can dream and, if such a set of circumstances were ever to arise, why would I care whether the bad software did actually cause the accident?

Martyn





The System Safety Mailing List
systemsafety_at_xxxxxx
