# Re: [SystemSafety] Software reliability (or whatever you would prefer to call it) [UNCLASSIFIED]

From: Matthew Squair < >
Date: Tue, 17 Mar 2015 21:35:37 +1100

Yep, and your last point reiterates what I think is the nub of the issue: we presently have to substitute procedural controls (for, after all, that's what a coding standard is) for the natural controls imposed by physical layout and the need to use a set of discrete components.

On Tue, Mar 17, 2015 at 8:28 PM, King, Martin (NNPPI) < martin.king2_at_xxxxxx

> This message has been marked as UNCLASSIFIED by King, Martin (NNPPI)
>
> Matthew
>
>
>
> We always have had hardware design standards, even for arrays of TTL LSI
> chips (number of capacitors, track sizes, spacing, etc.). Some organisations
> have more formal and restrictive practices than others. With HDLs the
> relationship between gate connectivity (and layout, allocation etc) and the
> HDL source code can be quite tenuous as you allude. Where we need to
> perform high levels of verification of HDL we have quite restrictive HDL
> ‘coding standards’.
>
>
>
>
>
> (Note that the new Def Stan 00-55 [Interim Issue 3] covers all
> customisable parts. It uses the term ‘unintended behaviour’ to describe
> the circumstances where a programmed device fails to perform the expected
> function, as does 00-56).
>
>
>
> *Martin King *
>
> (my opinion not necessarily that of my employer)
>
>
>
>
>
>
>
> *From:* Matthew Squair [mailto:mattsquair_at_xxxxxx
> *Sent:* 16 March 2015 22:34
> *To:* King, Martin (NNPPI)
> *Cc:* systemsafety_at_xxxxxx
> *Subject:* Re: [SystemSafety] Software reliability (or whatever you would
> prefer to call it) [UNCLASSIFIED]
>
>
>
> The flip side of the coin is that with the advent of HDLs you can now
> treat hardware design in a similar fashion to software design. Whether you
> should is another question entirely. Here's what DO 254 says re HDL, "The
> guidance of this document is applicable for design assurance for designs
> using an HDL representation..."
>
>
>
> To give one example, making a gate change on a schematic design is
> relatively straightforward, but try to make that change in an HDL and
> synthesise it and you may find other unexpected changes, as how the HDL
> correlates to the output netlist can be difficult to establish.
>
>
>
> Don't get me wrong, you can have truly awful schematic-expressed designs,
> but a well-thought-out and well-laid-out schematic drawing is actually fairly
> easy to review. Translate that into HDL code and your job as reviewer gets
> that much more difficult; essentially, the greater semantic distance makes
> it harder to understand what's going on.
>
>
>
> Maybe I'm just getting old and grumpy, but is getting to a situation
> where, just like software, you have to enforce 'coding standards' really
> where we want to be in hardware design?
>
>
>
>
>
> On Mon, Mar 16, 2015 at 7:39 PM, King, Martin (NNPPI) <
> martin.king2_at_xxxxxx >
> This message has been marked as UNCLASSIFIED by King, Martin (NNPPI)
>
> It is my understanding that this originally arose (in both 61508 and the
> UK Defence Standards) because when they were originally drafted pure
> hardware systems tended to be much simpler than they are today, and that if
> complicated algorithms etc were required then a software based system was
> the only way to go. ICs could not be particularly complex (eg the 68000
> was a new complex processor part, 16k*6 memory devices were about as big as
> it got) and the design and test tools were very simple compared to today.
>
>
>
> *Martin King *
>
> (my opinion not necessarily that of my employer)
>
>
>
> The following attachments and classifications have been attached:
>
>
> *From:* systemsafety-bounces_at_xxxxxx
> systemsafety-bounces_at_xxxxxx
> Bertrand (SAGEM DEFENSE SECURITE)
> *Sent:* 11 March 2015 14:37
> *To:* GRAZEBROOK, Alvery N; Littlewood, Bev
> *Cc:* systemsafety_at_xxxxxx
> *Subject:* Re: [SystemSafety] Software reliability (or whatever you would
> prefer to call it)
>
>
>
> Hi All,
>
>
>
> I am also somewhat puzzled by two aspects of the situation, in IEC
> 61508 in particular and on the market in general, not knowing which is
> the chicken and which is the egg:
>
> · First, the segregated approach of IEC 61508 between HW and SW
> misses (for the moment, as this is discussed for edition 3) the complexity
> of the interaction between HW and SW and the potentially unwanted emergent
> properties at system level. A simple example is the fact that isomorphism
> issues between HW architecture and SW architecture are not even foreseen.
> This is a significant weakness. Any attempt to keep the two worlds so
> separated clearly seems unlikely to help improve system safety.
>
> · Second, there is heavy pressure on the market, from the
> manufacturers’ side, to build a concept of “composability” of equipment
> properties, so as to automatically obtain the requested properties at
> system level from both software and hardware components. This seems absurd
> and dangerous because of Gödel’s theorem.
>
>
>
> I thus support the proposal to talk about “complex design”, knowing that
> complexity emerges very soon even with apparently simple functionalities…
>
>
>
> Bertrand Ricque
>
> Program Manager
>
> Optronics and Defence Division
>
> Sights Program
>
> Mob : +33 6 87 47 84 64
>
> Tel : +33 1 58 11 96 82
>
> Bertrand.ricque_at_xxxxxx >
>
>
> *From:* systemsafety-bounces_at_xxxxxx
> mailto:systemsafety-bounces_at_xxxxxx
> <systemsafety-bounces_at_xxxxxx
> Alvery N
> *Sent:* Tuesday, March 10, 2015 4:33 PM
> *To:* Littlewood, Bev
> *Cc:* systemsafety_at_xxxxxx
> *Subject:* Re: [SystemSafety] Software reliability (or whatever you would
> prefer to call it)
>
>
>
> Hi Bev.
>
>
>
> Thanks for addressing the issue of language / terminology.
>
>
>
> In the world of embedded control systems, I have seen various attempts to
> dodge standards for design, by playing with the semantics around the word
> “Software”. There are two specific classes of dodging I can think of:
>
> 1. – using programmable electronics or high-state digital circuitry
> and claiming that software design practices don’t apply. In the civil
> aero world, DO-254 was introduced in addition to DO-178 to cover this.
>
> 2. – using data tables to describe behaviour, and claiming that only
> the table interpreter not the contents are software.
>
> I’m sure list members will think of other examples. If the language of the
> standards talked of “system behaviour” or “design behaviour” including
> Software, I think this would remove such issues.
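Alvery's second class of dodging can be made concrete with a short sketch (all names and values here are invented for illustration): the "table interpreter" is trivially simple, yet the system's behaviour lives entirely in the data table, which is exactly why treating only the interpreter as software is a dodge.

```python
# Hypothetical sketch of the "table interpreter" dodge: the interpreter
# is trivial, but the behaviour is encoded entirely in the table.

BEHAVIOUR_TABLE = {
    # (low, high) sensor range -> commanded action (illustrative values)
    (0, 50): "idle",
    (50, 90): "run_pump",
    (90, 200): "open_relief_valve",
}

def interpret(reading, table=BEHAVIOUR_TABLE):
    """Look up the commanded action for a sensor reading."""
    for (low, high), action in table.items():
        if low <= reading < high:
            return action
    return "shutdown"  # default for out-of-range readings

print(interpret(75))
```

Editing the table changes the control behaviour without touching a line of "code" — the table contents carry the design every bit as much as the interpreter does.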
>
>
>
> My feeling is that it would be helpful to talk of “complex design”
> including the software, attached electronics, and if applicable
> complexities in the controlled equipment and “plant”, and consider the
> (systematic) design reliability of all of this. Separating the part that is
> labelled as “software” from its electronic and physical-world context isn’t
> helpful.
>
>
>
> This sits alongside the “traditional” component reliability approaches
> that deal with the (non-systematic) failure of equipment due to limited
> life, damage, random failure etc.
>
>
>
> **Note: these are my personal opinions, not necessarily those of my
> employer**
>
>
>
> Cheers,
>
> Alvery.
>
>
>
> *From:* systemsafety-bounces_at_xxxxxx
> mailto:systemsafety-bounces_at_xxxxxx
> <systemsafety-bounces_at_xxxxxx
> Bev
> *Sent:* 10 March 2015 11:45 AM
> *To:* C. Michael Holloway
> *Cc:* systemsafety_at_xxxxxx
> *Subject:* Re: [SystemSafety] Software reliability (or whatever you would
> prefer to call it)
>
>
>
> Hi Michael
>
>
>
> Seems you *are* speaking for Nick! (see his most recent posting) Of
> course the distinction you make here is an important one - I think we can
> all agree on that. Not least because our actions in response to seeing
> failures from them will be different (in the case of design faults - inc.
> software faults - we might wish to remove the offending fault).
>
>
>
> But excluding design faults as a source of (un)reliability results in a
> very restrictive terminology. I realise that appealing to “common sense” in
> a technical discussion is often the last refuge of the scoundrel… But I
> don’t think that the man in the street, contemplating his broken-down car
> (in the rain - let’s pile on the pathos!), would be comforted to be told it
> was not unreliable, it just had *design* faults.
>
>
>
> And, of course, your interpretation seems to rule out the contribution of
> human fallibility (e.g. pilots) to the reliability and/or safety of
> systems. This seems socially unacceptable, at least to me.
>
>
>
> Cheers
>
>
>
> Bev
>
>
>
>
>
> On 10 Mar 2015, at 10:34, C. Michael Holloway <c.m.holloway_at_xxxxxx > wrote:
>
>
>
> I can't speak for Nick, but I object to the use of the term "reliability"
> being applied to anything other than failures (using the term loosely)
> resulting from physical degradation over time. I believe it is important
> to maintain a clear distinction between undesired behavior designed into a
> system, and undesired behavior that arises because something ceases to
> function according to its design. (Here "designed / design" is used
> broadly. It includes all intellectual activities from requirements to
> implementation.)
>
> --
>
> *c**M**h*
>
> *C. Michael Holloway*
> The words in this message are mine alone; neither blame nor credit NASA
> for them.
>
>
>
> On 3/10/15 5:50 AM, Peter Bishop wrote:
>
> Now I think I understand your point.
> You just object to the term *software* reliability
>
> If the term was *system* reliability in a specified
> operational environment, and the system contained software
> and the failure was always caused by software
> - I take it that would be OK?
>
> An alternative term like *software integrity* or some such would be needed
> to describe the property of being correct or wrong on a given input.
> (In a lot of mathematical models this is represented as a "score function"
> that is either true or false for each possible input)
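The "score function" Peter describes can be sketched in a few lines (the function names and the seeded fault are invented for illustration): for deterministic software, correctness on each input is a fixed true/false property, quite separate from any reliability figure.

```python
# Sketch of a "score function" for deterministic software: on each input
# the implementation is simply right or wrong. Names and the seeded
# fault are illustrative, not from any real system.

def spec(i):
    """g: the output the specification requires."""
    return i * i

def impl(i):
    """f: what the software actually computes (faulty for i >= 1000)."""
    return i * i if i < 1000 else i * i + 1

def score(i):
    """True iff the software is correct on input i; fixed per input."""
    return impl(i) == spec(i)

print(score(3), score(2000))
```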
>
> Peter Bishop
>
> Nick Tudor wrote:
>
> Now back in the office...for a short while.
>
> Good point David - well put.
> I would have responded: There exists a person N who knows a bit about
> mathematics. Person N applies some mathematics and asserts Truth.
> Unfortunately, because of the incorrect application of the mathematics, the
> claims N now makes cannot be relied upon. The maths might well be correct,
> but the application is wrong because - and I have to say it yet again - the
> application fails to acknowledge that it is the environment that is
> random rather than the software. Software essentially boils down to a
> string of ones and noughts. Given the same inputs (and those always come
> from the chaotic environment), the output will always be the same. It
> therefore makes no sense to talk about 'software reliability'.
>
> Nick Tudor
> Tudor Associates Ltd
> Mobile: +44(0)7412 074654
> www.tudorassoc.com
>
> *77 Barnards Green Road*
> *Malvern*
> *Worcestershire*
> *WR14 3LR*
> *Company No. 07642673*
> *VAT No: 116495996*
>
> *www.aeronautique-associates.com*
>
> On 9 March 2015 at 12:26, David Haworth <david.haworth_at_xxxxxx
> wrote:
>
> Peter,
>
> there's nothing wrong with the mathematics, but I've got
> one little nit-pick about its application in the real world.
>
> The mathematics you describe gives two functions f and g,
> one of which is the model, the other is the implementation.
>
> In practice, your implementation runs on a computer and so the
> domain and range are not "the continuum". If your model is
> mathematical
> (or even runs on a different computer), the output of one will
> necessarily be different from the output of the other. That
> may not be a problem in the discrete sense - you simply specify a
> tolerance t > 0 in the form of:
>
> Corr-f-g(i) = 0 if and only if |f(i)-g(i)| < t
>
> etc.
>
> The problem becomes much larger in the real world of control
> systems where the output influences the next input of the
> sequence. The implementation and the model will tend to drift
> apart. In the worst case what might be nice and stable in the
> model might exhibit unstable behaviour in the implementation.
>
> You're then in the subject of mathematical chaos, where a
> perfectly deterministic system exhibits unstable and unpredictable
> behaviour. However, this email is too small to describe it. :-)
>
> Cheers,
> Dave
>
> On 2015-03-09 11:48:57 +0100, Peter Bernard Ladkin wrote:
> > Nick,
> >
> > Consider a mathematical function, f with domain D and range R.
> Given input i \in D, the output is f(i).
> >
> > Consider another function, g, let us say for simplicity with the
> same input domain D and range R.
> >
> > Define a Boolean function on D, Corr-f-g(i):
> >
> > Corr-f-g(i) = 0 if and only if f(i)=g(i);
> > Corr-f-g(i) = 1 if and only if f(i) NOT-EQUAL g(i)
> >
> > If X is a random variable taking values in D, then f(X), g(X) are
> random variables taking values in
> > R, and Corr-f-g(X) is a random variable taking values in {0,1}.
> >
> > If S is a sequence of values of X, then let Corr-f-g(S) be the
> sequence of values of Corr-f-g
> > corresponding to the sequence S of X-values.
> >
> > Define Min-1(S) to be the least place in Corr-f-g(S) containing a
> 1; and to be 0 if there is no such
> > place.
> >
> > Suppose I construct a collection of sequences S.i, each of length
> 1,000,000,000, by repeated
> > sampling from Distr(X). Suppose there are 100,000,000 sequences I
> construct.
> >
> > I can now construct the average of Min-1(S.i) over all the
> 100,000,000 sequences S.i.
> >
> > All these things are mathematically well-defined.
> >
> > Now, suppose I have deterministic software, S. Let f(i) be the
> output of S on input i. Let g(i) be
> > what the specification of S says should be output by S on input
> i. Corr-f-g is the correctness
> > function of S, and Mean(Min-1(S)) will likely be very close to
> the mean time/number-of-demands to
> > failure of S if you believe the Laws of Large Numbers.
> >
> > I have no idea why you want to suggest that all this is
> nonsensical and/or wrong. It is obviously
> > quite legitimate well-defined mathematics.
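PBL's construction shrinks to a runnable sketch if the sequence counts are reduced from 10^9 and 10^8 to laptop scale (f, g, Distr(X), and the seeded fault below are all invented for illustration). Note that the randomness enters only through the sampled inputs; f itself is deterministic.

```python
import random

random.seed(1)  # reproducible sampling from an assumed Distr(X)

def g(i):
    """Specification: what should be output on input i."""
    return i % 7

def f(i):
    """Deterministic software with an invented fault on multiples of 1000."""
    return (i % 7) + 1 if i % 1000 == 0 else i % 7

def corr(i):
    """Corr-f-g(i): 0 if f and g agree on input i, 1 if they disagree."""
    return 0 if f(i) == g(i) else 1

def min1(seq):
    """Least place in Corr-f-g(S) containing a 1; 0 if there is none."""
    for place, i in enumerate(seq, start=1):
        if corr(i) == 1:
            return place
    return 0

N_SEQS, SEQ_LEN = 2000, 10_000            # shrunk from 10^8 and 10^9
mins = [min1(random.randrange(1, 1_000_000) for _ in range(SEQ_LEN))
        for _ in range(N_SEQS)]
failing = [m for m in mins if m > 0]
mttf_estimate = sum(failing) / len(failing)
print(mttf_estimate)   # roughly the reciprocal of the fault's hit rate (~1/1000)
```

By the Law of Large Numbers the average converges on the mean number of demands to failure, even though each individual run of f is completely deterministic.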
> >
> > PBL
> >
> > Prof. Peter Bernard Ladkin, Faculty of Technology, University of
> Bielefeld, 33594 Bielefeld, Germany
> > Je suis Charlie
> > Tel+msg +49 (0)521 880 7319 www.rvs.uni-bielefeld.de
> >
> >
> >
> >
> > _______________________________________________
> > The System Safety Mailing List
> > systemsafety_at_xxxxxx
> --
> David Haworth B.Sc.(Hons.), OS Kernel Developer
> david.haworth_at_xxxxxx
> Tel: +49 9131 7701-6154 Fax: -6333
> Keys: keyserver.pgp.com
> Elektrobit Automotive GmbH, Am Wolfsmantel 46, 91058 Erlangen, Germany
> Geschäftsführer: Alexander Kocher, Gregor Zink
> Amtsgericht Fürth HRB 4886
>
> Disclaimer: my opinion, not necessarily that of my employer.
>
>
>
> ------------------------------------------------------------------------
>
>
>
>
>
>
>
> _______________________________________________
>
> Bev Littlewood
> Professor of Software Engineering
> Centre for Software Reliability
> City University London EC1V 0HB
>
> Phone: +44 (0)20 7040 8420 Fax: +44 (0)20 7040 8585
>
> Email: b.littlewood_at_xxxxxx >
> http://www.csr.city.ac.uk/
> _______________________________________________
>
>
>
>
>
>
>
>
>
>
>
>
>
>
> The data contained in, or attached to, this e-mail, may contain
> confidential information. If you have received it in error you should
> notify the sender immediately by reply e-mail, delete the message from your
> system and contact +44 (0) 1332 622800(Security Operations Centre) if you
> need assistance. Please do not copy it for any purpose, or disclose its
> contents to any other person.
>
> An e-mail response to this address may be subject to interception or
> monitoring for operational reasons or for lawful business practices.
>
> (c) 2015 Rolls-Royce plc
>
> Registered office: 62 Buckingham Gate, London SW1E 6AT Company number:
> 1003142. Registered in England.
>
>
>
>
>
>
> --
>
> *Matthew Squair*
>
> MIEAust CPEng
>
>
>
> Mob: +61 488770655
>
> Email: MattSquair_at_xxxxxx >
> Website: www.criticaluncertainties.com
>
>
>
>
>

--
*Matthew Squair*
MIEAust CPEng

Mob: +61 488770655
Email: MattSquair_at_xxxxxx
Website: www.criticaluncertainties.com

_______________________________________________
The System Safety Mailing List
systemsafety_at_xxxxxx

Received on Tue Mar 17 2015 - 11:35:53 CET

This archive was generated by hypermail 2.3.0 : Thu Apr 25 2019 - 04:17:07 CEST