Re: [SystemSafety] Separating critical software modules from non-critical software modules

From: Peter Bishop < >
Date: Wed, 24 Jul 2013 13:41:54 +0100


Yes, it is similar to a PLC as it uses function blocks (not sure if it is precisely IEC 61131, though).

The other features are:
- simulation, so application software can be tested before installation on the Teleperm XS
- application timing analysis before installation, to ensure applications do not exceed their allocated time slots
- capability to implement N-out-of-M vote architectures with data exchange over comms links (a sketch below)
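
For illustration, a minimal sketch of the N-out-of-M idea (a 2-out-of-3
voter in C; the function name and style are my own, nothing from the TXS):

    #include <stdbool.h>

    /* Hypothetical 2-out-of-3 voter: trips only if at least two of
     * the three redundant channels agree on the trip demand. A real
     * system would also annunciate disagreeing (failed) channels. */
    static bool vote_2oo3(bool ch_a, bool ch_b, bool ch_c)
    {
        return (ch_a && ch_b) || (ch_a && ch_c) || (ch_b && ch_c);
    }

The point of the majority vote is that a single failed channel can
neither cause a spurious trip nor block a genuine one.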

It is used on 74 nuclear power units, including the new Areva EPR.

Bit of bumpf here (but not much detail).

http://www.areva-np.com/common/liblocal/docs/Brochure/TELEPERM-in_brief_08.pdf

Peter

NB usual disclaimers - no financial interests in plugging this system.

Thierry.Coq_at_xxxxxx wrote:

> Hi, Peter
> Function block languages like the one standardized in IEC 61131?
> Is this Areva TXS like a PLC processor?
> 
> Best regards,
> Thierry Coq
> DNV
> 
> -----Original Message-----
> Sent: Wednesday, 24 July 2013 12:54
> To: systemsafety_at_xxxxxx
> Subject: Re: [SystemSafety] Separating critical software modules from non-critical software modules
> 
> Relatively simple schedulers have been used, like the Siemens (now Areva) TXS. But there is a strong emphasis on determinism (maximum time slots per process), separate I/O modules, etc. Also, the applications are often defined at a higher level using a function block language, which reduces the scope for application-level errors.
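> 
> As a rough illustration of such a deterministic time-slot scheduler (my
> own sketch in C, not TXS code; elapsed_us() and fault_handler() are
> placeholders for platform services):
> 
>     extern unsigned long elapsed_us(void);  /* free-running us timer */
>     extern void fault_handler(int slot);    /* fail-safe reaction */
> 
>     typedef struct {
>         void (*run)(void);        /* the application for this slot */
>         unsigned long budget_us;  /* allocated worst-case slot time */
>     } slot_t;
> 
>     /* Cyclic executive: run each application in its fixed slot and
>      * treat any overrun of the allocated budget as a fault. */
>     static void major_cycle(slot_t *slots, int n)
>     {
>         for (int i = 0; i < n; i++) {
>             unsigned long t0 = elapsed_us();
>             slots[i].run();
>             if (elapsed_us() - t0 > slots[i].budget_us)
>                 fault_handler(i);
>         }
>     }
> 
> The off-line timing analysis then only has to show that each
> application's worst-case execution time fits its budget.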
> 
> Peter
> 
> 
> Ignacio González (Eliop) wrote:

>> Peter, I come from the railway world. In nuclear, can you not even
>> use an OS with safe partitioning, ARINC-style, hypervisors, etc.?
>>
>> With no physical separation you need to be very sure that
>> "non-safety" cannot affect safety (overwriting "safe" memory,
>> crashing the system, hogging the CPU, comms...).
>> In nuclear standards, anything in the same box has to be implemented
>> to the level of the most critical function.
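>>
>> One software-only mitigation (a sketch of my own, in C, not from any
>> particular standard): where there is no hardware memory protection,
>> you can at least detect corruption of safety data by keeping an
>> inverted shadow copy; safe_fault() stands in for the system's
>> fail-safe reaction:
>>
>>     #include <stdint.h>
>>
>>     extern void safe_fault(void);  /* placeholder: go to safe state */
>>
>>     typedef struct {
>>         uint32_t value;
>>         uint32_t shadow;           /* always kept equal to ~value */
>>     } guarded_u32;
>>
>>     static void guarded_write(guarded_u32 *g, uint32_t v)
>>     {
>>         g->value  = v;
>>         g->shadow = ~v;
>>     }
>>
>>     static uint32_t guarded_read(const guarded_u32 *g)
>>     {
>>         if (g->shadow != (uint32_t)~g->value)
>>             safe_fault();          /* corruption detected */
>>         return g->value;
>>     }
>>
>> Detection is of course weaker than the physical separation the
>> standards prefer, which is part of why everything in the same box
>> ends up at the highest level.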
>>
>> Peter Bishop
>>
>>
>> grcreech_at_xxxxxx wrote:
>> Malcolm,
>>
>> I agree that for small devices it may be difficult to provide or
>> prove separation between safety and non-safety, and that in those
>> cases everything needs to be considered safety-related.
>>
>> However, I am also a firm believer that removing complexity
>> increases safety. If I can prove that, say, 50% of the code
>> cannot affect safety, then I can focus on the 50% that does and
>> not get distracted by the areas that have less effect.
>>
>> Just because there is a safety and a non-safety section doesn't
>> mean that the programming style needs to be different; after all,
>> quality is important even in the non-safety section of any
>> product. The non-safety sections do, however, need to be clearly
>> documented as non-safety in the document structure.
>>
>> Once the segregation is in place it has several benefits. For
>> example, although all code needs to be reviewed and tested (from
>> a quality point of view), why focus on software that could be
>> classed as black-channel components, where the safety aspect is
>> assured elsewhere? The focus can be put where it is needed,
>> complexity is reduced, and the amount of important safety code is
>> reduced.
>>
>> This method has the added benefit that proven-in-use / COTS
>> firmware can be used in the non-safety area, knowing that it is
>> unlikely to affect the safety firmware.
>>
>>
>> Best regards,
>>
>> Gerry Creech
>>
>> From: "Watts Malcolm (AE/ENG11-AU)"
>> <systemsafety_at_xxxxxx >> <mailto:systemsafety_at_xxxxxx >> Date: 24/07/2013 02:01
>> Subject: Re: [SystemSafety] Separating critical software
>> modules from non-critical software modules
>> Sent by:
>>
>>
>> ----------------------------------------------------------------------
>>
>>
>>
>> Our experience in automotive is that it is effectively impossible
>> for most automotive products to have the kind of separation José
>> speaks of; for example “two separate board groups”. Much of our
>> software (although not SIL4 – often SIL2 equivalent) runs on a
>> single micro in a single device. Very high integrity product
>> might have 2 independent micros in a single enclosure, with some
>> redundancy of function in other devices in the vehicle (for
>> example, data redundancy). Many of the micros used do not have
>> memory-protection units, and micros may be running only
>> scheduling executives, not full operating systems (in the
>> interests of simplicity, proven field use, and testability). In
>> this circumstance, it makes the most sense (to me) to develop
>> all of the software in the micro to the highest integrity level
>> required by any component.
>>
>> I share the concerns raised in response to Myriam's post; as a
>> matter of practicality, few developers are feasibly able to swap
>> back and forth between "safety" and "non-safety" development
>> methodologies (to say nothing of the cost and complexity of
>> maintaining two sets of procedures, two sets of training,
>> duplicated QA, the complexity of planning and tracking, and so
>> on). To answer Myriam's rhetorical question: no, for me it does
>> not make sense that developers can swap back and forth between
>> two different mindsets without mistakes, and no, it does not
>> make much sense that tightly-coupled modules can be part of
>> significantly different lifecycles without adverse effects on
>> interfaces, assumptions, change management and quality
>> requirements. [This is the same problem faced when incorporating
>> third-party components. There's a reason that such a high
>> proportion of defects are in the interfaces.]
>>
>> The more conservative approach (taking into account possible
>> changes, and mistakes in understanding whether a component or
>> its interface is safety-relevant or not under given
>> circumstances) is to develop all software components (in
>> tightly-coupled products typical of automotive) to the
>> highest applicable integrity level.
>>
>> The benefits you get (in my opinion) are reduced risk due to
>> unexpected interference between modules, reduced risk due to
>> systematic defects, reduced risk due to human-factors effects
>> from the developers, reduced cost due to consistency, and
>> better/faster impact analysis on change.
>>
>> The flip side is increased cost and effort for all components
>> (and their integration?) that could otherwise have been
>> considered "non-safety-relevant". This really is a serious
>> disadvantage of the approach. Ignacio mentioned that this may
>> be practical only for small teams and "small software". Does
>> anyone know of any research in this area?
>>
>> Best Regards,
>>
>> Mal.
>> Mal Watts
>>
>> ----------------------------------------------------------------------
>>
>> Functional Safety Manager (AE/ENG11-AU)
>> Robert Bosch (Australia) Pty. Ltd.
>> Automotive Energy and Body Systems,
>> Locked Bag 66 - Clayton South, VIC 3169 - AUSTRALIA
>> Tel: +61 3 9541-7877   Fax: +61 3 9541-3935
>> www.bosch.com.au
>>
>> From: systemsafety-bounces_at_xxxxxx On Behalf Of José Faria
>> Sent: Tuesday, 23 July 2013 7:58 PM
>> To: M Mencke
>> Cc: systemsafety_at_xxxxxx
>> Subject: Re: [SystemSafety] Separating critical software
>> modules from non-critical software modules
>>
>> Myriam,
>>
>> Yes, it is a valid approach. Valid meaning both technically
>> feasible and acceptable to certification authorities. As Gerry
>> said, the fundamental issue is to demonstrate that the lower-SIL
>> part cannot compromise the higher-SIL part.
>>
>> In the systems I've worked on, the basic architectural solution
>> was to have two separate board groups for the SIL4 and SIL0
>> software. In such a solution, you can find guidance for the
>> safety analysis of the communication protocol between the two
>> boards in EN 50159 Annex A.
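>>
>> As a flavour of what that safety analysis covers (a minimal
>> sketch of my own in C, in the spirit of EN 50159, not the
>> standard's actual annex; crc32() stands in for whatever safety
>> code is chosen):
>>
>>     #include <stdint.h>
>>     #include <stdbool.h>
>>
>>     extern uint32_t crc32(const uint8_t *data, uint32_t len);
>>
>>     typedef struct {
>>         uint32_t seq;          /* detects loss, repetition,
>>                                 * re-ordering */
>>         uint32_t payload_len;
>>         uint8_t  payload[64];
>>         uint32_t safety_code;  /* detects corruption in transit */
>>     } telegram_t;
>>
>>     /* Receiver-side check: the transport between the two board
>>      * groups is an untrusted "black channel", so every telegram
>>      * must prove its own integrity before its payload is used. */
>>     static bool accept(const telegram_t *t, uint32_t expected_seq)
>>     {
>>         if (t->seq != expected_seq)
>>             return false;
>>         if (t->payload_len > sizeof t->payload)
>>             return false;
>>         return crc32(t->payload, t->payload_len) == t->safety_code;
>>     }
>>
>> Real protocols add timestamps, time-outs and source/destination
>> identifiers on top of this.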
>>
>> Best,
>> José
>>
>>
>> Dear All,
>>
>> For any software development project, many software modules are
>> involved, where some are defined as safety-critical and others
>> are not. For example, in railway signaling, communications
>> modules are likely to be defined as critical, whereas other
>> modules such as those involving data storage or other basic
>> functions are not. An analysis may be performed with the
>> objective of demonstrating that the safety-critical modules are
>> entirely independent from the non-critical modules, leading to
>> the conclusion that the application of a programming standard
>> for safety-critical software is only required for those modules
>> defined as safety-critical (note the phrase "with the objective
>> of demonstrating..."; I would hesitate before drawing the
>> conclusion that the analysis really demonstrates what it is
>> supposed to demonstrate).
>>
>> In my field EN 50128 would be applied; however, it could be any
>> standard for safety-critical software. Thus, the software is
>> developed applying the standard only to the modules which have
>> been defined as "safety-critical". In order to supposedly save
>> time/money, etc., the rest of the modules are developed as
>> non-critical software, either as SIL 0 functions or according to
>> an ordinary programming standard. My question is whether such an
>> approach is really valid, given that the application of a
>> safety-critical standard does not only involve the application
>> of specific language features; it involves an entire development
>> life cycle, and I find it difficult to see how the modules
>> defined as "non-critical" do not then form part of that life
>> cycle. I'm not saying it is not valid, but I would like to know
>> how others see this.
>>
>> Additionally, if the same programmers are involved in the
>> programming of both critical and non-critical modules, does it
>> really make sense that they only pay attention to the features
>> required for safety-critical software when programming the
>> critical modules, and modify their programming style for the
>> rest of the modules (or revert to their "usual" style)?
>> These questions also depend on what you consider critical. For
>> example, for a control system with an HMI, you could consider
>> only the communication modules critical; however, you need a GUI
>> to correctly display the status of the elements an operator has
>> to control. Some operations performed by the operator may not
>> have the potential to generate a hazard with a high severity
>> level, because there are mitigations in place. However, that
>> doesn't necessarily mean that the software responsible for
>> displaying the information should not be programmed according to
>> a safety-critical standard. I am aware that these questions
>> don't have an "easy" answer; any opinions would be appreciated.
>>
>> Kind Regards,
>>
>> Myriam.
>>
>> --
>> José Miguel Faria
>> Educed - Engineering made better
>> t: +351 913000266
>> w: www.educed-emb.com

-- 

Peter Bishop
Chief Scientist
Adelard LLP
Exmouth House, 3-11 Pine Street, London, EC1R 0JH
http://www.adelard.com
Recep:  +44-(0)20-7832 5850
Direct: +44-(0)20-7832 5855
_______________________________________________
The System Safety Mailing List
Received on Wed Jul 24 2013 - 14:42:18 CEST
