Re: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?

From: Mike Ellims < >
Date: Fri, 22 Apr 2016 10:49:52 +0100


Good morning Bertrand!  

Are you trying to wake me up or something?  

Let me try to rework the argument in outline (in the real world the details would matter). At one level we have a conventional vehicle. So if we ignore the cyber-driver aspect of the vehicle for the moment, then it is obvious that the base vehicle is amenable to analysis under ISO 26262. Thus at this level the “standard” tools for building a safety-related/critical system apply.  

There is additional complexity in that there are at least two inputs into the main control streams (acceleration, deceleration, steering), but that is more or less the situation as it stands today, i.e. brake control is shared between the driver and multiple electronic systems (ABS, ESP, emergency braking), and likewise current electronic systems arbitrate the accelerator input and, to some extent, the steering input; many cars can now park themselves. Again, all of this falls within the scope of ISO 26262.  

Next we need to consider the new input into this base architecture: the automated driving system, assuming something like a Tesla that retains a steering wheel.  

At the interface between the cyber driver and the squishy driver I think we can specify some safety goals (see below) and likewise apply the standard methods.

At the level of the architecture, hardware and “operating system” for the system and subsystems that directly implement different aspects of the cyber driver, we should be able to do likewise. For example, I expect the radar and ultrasonic subsystems would have fairly standard designs and interfaces, much like currently deployed systems such as adaptive cruise control and parking sensors (this is a bit of a simplification).  

The major problem comes with the software/hardware systems that attempt to mimic the difficult bits of the squishy driver, i.e. the grey gunk in their head. To a first approximation, that’s where many of the current norms (as writ in IEC 61508/ISO 26262 etc.) usually applied to safety-related systems perhaps go a bit pear-shaped. However, there is quite a bit of experience with industrial systems that use neural nets etc. For example, I just remembered that I have a copy of “Guidance for the Verification and Validation of Neural Networks” hidden on my bookshelf (in the to-read pile), which is a supplement to IEEE Std 1012-1998, published in 2007. I expect the state of the art has probably advanced since then.  

Does that help clarify what I was trying to get across?  

Cheers.    

From: systemsafety [mailto:systemsafety-bounces_at_xxxxxx
Sent: 22 April 2016 09:00
To: 'Bielefield Safety List'
Cc: systemsafety-bounces_at_xxxxxx
Subject: Re: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?  

Hi Mike,  

Some comments on your text.  

“principles laid out in IEC 61508 and ISO 26262 probably carry across quite well e.g. safety goals/requirements for system architecture attributes such as fail silent/fail active,”  

These principles make sense only in the context of a given failure mode, its consequence(s) at system level, and the associated safety requirements (e.g. the left wheel must not go to the left while the right wheel goes to the right). This requires an analytical study of the system and its sub-systems, so that the proper low-level requirements (LLR) emerge at equipment level (in order to apply the standards' provisions).  
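A safety requirement at that level of concreteness can even be written down as an executable monitor. The sketch below is purely illustrative; the sign convention (positive angle = wheel turned left) and the function name are my assumptions, not taken from any standard:

```python
def steering_requirement_violated(left_angle_deg, right_angle_deg):
    """True if the left wheel steers left while the right wheel steers
    right, i.e. the wheels diverge (assumed convention: positive = left)."""
    return left_angle_deg > 0.0 and right_angle_deg < 0.0

# Normal cornering: both wheels turn the same way -- no violation.
assert not steering_requirement_violated(5.0, 5.0)
# Divergent command: left wheel left, right wheel right -- violation.
assert steering_requirement_violated(5.0, -5.0)
```

The point is only that once the requirement is stated at equipment level, it becomes something a monitor (or a test) can check mechanically.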

So either we have a top-down design process, driven by high-level requirements (HLR) for safety, and we prove that the HLR and LLR are satisfied and then assume the system is safe.  

Or we have the safety goals/objectives (don’t bump into another car), which are absolutely not HLR on sub-systems and even less LLR on equipment, and we design the car, operate it for (how many?!) kilometres or hours, hope the unwanted behaviours emerge, correct them one by one, and later bet that the safety goals are achieved.  

“Safe failure fraction” is probably the most horrible engineering concept ever invented. We started to get rid of it with the creation of Route 2H. I hope we will finish killing it as soon as possible.  

Bertrand Ricque

Program Manager

Optronics and Defence Division

Sights Program

Mob : +33 6 87 47 84 64

Tel : +33 1 58 11 96 82

Bertrand.ricque_at_xxxxxx  

From: Mike Ellims [mailto:michael.ellims_at_xxxxxx
Sent: Thursday, April 21, 2016 5:47 PM
To: RICQUE Bertrand (SAGEM DEFENSE SECURITE); 'Bielefield Safety List'
Cc: systemsafety-bounces_at_xxxxxx
Subject: RE: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?  

> This approach might be « safe ». I guess nobody has experience on this type of process.
 

Mobileye has been around since 1999, Google have been letting cars drive themselves since 2009; I suspect they have probably got some experience by now. You would certainly hope so!  

> Whatever, it seems to have no intersection with the concept of satisfying safety requirements.
 

That is possibly true at the top level for the complete system, where some sort of statistical criterion may be more appropriate. However, at the subsystem level I think that quite a number, or perhaps all, of the principles laid out in IEC 61508 and ISO 26262 probably carry across quite well, e.g. safety goals/requirements for system architecture attributes such as fail silent/fail active, the warning and degradation concept, etc. At lower levels, requirements on the design and code of the software for the inference engine are applicable. For hardware, concepts such as safe failure fraction, failure detection percentage, etc. would also be applicable.  
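For readers less familiar with the term: IEC 61508 defines the safe failure fraction as the proportion of the total failure rate made up of safe failures plus detected dangerous failures. A minimal sketch of the arithmetic (the failure rates below are invented example numbers, not from any real component):

```python
def safe_failure_fraction(lambda_s, lambda_dd, lambda_du):
    """SFF per IEC 61508: (safe + detected dangerous) / total failure rate."""
    total = lambda_s + lambda_dd + lambda_du
    return (lambda_s + lambda_dd) / total

# Hypothetical failure rates in failures/hour:
#   lambda_s  = safe failures
#   lambda_dd = dangerous detected failures
#   lambda_du = dangerous undetected failures
sff = safe_failure_fraction(lambda_s=400e-9, lambda_dd=500e-9, lambda_du=100e-9)
print(f"SFF = {sff:.0%}")  # prints "SFF = 90%"
```

Only the dangerous undetected term drags the fraction down, which is why diagnostic coverage matters so much to the figure.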

While having a dig around the interweb for information on Google’s self-driving cars and their validation process, I came across the following summary of driver disengagements, which gives a little insight into the process being used by Google; it may be of interest and stimulate further discussion.  

https://static.googleusercontent.com/media/www.google.com/en//selfdrivingcar/files/reports/report-annual-15.pdf  

From: RICQUE Bertrand (SAGEM DEFENSE SECURITE) [mailto:bertrand.ricque_at_xxxxxx
Sent: 21 April 2016 15:12
To: Mike Ellims; 'Bielefield Safety List'
Cc: systemsafety-bounces_at_xxxxxx
Subject: RE: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?  

This approach might be « safe ». I guess nobody has experience on this type of process.  

Whatever, it seems to have no intersection with the concept of satisfying safety requirements.  

Bertrand Ricque

Program Manager

Optronics and Defence Division

Sights Program

Mob : +33 6 87 47 84 64

Tel : +33 1 58 11 96 82

Bertrand.ricque_at_xxxxxx  

From: Mike Ellims [mailto:michael.ellims_at_xxxxxx
Sent: Thursday, April 21, 2016 3:35 PM
To: RICQUE Bertrand (SAGEM DEFENSE SECURITE); 'Bielefield Safety List'
Cc: systemsafety-bounces_at_xxxxxx
Subject: RE: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?  

Bertrand Ricque wrote  

> Safety critical software is not a question of time. It is a question of hunting bugs, in particular in uneasy access corners,

> using dedicated methodologies, techniques and tools.
 

That is true only up to a point. Doing a bit of digging, it seems that the majority of these systems are built on machine learning, so how you train them is going to be a large part of how “dependable” they are. Thus even if the code that implements the system’s neural network is perfect and totally bug-free (see below), the “dependability” of the final system rests on how good the training and testing sets are, which in turn depends on how many real-world situations you can accumulate and present to the system.  
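The dependence on training coverage can be made vivid with a deliberately toy model: a 1-nearest-neighbour “driver” whose features, labels and training set are all invented for illustration (nothing like a real system). Inside its training coverage it behaves sensibly; outside it, it simply does whatever the closest training example did:

```python
def nearest_neighbour_action(training_set, situation):
    """Pick the action of the closest training example (squared distance)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, action = min(training_set, key=lambda ex: sq_dist(ex[0], situation))
    return action

# Hypothetical features: (distance to obstacle in m, closing speed in m/s).
training_set = [
    ((50.0, 0.0), "cruise"),  # far obstacle, not closing
    ((10.0, 5.0), "brake"),   # near obstacle, closing fast
]
# Well inside training coverage: sensible behaviour.
assert nearest_neighbour_action(training_set, (45.0, 1.0)) == "cruise"
# A situation never seen in training (e.g. a wildly wrong sensor reading)
# still gets mapped onto *some* training example -- here "brake", by an
# accident of geometry rather than by design.
assert nearest_neighbour_action(training_set, (12.0, -20.0)) == "brake"
```

No amount of verifying the code above tells you anything about behaviour in situations the training set never covered; that is exactly the gap Mike is pointing at.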

Hence Google’s approach of running lots of cars around to gather as much information as possible about road configurations, the behaviour of other vehicles, and issues (e.g. road signs obscured by bushes), which they can then combine with their humongous database of all the world’s roads.  

Tesla appears to use a vision system from Mobileye, whose website states of their planning systems:  

<snip> First, we apply supervised learning for predicting the near future based on the present. We require that the predictor will be differentiable with respect to the representation of the present. Second, we model a full trajectory of the agent using a recurrent neural network, where unexplained factors are modeled as (additive) input nodes. <snip>  

From: systemsafety [mailto:systemsafety-bounces_at_xxxxxx
Sent: 21 April 2016 13:37
To: Bielefield Safety List
Cc: systemsafety-bounces_at_xxxxxx
Subject: Re: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?  

Safety critical software is not a question of time. It is a question of hunting bugs, in particular in uneasy access corners, using dedicated methodologies, techniques and tools.  

Say that your software forgot to take into account the fact that century years are an exception to the every-4-years leap-year rule: you will never find the bug, whatever the number of kilometres, cars and hours you use the system for between 2001 and 2099…  
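The leap-year example is easy to make concrete: the naive every-four-years rule and the full Gregorian rule agree for every year from 2001 to 2099 and first diverge in 2100, so no amount of in-service exposure this century would flush the bug out:

```python
def is_leap_gregorian(year):
    # Full rule: divisible by 4, except centuries, except every 400 years.
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def is_leap_naive(year):
    # The buggy simplification: a leap year every 4 years, no exceptions.
    return year % 4 == 0

# The two rules agree over the whole of 2001-2099 ...
assert all(is_leap_gregorian(y) == is_leap_naive(y) for y in range(2001, 2100))
# ... and first diverge at 2100: naive says leap, Gregorian says not.
assert is_leap_naive(2100) and not is_leap_gregorian(2100)
```

Statistical road testing can only ever sample the operating conditions that actually occur during the test; a latent fault whose triggering condition lies outside that window is invisible to it by construction.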

And whatever the good performance of your system during those 99 years, there will be absolutely zero excuse for the consequent accidents…  

A good way to challenge the designers of such systems would be to make their children responsible for the damages …  

Bertrand Ricque

Program Manager

Optronics and Defence Division

Sights Program

Mob : +33 6 87 47 84 64

Tel : +33 1 58 11 96 82

Bertrand.ricque_at_xxxxxx  

From: systemsafety [mailto:systemsafety-bounces_at_xxxxxx Sent: Thursday, April 21, 2016 2:27 PM
To: Matthew Squair
Cc: Bielefield Safety List
Subject: Re: [SystemSafety] How Many Miles of Driving Would It Take to Demonstrate Autonomous Vehicle Reliability?  

This report has just come to my attention. It is stats-based and an interesting read, as it addresses most of the points made in this thread in one way or another:  

http://www.rand.org/pubs/research_reports/RR1478.html

Nick Tudor

Tudor Associates Ltd

Mobile: +44(0)7412 074654

www.tudorassoc.com


77 Barnards Green Road

Malvern

Worcestershire

WR14 3LR
Company No. 07642673

VAT No:116495996  

www.aeronautique-associates.com  

On 18 April 2016 at 22:01, Matthew Squair <mattsquair_at_xxxxxx

More that I don't see the value of multi million trip test programs that others might. ;)

Matthew Squair  

MIEAust, CPEng

Mob: +61 488770655

Email: Mattsquair_at_xxxxxx

Web: http://criticaluncertainties.com

On 18 Apr 2016, at 10:13 PM, Peter Bernard Ladkin <ladkin_at_xxxxxx

On 2016-04-18 14:03 , Matthew Squair wrote:

But I'd personally be comfortable after a couple of months of realistic road trials.

Hey, folks, we gotta volunteer!......... How you gonna line all those companies up, Matthew? :-)

PBL Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany Je suis Charlie
Tel+msg +49 (0)521 880 7319 www.rvs.uni-bielefeld.de



The System Safety Mailing List
systemsafety_at_xxxxxx  

#
" This e-mail and any attached documents may contain confidential or proprietary information and may be subject to export control laws and regulations. If you are not the intended recipient, you are notified that any dissemination, copying of this e-mail and any attachments thereto or use of their contents by any means whatsoever is strictly prohibited. Unauthorized export or re-export is prohibited. If you have received this e-mail in error, please advise the sender immediately and delete this e-mail and all attached documents from your computer system." #  







Received on Fri Apr 22 2016 - 11:50:16 CEST

This archive was generated by hypermail 2.3.0 : Tue Jun 04 2019 - 21:17:08 CEST