Re: [SystemSafety] Scalability of Formal Methods [was: Logic]

From: Martyn Thomas < >
Date: Thu, 27 Feb 2014 10:27:31 +0000


On 27/02/2014 02:00, Patrick Graydon wrote:
> I think showing that the software is correct according to some abstract semantics is a useful goal … provided that somebody covers the other gaps. This begins with first being clear that those gaps exist. Since I have met several people who seemed to think that software proof was going to tell them what the real chips would do in all circumstances, I think a little reminder of what it does mean is useful from time to time.

Patrick

I agree with all this (except that I have never met anyone who "seemed to think that software proof was going to tell them what the real chips would do in all circumstances"). Who are these people who keep appearing anecdotally on mailing lists like this? Can we name them and see if we can stop them clouding the issues with such nonsense?

I understand you to be saying that all the gaps in assurance need to be identified and filled as far as possible, and that the amount of assurance that has been achieved for each gap should be clearly stated. I agree completely. But I'm intrigued by your statement:

"But safety depends on knowing what will happen in the real world on real chips. And if you canít prove that, then you need to bring in a different form of evidence to complete the picture. Some sort of inference based on observation. Testing, in other words."

I believe that testing a system is essential. But I'd like to understand what you think passing a set of tests tells you about the safety of a system. To put it another way: what is it that you want to find out, and that can be discovered by constructing and running suitable tests? I ask this with a genuine desire to explore and learn.

Regards

Martyn


