Re: [SystemSafety] Scalability of Formal Methods [was: Logic]

From: Les Chambers < >
Date: Fri, 28 Feb 2014 10:49:42 +1000


All righteous words. Couldn't agree more. ... Then you look at what people actually do in the real world and the weeping continues.
I am currently writing up a case study on healthcare.gov, the Obamacare web-based transaction processing system that failed comprehensively last October. It's an excellent vehicle for teaching software engineering because everything they did was the opposite of software engineering. For your collective entertainment, here is an excerpt from testimony by a senior vice president of CGI, the prime contractor:
(trick question: does anyone see anything wrong with this statement?)

"Unfortunately, in systems this complex with so many concurrent users, it is not unusual to discover problems that need to be addressed once the software goes into a live production environment. This is true regardless of the level of formal end-to-end performance testing -- no amount of testing within reasonable time limits can adequately replicate a live environment of this nature. This perspective informs our remarks about what happened on October 1 and in the days that followed. Upon go-live on October 1, the emphasis shifted from software development to optimizing for FFM performance and rapidly addressing issues identified through user transactions. The first set of issues for users dealt with the enterprise identity management (or EIDM) function provided by another contractor, which allows users to create secure accounts. The EIDM serves as the "front door" to the Federal Exchange that a user must pass through before entering the FFM. Unfortunately, the EIDM created a bottleneck that prevented the vast majority of users from accessing the FFM."

Source:
CGI Federal Inc., Testimony of Cheryl Campbell, Senior Vice President, CGI Federal Inc., prepared for the House Committee on Energy and Commerce. http://www.projectauditors.com/Papers/Troubled_Projects/HHRG-113-IF00-Wstate-CambellC-20131024.pdf
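
To make the "front door" failure mode concrete, here is a small sketch of my own (not CGI's architecture, and every number in it is invented): a serialized identity service that each user must clear before reaching the FFM. When the front door's throughput is a fraction of the arrival rate, almost nobody reaches the exchange, no matter how much capacity sits behind it.

# Hypothetical sketch in Python; the arrival rate and capacities are invented.
ARRIVALS_PER_SEC = 2000      # assumed launch-day demand
EIDM_CAPACITY_PER_SEC = 50   # assumed account-creation throughput at the "front door"
FFM_CAPACITY_PER_SEC = 1000  # assumed downstream exchange capacity
SECONDS = 60

waiting = 0
reached_ffm = 0
for _ in range(SECONDS):
    waiting += ARRIVALS_PER_SEC
    through_front_door = min(waiting, EIDM_CAPACITY_PER_SEC)
    waiting -= through_front_door
    # The FFM never sees more than the front door lets through.
    reached_ffm += min(through_front_door, FFM_CAPACITY_PER_SEC)

total = ARRIVALS_PER_SEC * SECONDS
print(f"{reached_ffm}/{total} users reached the FFM "
      f"({100.0 * reached_ffm / total:.1f}%); {waiting} still queued at the EIDM")

The point is only that end-to-end throughput is set by the weakest gate in the chain, which is exactly the failure mode the testimony describes, and exactly the kind of thing a capacity model, let alone a load test, should have flagged before go-live.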

-----Original Message-----
From: systemsafety-bounces_at_xxxxxx [mailto:systemsafety-bounces_at_xxxxxx] On Behalf Of Martyn Thomas
Sent: Friday, February 28, 2014 2:55 AM
To: systemsafety_at_xxxxxx
Subject: Re: [SystemSafety] Scalability of Formal Methods [was: Logic]

I agree with David.

In my opinion, an engineer wants their system to be fit for purpose and chooses methods, tools and components that are expected to achieve fitness for purpose. It's poor engineering to have a system fail in testing, partly because that puts the budget and schedule at risk but mainly because it reveals that the chosen methods, tools or components have not delivered a system of the required quality, and that raises questions about the quality of the development processes.

Finding a fault in testing also strongly suggests the presence of several other faults that you have not found, because of the well-known limitations of testing.

So, as David says, the testing should be designed to disprove the claim that the system is fit for purpose. And if the tests find a fault in the system, they have also found a fault in the chosen methods, tools or components, and that is where the correction should be made first. If you just fix the fault without fixing the process that should have prevented the error from getting through to testing, then you may compromise the design or introduce further errors, and you have missed a chance to make important improvements to your engineering.
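
As an aside, "design the tests to disprove the claim" is the same stance that drives property-based or randomised testing. A minimal sketch of the idea in Python follows; the function, the property, the input ranges and the trial count are all invented for illustration, and the test's only job is to hunt for a counterexample to a stated property rather than to confirm a few expected outputs.

import random

def premium_subsidy(income, premium):
    """Toy function under test (invented for illustration)."""
    # Claimed behaviour: the subsidy is never negative and never exceeds the premium.
    return max(0.0, min(premium, premium - income * 0.095 / 12))

def try_to_disprove_fitness(trials=10_000, seed=1):
    """Search for a counterexample to the claimed property,
    rather than confirming a handful of hand-picked cases."""
    rng = random.Random(seed)
    for _ in range(trials):
        income = rng.uniform(0, 250_000)
        premium = rng.uniform(0, 2_000)
        subsidy = premium_subsidy(income, premium)
        # The claim under attack: 0 <= subsidy <= premium, always.
        assert 0.0 <= subsidy <= premium, (
            f"counterexample: income={income:.2f}, premium={premium:.2f}, "
            f"subsidy={subsidy:.2f}")

if __name__ == "__main__":
    try_to_disprove_fitness()
    print("no counterexample found in 10,000 trials; "
          "that raises confidence but proves nothing")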

When, after trying your hardest, you can't find any faults, it may mean that your tests are not good enough, but it should also increase your justified confidence in your methods, tools and components and, ultimately, that will be the root of any justifiable confidence in the system.

And, of course, your development process has improved - so the next system will be quicker, cheaper and better!

Martyn

On 27/02/2014 15:39, David Crocker wrote:
> I've always considered that the role of testing is fundamentally
> different when software is developed using a correct-by-construction
> approach that formally proves that the code satisfies the software
> requirements.
>
> This approach uses a chain of tools (formal verification tools,
> compiler, linker, etc.) and hardware, each of which we believe produces
> output that has a defined relationship to the input. For example, the
> formal tool provides proof that the source code meets the requirements;
> the compiler produces object code that faithfully implements the
> semantics of the source code in the target instruction set; and the
> hardware faithfully implements that instruction set. We further believe
> that we have combined these tools and hardware in a process such that if
> they all function correctly, the target system must meet the requirements.
>
> The purpose of testing is to challenge those beliefs and ultimately
> establish confidence in them. If even one test fails, then there is a
> problem in one of the tools, or the hardware, or the process. So when a
> test fails, we don't just treat it as a bug that needs fixing, we must
> establish which tool or hardware or part of the process is at fault, and
> correct it so that it can't happen again.
>
> Of course, in practice some sorts of test may also expose deficiencies
> in the original specification.
>
> David Crocker, Escher Technologies Ltd.
> http://www.eschertech.com
> Tel. +44 (0)20 8144 3265 or +44 (0)7977 211486
>
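
To make that "chain of beliefs" concrete, here is a minimal sketch of my own in Python (not Escher's tooling, and the requirement is invented): the requirement is written down as an executable contract, a correct-by-construction tool would be asked to prove the implementation against it statically, and the sample runs exist only to challenge the belief that the rest of the chain, compiler or interpreter plus hardware, preserved that contract, not to establish correctness in the first place.

def clamp(value: int, low: int, high: int) -> int:
    """Invented requirement: return a result r with low <= r <= high,
    and r == value whenever value is already within the range."""
    assert low <= high, "precondition: the range must be well-formed"
    result = min(max(value, low), high)
    # Postconditions, stated as part of the specification:
    assert low <= result <= high
    assert not (low <= value <= high) or result == value
    return result

if __name__ == "__main__":
    # These runs do not establish correctness; the proof would do that.
    # They challenge the belief that the executed code still honours the contract.
    for v, lo, hi in [(-5, 0, 10), (5, 0, 10), (50, 0, 10), (0, 0, 0)]:
        clamp(v, lo, hi)
    print("contract held on all sampled executions")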



The System Safety Mailing List
systemsafety_at_xxxxxx
