Re: [SystemSafety] systemsafety Digest, Vol 44, Issue 26

From: Matthew Squair < >
Date: Sat, 19 Mar 2016 11:01:37 +1100


The other problem we face is that, due to the usually small sample sizes in trials of 'process x' or 'tool y', you really can't trust the good results reported. Like league tables for schools, the small schools always figure at the top, and the bottom, of the rankings. That's leaving aside methodology problems. A Cochrane-style meta-study would be useful.
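To make that league-table effect concrete, here's a minimal simulation sketch in C. The numbers (20 schools of 10 pupils, 20 of 1000, and an identical 70% true pass rate everywhere) are invented purely for illustration:

#include <stdio.h>
#include <stdlib.h>

typedef struct { int size; double rate; } School;

/* sort descending by observed pass rate */
static int by_rate(const void *a, const void *b)
{
    double d = ((const School *)b)->rate - ((const School *)a)->rate;
    return (d > 0) - (d < 0);
}

int main(void)
{
    enum { N = 40 };
    School s[N];
    const double p = 0.70;   /* identical true pass rate for every school */
    srand(1);                /* fixed seed so the run is repeatable */

    for (int i = 0; i < N; i++) {
        s[i].size = (i < N / 2) ? 10 : 1000;   /* small vs large schools */
        int passes = 0;
        for (int j = 0; j < s[i].size; j++)
            passes += ((double)rand() / RAND_MAX) < p;
        s[i].rate = (double)passes / s[i].size;
    }

    qsort(s, N, sizeof s[0], by_rate);
    puts("rank  size  observed rate");
    for (int i = 0; i < N; i++)
        if (i < 5 || i >= N - 5)   /* print both ends of the table */
            printf("%4d  %5d  %.2f\n", i + 1, s[i].size, s[i].rate);
    return 0;
}

Run it a few times with different seeds and the top and bottom ranks are almost always the size-10 schools, purely because a rate estimated from 10 pupils is far noisier than one estimated from 1000. The same caution applies to a glowing result from a handful of projects that used 'tool y'.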

Matthew Squair

MIEAust, CPEng
Mob: +61 488770655
Email: Mattsquair_at_xxxxxx
Web: http://criticaluncertainties.com

> On 19 Mar 2016, at 3:45 AM, Roderick Chapman <roderick.chapman_at_xxxxxx> wrote:

>> On 18/03/2016 11:00, systemsafety-request_at_xxxxxx wrote:
>> The only issue with this approach is the cost and the technical
>> complexity. A very few organisations are ready to pay that cost.

> David,
> I'm not sure what you mean by those points.
>
> What cost? Of tools or training? Do you have any data
> to support that? What reduction in defect density (and thus
> downstream cost saving in other verification activities and re-work)
> is required to justify adoption of a more formal approach?
>
> My experience is that almost no organisations have such data,
> and therefore can't make the RoI case for changing their
> ways. The default behaviour is "do the same as last time, but promise
> to be more careful" ... even if "last time" was a horrible mess.
>
> What's "technical complexity" mean? Of what?
>
> I imagine Frama-C suffers from all the same adoption hurdles as
> SPARK, possibly worse in some areas. For example, Frama-C requires
> _more_ user-added contracts than SPARK to make up for the deficiencies
> in C's underlying type system (see the sketch below this quote).
>
> - Rod
>
> _______________________________________________
> The System Safety Mailing List
> systemsafety_at_xxxxxx
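
To illustrate Rod's contracts point: in SPARK a declaration such as "subtype Percent is Integer range 0 .. 100;" states the range once and it travels with the type, whereas a C typedef constrains nothing, so the range has to be restated as an ACSL contract at every interface you want Frama-C to analyse. A minimal sketch (percent_max is a made-up example function):

/* a C typedef adds no range constraint of its own */
typedef int percent;

/*@ requires 0 <= a <= 100;
    requires 0 <= b <= 100;
    ensures  0 <= \result <= 100;
*/
percent percent_max(percent a, percent b)
{
    return (a > b) ? a : b;
}

The SPARK equivalent needs no annotations on the subprogram at all, because the subtype declaration already carries the range; in C every such invariant becomes another line of contract for the user to write and maintain.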


The System Safety Mailing List
systemsafety_at_xxxxxx