Re: [SystemSafety] Static Analysis

From: Patrick Graydon < >
Date: Wed, 26 Feb 2014 10:32:57 +0100

I appreciate that some simple code guidelines* might have made this error less likely and that some static analyses might have spotted it. I also appreciate the benefits that static analysis has over testing. But I think it is of equal or greater concern that this was not caught in testing. This is not the sort of defect that is triggered only by an obscure or unlikely combination of inputs: the relevant partition of the input space is exactly one that should appear in a requirements statement for the functionality in question. Any test case where everything checks out except that the signature doesn’t match should have shown this up (barring a deliberately or accidentally flawed oracle, a forged test report, etc.). Moreover, while (non-exhaustive) testing can’t reveal all bugs, any unit test plan that achieved coverage of all software security requirements and statement coverage of the code must have found this one (again barring a deliberately or accidentally incorrect oracle, forged test report, etc.): there is no way to get coverage of lines 63–76 without a test case that should show this up. Condition and MC/DC coverage should likewise have shown this up.

This suggests various possibilities:

1)  Apple doesn’t routinely define software security requirements (SSRs) and trace these to test cases etc.
2)  Apple’s testing of critical security functionality achieves grossly inadequate requirements coverage and/or structural coverage
3)  A single individual could either accidentally or deliberately introduce both the bug and a corresponding defect in the test oracle (or test report, etc.)
4)  The bug was an accident but a single individual tampered with the testing to hide it
5)  Apple defines SSRs, traces these to test plans, tests to appropriate coverage levels, etc., but no one defined a security requirement for this functionality
6)  Apple defines SSRs, traces these to test plans, tests to appropriate coverage levels, etc., but there were simultaneous accidental defects introduced in both the code and test plan by two different people

(1), (2), and (3) seem like violations of any notion of reasonable practice commensurate with the consequences of the bug, but we are living in a world where software makers routinely disclaim fitness for any purpose whatsoever. (4) is a good reason for having tests replicated by an independent party (which the Common Criteria call for at high EALs). I wonder why the US DoD didn’t insist on independent testing of security-critical stuff like this (or if they did, how they missed it). (5) would seem to suggest an inadequate security analysis practice, but I am hardly an expert on those. (6) would require an improbable coincidence of two independent accidents that happen to mask each other.

I’m a bit skeptical of the claim of the ‘former Apple employee who worked on Mac OSX’ that ‘Apple will be able to identify who did the code checkin which created the bug’. I’ve never worked for any part of Apple, but in my experience in commercial software development, developers’ login credentials aren’t very well guarded (e.g., I’ve seen passwords written under keyboards). Moreover, I’ve worked at several companies where senior developers have root access to the machines hosting the CM archives. Unless someone at Apple was thinking very carefully about insider threats, a malicious actor might have covered his or her tracks, e.g. by signing into SVN as a colleague, using a colleague’s unlocked console while they were at the toilet, altering the log files, etc.

— Patrick

The System Safety Mailing List
systemsafety_at_xxxxxx Received on Wed Feb 26 2014 - 10:33:15 CET

This archive was generated by hypermail 2.3.0 : Tue Jun 04 2019 - 21:17:06 CEST