Re: [SystemSafety] OpenSSL Bug

From: Patrick Graydon < >
Date: Fri, 11 Apr 2014 17:36:50 +0200

On 11 Apr 2014, at 16:38, Mike Rothon <mike.rothon_at_xxxxxx> wrote:

> 1) How did we arrive at a situation where a large proportion of seemingly mission / financially critical infrastructure relies on software whose licence clearly states "This software is provided by the OpenSSL project 'as is' and any expressed or implied warranties, including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose are disclaimed."?

I don't know the history about which you ask. But it seems inevitable to me that gratis software would not be warranted fit for any purpose: how could a loose collection of unpaid volunteer developers possibly underwrite such a warranty?

I don't have too much of a problem with gratis software being offered as-is. I doubt most people are capable of judging the fitness of software. (Even if they are experts, how much can one person check?) But I don't see why such software shouldn't be sold by a vendor who charges for the value-add of verifying, validating, and warranting it.

The real questions, I think, are (a) why do we put up with such disclaimers on software that is part of a commercial offering, and (b) what can be done to make vendors take responsibility for their software? I realise that each of us as individuals has Hobson's choice with respect to (a), but if enough people demanded it, the situation might be different. Chris's paper explores some of the options for addressing (b).

> 2) Is it implicit that FOSS is less secure than proprietary software because exploits can be found by both analysis and experimentation rather than just experimentation? Or will this start a gold rush analysis of FOSS by security organisations resulting in security levels that are close to or better than proprietary software?

There are people who claim the opposite, actually: the thinking is that more eyeballs make software more secure. I've heard rhetoric from both sides, but if there is solid empirical evidence either way, I am not aware of it.

We've discussed programming languages, but what this episode makes me wonder more about is basic engineering in the form of architecture, verification, and validation.

I've read a couple of articles about this that mentioned the idea that this particular code wasn't considered critical because the heartbeat function has no particular security implications. (Sorry, the citations escape me at the moment.) That worries me because it displays a misunderstanding of partitioning and isolation, a topic that DO-178B addressed two decades ago.
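To make the isolation point concrete, here is a minimal sketch of the flawed pattern as it has been widely described, not the actual OpenSSL source; the simplified message layout and the function name build_heartbeat_response are mine. The handler trusts a length field supplied by the peer, and because it runs in the same address space as everything else in the process, the over-read can expose keys and session data in adjacent memory:

#include <stdlib.h>
#include <string.h>

/* Simplified layout assumed here: req[0..1] declare the payload
   length in network byte order, req[2..] carry the payload. */
unsigned char *build_heartbeat_response(const unsigned char *req,
                                        size_t req_len)
{
    unsigned int payload = ((unsigned int)req[0] << 8) | req[1];
    unsigned char *resp;

    /* THE MISSING CHECK: a robust handler must confirm that the
       declared length fits within the bytes actually received:

           if (2 + (size_t)payload > req_len)
               return NULL;
    */
    (void)req_len;  /* unused precisely because the check is absent */

    resp = malloc(3 + payload);
    if (resp == NULL)
        return NULL;
    resp[0] = 2;       /* response type */
    resp[1] = req[0];  /* echo the declared length */
    resp[2] = req[1];

    /* BUG: copies 'payload' bytes no matter how few the peer sent,
       reading past the request buffer and leaking adjacent memory. */
    memcpy(resp + 3, req + 2, payload);
    return resp;
}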

I also wonder about how this code was tested before being put into service. Static analysis might have been a good idea, but shouldn't basic robustness testing as per DO-178B §6.4.2.2's two-decade-old advice have caught this? I suppose that one could submit a heartbeat length greater than the actual request data sent, get back a response longer than what was sent, and not think that this is a problem. But that seems a bit doubtful to me.
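For illustration, here is what such a robustness test might look like against the hypothetical handler sketched above; the test vector and the expected behaviour are my assumptions about what §6.4.2.2-style testing would demand, not an actual DO-178B test case:

#include <assert.h>
#include <stddef.h>
#include <stdio.h>

/* The hypothetical handler from the sketch above. */
unsigned char *build_heartbeat_response(const unsigned char *req,
                                        size_t req_len);

int main(void)
{
    /* Declared payload length 0x4000 (16384 bytes), but only three
       bytes of payload actually follow the two-byte length field. */
    unsigned char malformed[5] = { 0x40, 0x00, 'a', 'b', 'c' };

    unsigned char *resp =
        build_heartbeat_response(malformed, sizeof malformed);

    /* The requirement: a mismatched length must be rejected. Run
       against the flawed handler, this assertion fails (and a memory
       sanitizer flags the out-of-bounds read), which is exactly the
       fault such a test exists to expose. */
    assert(resp == NULL);
    puts("robustness check passed");
    return 0;
}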

What kind of engineering did the people who developed this code and the people who put it into service do?!?

> Just in case anyone missed the news, the original source code for MS-DOS and Word for Windows 1.1a is available online from the Computer History Museum (

Might be worth revisiting the bad old days of LPARAMs, HANDLEs, LocalAlloc, GetProcAddress, GetProfileString, InvalidateRect, CreateWindow, MessageBox, and thousands-of-lines-long WndProc functions. :)

-- Patrick

The System Safety Mailing List
systemsafety_at_xxxxxx
Received on Fri Apr 11 2014 - 17:37:09 CEST