Re: [SystemSafety] Security assurance [was: Analyses of root causes.]

From: Peter Bernard Ladkin < >
Date: Mon, 29 Jun 2015 10:28:26 +0200


On 2015-06-27 12:53 , Martyn Thomas wrote:
> ..... what I'd really like to find are papers that go into details at the level of "buffer
> overflow", "untrapped exception", "SQL injection" etc - where the failure to do acceptable
> software engineering is evident.

Two weeks ago, I attended a one-day conference in Krefeld of researchers and companies working in computer security in my state of North Rhine-Westphalia (NRW, one of the largest states in Germany, with about 17m people, including the Ruhr industrial conurbation as well as Cologne and the former capital, Bonn).

There were a few academics as well as the NRW Data Protection Officer talking about the latest threats, and companies talking about their future plans and products in computer security.

Wincor Nixdorf is one of the largest suppliers of ATMs, headquartered in Paderborn, some 50km to the south of Bielefeld. The lecturer from Wincor Nixdorf described the complicated electronic transactions that go on nowadays: ATMs as well as individual computers and point-of-sale card readers performing real-time transactions on demand-deposit accounts, and all the communications involved and (implicitly) the protocol execution and cross-checking and so on. Lots of boxes and arrows/edges on one slide! (On the other hand, it did all fit on one slide...) Too complicated, not fit for the future! So, what is the future? Wincor Nixdorf is working on .... wait for it ..... the Cloud! All those pesky arrows go away and a fluffy circle appears! What an improvement (at least for slide designers).

And this to security professionals. I suppressed my feeling of being insulted. Two seconds' thought says that if the future system is to enable all the transactions of the present system, then the transaction complexity cannot go down. I suspect what Wincor Nixdorf want to do is to virtualise the bank back-end systems, by defining some common interface for any demand-deposit real-time-transaction device, whether owned by the bank, or a retail business, or an individual bank customer. That would make sense. Unfortunately, there was no information about what that interface should look like, or even, as far as I can remember, mention of the word. Maybe it's a company secret. But I hope not for long, if the unwashed public, such as myself, are going to trust it. There needs to be a public specification and a lot of hammering on it from motivated nerds.

I was up next with my talk, which was on the theme that, if you want to operate reasonably securely, you first need to understand your system and its behavior quite well.

As it happens, I do have such a tale about Wincor Nixdorf kit, although - I hastened to add - probably not a bit that was Wincor Nixdorf's responsibility. I hadn't intended to talk about it; I improvised.

About a decade ago, there was an alert about Microsoft Outlook - a vulnerability in Mail allowed a remote communicator to assume local administrator privileges. A week or so later, I visited a branch of my bank, to see a Wincor Nixdorf ATM displaying a Windows login screen with a *Mail* icon. I couldn't believe it. Here was an ATM allowing an unlimited remote exploit by anyone with access to a part of the bank transaction network, using a piece of vulnerable SW, namely a mail program, that had no business being on an ATM anyway. I pointed out that it really doesn't matter how your architecture looks or how wonderful it is if that is the kind of ... um, functionality .... you are perpetrating in your implementation. I asked: before we go about introducing wonderful new functionality, are we really sure we understand the functionality we have at present? This incident showed: ten years ago, obviously not. Have things changed? (To say again: I have no evidence that Wincor Nixdorf was responsible for the SW on this machine; indeed, I suspect not. But it shows what can happen when you relinquish control over part of your kit. Your label's on the kit anyway......)

I could have stopped there. But I did want to show some of my slides.

Next up was a small firm, also from Paderborn, which shall remain nameless. One of their products is a separation kernel for a smartphone, which separates the operation of the user's personal SW from the operation of the user's company's SW. So you can use the same device for personal things and for confidential business operations.

Now, separation kernels are about three decades old. Have any of them been formally verified apart from (I seem to recall) John Rushby's for Unix from the early '80s? Probably some used by the US military, and maybe other secret machines. But imagine doing it for Android, which was what was being proposed. Suppose you perform a formal verification. Even supposing there is a specification of the OS, how are you going to keep it up to date when the OS (along with its specification, one hopes) changes every couple of weeks? I talked to the presenter afterwards: why are they doing this? (Because we think there is demand.) Are they using CbyC methods? (What are those?) If not, how do they hope to ensure the SW works as wished? (We'll take very great care over it.)

I said, look, if I am running a business with confidential information flow over the cell-phone network, I issue a locked-down company phone to all my employees with need, all of whom understand that the phone is to be used for company business exclusively. Even if I were to believe your SW is vulnerability-free, if it's going to be installed on employees' private phones, how do I ensure that your SW version is always kept up to date with the OS version installed (the usual version-control-inexpert-user issue)? (Well, it's an issue, but we think there's a market.) OK, so how will you ensure you're part of the security solution, rather than part of the security problem? (We're pretty sure we can get it right.)

OK. Twenty years ago, 80% of the vulnerabilities being published by CERT (back in the days when CERT did it rather than the Mitre CVE database) were buffer overflows in WWW SW. If all that SW had actually done what its designers and implementors were "pretty sure" it would do, then none of those vulnerabilities would have been there and security would have been much, much easier to assure. What has changed? We now have industrially-feasible methods of ensuring objective properties of SW. But few people are using them. They are apparently still relying on being "pretty sure" they can "get it right."

It follows that nothing much has changed. SW like this is still contributing to the problem, because mostly its properties are not rigorously assured. Not yours, of course - I'm sure you'll "get it right", as you say; but I'd really recommend you use CbyC methods, and I have no idea how your customers are going to solve the version-control issue, but I'm sure you'll have some suggestions how they may do so. Time to catch my train back; nice talking to you.

PBL Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany Je suis Charlie
Tel+msg +49 (0)521 880 7319


The System Safety Mailing List
systemsafety_at_xxxxxx Received on Mon Jun 29 2015 - 10:28:33 CEST
