Re: [SystemSafety] Fwd: Contextualizing & Confirmation Bias

From: Peter Bernard Ladkin < >
Date: Wed, 05 Feb 2014 20:48:47 +0100


I worried when I replied to Derek that I was opening a can of worms. So let me first avoid answering Derek's and Nick's questions and lay some groundwork, under the caveat that I am not sure how much of this applies to system safety, but with the conviction (bias?) that a lot less of it applies than people have suggested.

So, let me apologise in advance to those who think this is peripheral. Since it has been cited as a reason why "safety cases" in the broad sense are to be trusted less, I do think that some discussion is not untoward.

The term "confirmation bias" originally refers to the following phenomenon. (1) You believe A. (2) You act as if A. (3) You experience the world as if A is true. (4) You end up believing A more than you did at the beginning simply because of that.

Now, that's a fairly well-circumscribed phenomenon, and it is a lot more limited than what has been proposed here. It doesn't apply to safety cases in the way that has been suggested; indeed, part of it seems to be an entirely desirable phenomenon: if (3) we experience the world as if the safety case is true, then we have achieved the goal the system development was intended to achieve! (4) is utterly unimportant! So even the question of confirmation bias just doesn't seem to arise.

The vicar lives just up the hill from me. The church is just downhill from me. Just before the church, there is a main street, in the form of a big curve, which is sparsely but regularly used, with a 30 km/h limit with which few drivers comply. Sunday service starts at 10am. Often, he comes barrelling down the road on his kid's mountain bike at 2 minutes to 10 - the sermon has taken just a little longer to finish than he'd hoped. I know my bicycles - his brakes aren't that good. This is the only time - I swear - that I pray, briefly. For I am not a Christian. But I like him a lot and he does wonderful work.

He's still alive; he still does it. I mention it to him every so often.

The point is that there has never been a car exactly there, going too fast, on a Sunday morning at exactly the wrong time. But I think there could be, so I call his behaviour confirmation bias.

Now, for all I know, he might think he has "higher protection". If so, he might call the results "confirmation". I call them "confirmation bias" because I don't think he does. Whether you judge it to be a bias or not depends on the assumptions you make about the world.

Millikan was acting as if it were true that there is just one number (in suitable units) which is the charge on the electron, and he was out to find it. An appropriate charge of "confirmation bias" would attach to the phenomenon that his results suggest there is indeed just one number/unit which is The Charge Of The Electron. But I don't call it a bias, and neither should Derek or Nick, because we all believe it (I take it). And that is not what either Derek or Nick was suggesting was improper.

Similarly, consider my continually recalculating "2+2". I mostly get the answer 4, and when I don't, I look for what got in the way. That is not confirmation bias. 2+2 is indeed 4; there are no two ways about it; it's not an assumption I make about the way the world might be. I can prove it from Peano arithmetic, and if you say (as some psychologists do), "oh, that's just because you're used to thinking that way", I would reply "anyone who doesn't get the answer 4 either doesn't understand what 2 is, or doesn't understand the operation +". Here, of course, we open the whole Kripke-on-Wittgenstein-on-Rules can of worms. So let me not.
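To spell out the steps for anyone who wants them (a sketch, using the usual Peano definitions: S is the successor function, 2 = S(S(0)), 3 = S(2), 4 = S(3), and + is defined by the recursion equations a + 0 = a and a + S(b) = S(a + b)):

    2 + 2 = 2 + S(S(0))    [definition of 2]
          = S(2 + S(0))    [a + S(b) = S(a + b)]
          = S(S(2 + 0))    [a + S(b) = S(a + b)]
          = S(S(2))        [a + 0 = a]
          = S(3)           [definition of 3]
          = 4              [definition of 4]

Every step is forced by a definition; no assumption about the way the world might be enters anywhere.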

Given the specific explanation of the term above, "confirmation bias" only applies to an argument if someone acts as if the argument were true. Which, when the argument is a safety case and the safety case has been accepted, people do. All people. But I propose it is only helpful to call it a bias if the argument doesn't accurately represent the world as it is (yes, lots of metaphysical assumptions here). Indeed, there is a considerable worry that most safety cases do not present a very accurate representation of the world as it is, especially when we are proposing that events round about the systems have objective frequencies. And when we accept one for reasons peripheral to its own partial validity, then it is appropriate to call that a bias. And when we accept it because, well, that's what's down on paper, and there's a lot of it, and boy that was a huge amount of work by dedicated and honest people, rather than on its objective merit of establishing what it claims, then that is a bias worthy of a name. I wouldn't pick the name "confirmation bias", because that's already taken, but I can understand the temptation.

What Nancy was talking about is the following (possible) phenomenon. Suppose someone presents you with an argument to a conclusion. The argument is not a rigorous deduction, and it makes lots of assumptions, implicit and explicit. Do you believe it, or are you sceptical? Well, when that argument runs to hundreds of pages of all kinds of varied considerations, and you don't understand half of it because you are not as expert in the math as those who wrote it, and the conclusion is that "this system is acceptably safe in operation", then belief seems to be close to a matter of faith. This is especially so since critics of such arguments are almost inevitably right. But some systems nevertheless do turn out to be acceptably safe in operation, so the sceptical inference from "this argument for acceptably-safe operation is full of holes" to "this system is not acceptably safe in operation" is not invariably correct; indeed, it is often wrong.

The phenomenon to which Nancy was drawing attention is that, in a safety-case regime, the conclusion "this system is acceptably safe in operation" is somehow more likely to be erroneously drawn than in a regime where specific prophylactic measures are applied during development and ticked off as fulfilled. In the US. In the oil industry.

Well, OK, maybe that's so. But I am not really OK with using the term "confirmation bias" to suggest a reason for that. I'm a "confirmation bias fundamentalist": I think you should use the term in its original sense.

On 2/5/14 6:40 PM, Derek M Jones wrote:
> So it's not confirmation bias if, sometime after doing the
> analysis, you are shown to be correct?

I didn't say that. But I would say that an attribution of confirmation bias is much harder to sustain if you succeeded in reaching the truth. Indeed, I'd say that an attribution of any bias at all is hard to sustain if you succeed in reaching the truth, except if it's due to happenstance.

> I have always regarded confirmation bias as not depending on the answer
> given and the actual answer being correlated in any way.

Sure it is.

A die is an instrument with a purpose: when you roll it "randomly", then any face is as likely as any other to be shown. When one face is more likely than some other, we call the die "biased". The vicar is not suffering from confirmation bias if indeed he is "protected by higher agencies". He is just acting according to the way the world is. Whereas, if there is no higher agency protecting him as he approaches the junction, then his actions can be said to be partially due to "confirmation bias".
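In symbols, just to make the analogy explicit (a trivial formalisation, nothing deeper):

    unbiased die:  P(face = i)  =  1/6   for every i in {1, ..., 6}
    biased die:    P(face = i)  ≠  1/6   for some i

The point of the analogy is that calling something "biased" presupposes a reference - an assumption about the way the world is, here fixed by the purpose of the instrument.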

The STS people want a term for a phenomenon independent of the way the world "actually is". I am sceptical whether one can even reasonably individuate the phenomenon when one makes no assumptions about the way the world really is.

On 2/5/14 6:45 PM, Nick Lusty wrote:
> Whilst accepting that all science is done in its own context, surely getting "the right answer ...
> when there was a right answer to get" is not evidence of the absence of confirmation bias in the
> experiment.

Neither is it evidence of the absence of elephants in the room. I have no idea whether Millikan's results suffered in some way from confirmation bias. I am only suggesting that the evidence we have does not indicate that they did.

PBL

Prof. Peter Bernard Ladkin, Faculty of Technology, University of Bielefeld, 33594 Bielefeld, Germany
Tel+msg +49 (0)521 880 7319  www.rvs.uni-bielefeld.de


