Normalization of deviance

December 29, 2015

Recently, I became aware of an analysis of last year's Gulfstream IV crash at Bedford, MA (KBED), an airport where I train.

To recap, the crew attempted to take off with the gust lock* engaged – the plane was never airborne and crashed after overrunning the runway. All aboard died.

*The gust lock is a device that prevents the control surfaces from moving, protecting them from wind gusts while on the ground. It must be disengaged before flight.

Besides not disengaging the gust lock, the pilots failed to perform a flight control check (which verifies that the controls move freely in all the directions they should). This check is standard for any airplane and, if performed, would have alerted the crew to the problem.

But the astounding revelation in the NTSB report is that the flight crew almost never performed checklists: “A review of data from the airplane’s quick access recorder revealed that the pilots had neglected to perform complete flight control checks before 98% of their previous 175 takeoffs in the airplane, indicating that this oversight was habitual and not an anomaly.”

This has been referred to as normalization of deviance and is explained here. That is, the deviant behavior becomes so commonplace that it is no longer considered deviant. And yes, it happens in healthcare too.

For those who want more details, the NTSB report is here.

A way to improve glucose meter error grids using Taguchi principles

December 5, 2015

So I was reading an article about glucose meter performance and came across the MARD (mean absolute relative difference) statistic. I have seen this before – it is used for glucose meters and almost nowhere else. What bothered me was that the paper used MARD as a summary statement about the performance of different meters. MARD has so many problems that I wrote a paper critiquing it and submitted it. As soon as I clicked the submit button, I realized I had left out an important element; namely, why were people using MARD in the first place?
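For readers unfamiliar with the statistic, MARD is simple to state: the mean of the absolute differences between meter and reference readings, each expressed as a fraction of the reference. Here is a minimal sketch; the paired readings below are made-up illustrative values, not data from any paper.

```python
def mard(meter, reference):
    """Mean absolute relative difference, as a percentage of the reference."""
    diffs = [abs(m - r) / r for m, r in zip(meter, reference)]
    return 100 * sum(diffs) / len(diffs)

# Hypothetical paired readings in mg/dL (reference method vs. meter).
reference = [100, 150, 200, 250, 80]
meter     = [95, 160, 190, 260, 85]

print(f"MARD = {mard(meter, reference):.1f}%")  # prints "MARD = 5.4%"
```

Note that because each difference is divided by the reference value, the same absolute error counts more heavily at low glucose concentrations – one of the reasons a single summary number can mislead.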

In any case, I got mixed reviews on my paper, and the editor said I could submit a revision. But the more I thought about it, the more I realized that my paper was not that good. I was going to drop it when it occurred to me that, rather than complain about MARD, I might be able to come up with a better statistic. After all, people use MARD because they want to differentiate meters that appear to have similar performance when analyzed with error grids.

So I wrote a new paper that provides an alternative to MARD. It has been accepted and will appear shortly.