Do it right the first time – not always the best strategy

December 14, 2017

Watching a remarkable video about wing suit flyers jumping into the open door of a descending plane, it appears that they attempted this feat about 100 times before succeeding.

On page four of a document that summarizes the quality gurus Crosby, Deming, and Juran, Crosby’s “Do it right the first time” appears. Clearly, this would have been a problem for the wing suit flyers. Crosby’s suggestion is appropriate when the state of knowledge is high. For the wing suit flyers, there were many unknowns, hence the state of knowledge was low. When the state of knowledge is meager, as it was at Ciba Corning when we were designing in vitro diagnostic instruments, we used the test, analyze, and fix (TAAF) strategy as part of reliability growth management and FRACAS (failure reporting, analysis, and corrective action). This sounds like the opposite of a sane quality strategy, but in fact it was the fastest way to achieve the reliability goals for our instruments.
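For the curious, here is a rough illustration of the reliability growth bookkeeping that goes along with TAAF. It is only a sketch with made-up failure times, not Ciba Corning’s actual data or procedure; it fits the standard Crow-AMSAA power-law model, where a shape parameter below 1 means reliability is growing as fixes accumulate.

```python
import numpy as np

# Hypothetical cumulative test hours at which failures occurred during TAAF
# (illustrative numbers only, not real instrument data).
failure_times = np.array([25.0, 60.0, 150.0, 310.0, 700.0, 1400.0])
n = np.arange(1, len(failure_times) + 1)   # cumulative failure count

# Crow-AMSAA (power-law) model: E[N(t)] = lam * t**beta.
# Fit log N = log(lam) + beta * log(t) by least squares.
beta, log_lam = np.polyfit(np.log(failure_times), np.log(n), 1)
lam = np.exp(log_lam)

# beta < 1 indicates reliability growth: failures arrive ever more slowly
# as fixes accumulate. Instantaneous MTBF at total test time T:
T = failure_times[-1]
mtbf_inst = 1.0 / (lam * beta * T ** (beta - 1))
print(f"beta = {beta:.2f} (growth if < 1), instantaneous MTBF ~ {mtbf_inst:.0f} h")
```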


Risk based SQC – What does it really mean?

December 4, 2017

Having just read a paper on risk based SQC, here are my thoughts…

CLSI has recently adopted a risk management theme for some of their standards. The fact that Westgard has jumped on the risk management bandwagon is, as we say in Boston, wicked smaaht.

But what does this really mean and is it useful?

SQC as described in the Westgard paper is performed to prevent patient results from exceeding an allowable total error (TEa). To recall, the underlying model estimates total error as TE = |bias| + 1.65 × SD, which must not exceed TEa. I have previously commented that this model does not account for all error sources, especially for QC samples. But for the moment, let’s assume that the only error sources are average bias and imprecision. The remaining problem with TEa is that it always covers only a percentage of results, usually 95%. So if some SQC procedure were to just meet its quality requirement, up to 5% of patient results could exceed their TEa and potentially cause medical errors. That is 1 in every 20 results! I don’t see how this is a good thing, even if one were to use a 99% TEa.
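To see where the 1 in 20 comes from, here is a back-of-the-envelope calculation of my own (made-up numbers, assuming normally distributed analytical error with only average bias and imprecision):

```python
from scipy.stats import norm

# Illustrative numbers only (not from the Westgard paper).
tea, bias = 5.0, 1.0                  # allowable total error and bias, in %
sd = (tea - abs(bias)) / 1.65         # imprecision that just meets the model

# Fraction of patient results whose error falls outside +/- TEa.
# The near tail is 5% by construction; the far tail adds slightly more.
p_exceed = norm.sf((tea - bias) / sd) + norm.cdf((-tea - bias) / sd)
print(f"SD = {sd:.2f}%  ->  {p_exceed:.1%} of results exceed TEa")  # ~5.6%
```

Note that when the 95% model is just met, slightly more than 5% of results actually land outside ± TEa, because the 1.65 multiplier only covers the tail on the side the bias pushes toward.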

The problem is one of “spin.” SQC, while valuable, does not guarantee the quality of patient results. The laboratory testing process is like a factory process, and for any such process to be useful it must be in control (meaning in statistical control). Thus, SQC helps to guard against an out of control process. And to be fair, if the process were out of control, patient sample results might well exceed TEa.
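To make “process control tool” concrete, here is a minimal sketch of two common Westgard rejection rules applied to control results. The data and limits are hypothetical, and a real SQC design would choose rules and numbers of controls to suit the assay.

```python
import numpy as np

def qc_flags(values, mean, sd):
    """Flag out-of-control QC runs with two common Westgard rules:
    1_3s: one control result beyond mean +/- 3 SD,
    2_2s: two consecutive results beyond the same mean +/- 2 SD limit."""
    z = (np.asarray(values, dtype=float) - mean) / sd
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:
            flags.append((i, "1_3s"))
        if i > 0 and (min(z[i - 1], zi) > 2 or max(z[i - 1], zi) < -2):
            flags.append((i, "2_2s"))
    return flags

# Hypothetical control results for an assigned mean of 100 and SD of 2:
print(qc_flags([101, 99, 104.5, 104.8, 100, 93.5], mean=100.0, sd=2.0))
# -> [(3, '2_2s'), (5, '1_3s')]
```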

The actual risk of medical errors due to lab error is a function not only of an out of control process but also of all the other error sources not accounted for by QC, such as user errors with patient samples (as opposed to QC samples), patient interferences, and so on. Hence, to say that risk based SQC can address the quality of patient results is “spin.” SQC is a process control tool – nothing more and nothing less.

And the best way of running SQC would be for a manufacturer to assess results from all laboratories.

Now some people might think this is a nit-picking post, but here is an additional point. One might be lulled into thinking that with risk based SQC, labs don’t have to worry about bad results. But interferences can cause large errors that lead to medical errors. For example, in the maltose interference problem with glucose meters, 6 of 13 deaths occurred after an FDA warning. And recently, there have been concerns about biotin interference in immunoassays. So it’s not good to oversell SQC, since people might lose focus on other important issues.