## IHI revisited with respect to FMEA

August 20, 2018

Some years ago, I suggested that the FMEA tool in use at the Institute for Healthcare Improvement was not very good. So I went back to see whether anything had changed. The answer is no.

Somewhere on the IHI site, you can find (you probably have to be logged in) a success story from East Alabama Medical Center in Opelika, Alabama, USA. If you click on this link, you will see that the “RPN” for a chemotherapy medication process has been greatly reduced. Here’s the problem:

To review: in a FMEA process, one lists potential failure modes as:

- Failure Mode – a description of the failure
- Cause – a description of the cause
- Effects – a description of the effect

For each item, one lists the following in a traditional FMEA:

- Probability – the likelihood that the failure will occur
- Severity – the consequences should the failure occur

Probability and severity are each given numerical values on a scale of 1–10 (typically) and multiplied together to get a risk. All of this is explained in ISO 14971, the risk management standard for medical devices.

There are certain features of FMEA which readily become apparent. The failure modes with the highest severity usually have the lowest probability of occurrence (severity 10 × probability 1 = 10). If one concentrates on the total for these high-severity, low-probability items, one sees that there is no way to reduce the number: severity will always be at 10 and probability is already at 1.

IHI introduces a third term – the likelihood that the failure will not be detected – and gives it a number on the same 1–10 scale, so the “RPN” is the product of three values. This is totally bogus because detection is already contained in the likelihood of occurrence. But with the IHI scheme, one can now get a reduction in the detection term, and hence a reduction in the total value, and claim that the process has been improved.
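The arithmetic above can be sketched in a few lines. The numbers below are illustrative only (not taken from the IHI report): they show how the detection term lets the “RPN” fall even when severity and probability are already immovable.

```python
def risk(probability, severity):
    """Traditional FMEA risk: probability x severity, each on a 1-10 scale."""
    return probability * severity

def rpn(probability, severity, detection):
    """IHI-style RPN: adds detection, the likelihood the failure is NOT detected."""
    return probability * severity * detection

# A high-severity, low-probability failure mode: the traditional risk is
# already at its floor and cannot be reduced further.
p, s = 1, 10
print(risk(p, s))  # 10

# With the detection term, "improvement" appears without touching p or s.
# (Detection values 8 -> 2 are hypothetical.)
before = rpn(p, s, detection=8)
after = rpn(p, s, detection=2)
print(before, after)  # 80 20
```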

## A bone to pick with AACC

August 17, 2018

So I registered for and attended the AACC meeting in Chicago, but I couldn’t make all of the scientific sessions of interest to me.

Some of the sessions were listed as “not yet available.” While perhaps understandable before the meeting, they are still listed as not yet available, two weeks after the meeting ended.

AACC needs to improve the quality of their meeting.

## Who performed your test?

August 15, 2018

The conventional wisdom is that if you require some medical procedure based on the result of a medical test, before submitting to that procedure, you should have the test repeated.

Good advice, but more needs to be added: you should have the test repeated by a different method. In my book, I describe a case in which, due to cancer suspected from an elevated hCG result, the hCG assay was repeated 45 times while unnecessary treatment, including surgery, was performed. It wasn’t until the assay was run on a different method that the hCG result was found to be normal – the woman never had cancer.

But the lab report that I view online, while it includes graphs of previous results and the expected normal ranges, provides no information about which method or manufacturer was used to perform the test. I have seen a lab report from Europe where the manufacturer is listed. This information should be on lab reports.

## Reliability and six sigma

August 4, 2018

The essence of this article is that by measuring a long-term sigma value, one will know the reliability of results, where reliability is equated to the number of defects – results that fall outside the performance goals.

To recall: six sigma = (allowable total error – bias)/CV. High six sigma values are good; low values, not so good.
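As a minimal sketch, the formula can be computed as follows (the example values are hypothetical, and the bias is assumed to be taken as an absolute value, with all quantities in the same units, e.g. percent):

```python
def sigma_metric(allowable_total_error, bias, cv):
    """Six sigma metric: (TEa - |bias|) / CV."""
    return (allowable_total_error - abs(bias)) / cv

# Illustrative numbers: TEa = 10%, bias = 1%, CV = 3%
print(sigma_metric(10.0, 1.0, 3.0))  # 3.0
```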

The problem with this article is that it bases everything on Normal distribution statistics. Now this may make sense if you are measuring rulers sold by Home Depot, but it doesn’t work for blood tests.

Consider a glucose meter. Unlike the Home Depot ruler, there’s a lot more going on in a drop of blood: there are thousands of compounds that can interfere. Say that one does, and that the meter reads 340 when truth is 40 mg/dL. Assume that the CV at 40 is 3%. The value of 340 is then 250 standard deviations away! I challenge anyone to try to calculate the probability of such an event – there aren’t enough zeros on the planet. Thus, the 340 value, which can happen, is not part of the measurement error of the usual process.
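The “not enough zeros” claim is easy to verify with the numbers from the example: a 3% CV at 40 mg/dL gives a standard deviation of 1.2 mg/dL, and the Normal tail probability at 250 standard deviations underflows double precision entirely.

```python
from math import erfc, sqrt

# Numbers from the post: truth = 40 mg/dL, observed = 340 mg/dL, CV = 3% at 40.
sd = 0.03 * 40          # 1.2 mg/dL
z = (340 - 40) / sd     # number of standard deviations from truth
print(z)                # 250.0

# Upper-tail probability of a standard Normal at z = 250: smaller than the
# smallest representable double, so it prints exactly 0.0.
p = 0.5 * erfc(z / sqrt(2))
print(p)                # 0.0
```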

So any attempt to judge the number of defects by a six sigma calculation will miss the really big errors. And these are the errors that cause harm to patients.

An additional problem is attaching significance to the numerical six sigma results. Now this may sound like heresy, but here’s an example.

Say you were comparing the Roche glucose meter of a few years ago (yes, the one with the maltose interference problem) with some other meters. The Roche meter would probably have had a high six sigma value and thus looked good. Obviously, it would have been a bad choice.

But what about in general? Consider what a lower six sigma value means. Yes, there will be more values beyond the performance limits, but these values will be a few standard deviations away and close to those limits. Unfortunately, six sigma values provide no information about large errors.

Sorry, but evaluating the possibility of large errors caused by interferences requires extensive interference studies or, alternatively, huge patient correlation studies (the kind that no one does).