IQCP – waste of time? No surprise

July 30, 2016


I recently looked at a blog entry by the Westgards, which is always interesting; here are my thoughts.

Regarding IQCP, they say it’s mostly been a “waste of time”, an exercise in paperwork to justify current practices, with very little change occurring in QC practices.

This is no surprise to me – here’s why.

There are two ways to reduce errors.

FMEA (or similar programs) reduces the likelihood of rare but severe errors.

FRACAS (or similar programs) reduces the rate of errors that actually occur, some of which may be severe.

Here are the challenges with FMEA:

  1. It takes time and personnel. There’s no way around this. If sufficient time is not provided with all of the relevant personnel present, the results will suffer. When the Joint Commission required every hospital to perform at least one FMEA per year, people complained that performing a FMEA took too much time.
  2. Management must be committed. (I was asked to facilitate a FMEA for a company; the meetings were scheduled during lunch. When I asked why, I was told the participants had more important things to do.) Management wasn’t committed. The only reason this group was doing the FMEA was to satisfy a requirement.
  3. FMEA requires a facilitator. The purpose of FMEA is to challenge the ways things are done. Often, this means challenging people in the room (e.g., who have put systems in place or manage the ways things are done). This can create an adversarial situation where subordinates will not speak up. Without a good facilitator, results will suffer.
  4. The guidance for performing a FMEA (such as EP23) is not very good. Example: the failure mode is a short sample. The mitigation is to have someone examine each tube to ensure the sample volume is adequate, and the group moves on to the next failure mode. The problem is that the mitigation is not new – it’s existing laboratory practice. Thus, as the Westgards say, all that has happened is that the existing process has been documented. That is not FMEA. (A FMEA would enumerate the many ways that someone examining each sample could fail to detect the short sample.)
  5. Pareto charts are absent from the guidance. But real FMEAs require Pareto charts (a sketch of one appears after this list).
  6. I have seen reports where people say their error rate was reduced after they conducted a FMEA. But there are no error rates in a FMEA (error rates belong to a FRACAS). So this means no FMEA was carried out.
  7. And I don’t see how anyone could conduct a FMEA and conclude that it is ok to run QC monthly.
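
To make #5 concrete, here is a minimal sketch of the kind of Pareto ranking a real FMEA produces. The failure modes and scores are invented for illustration, and I assume a common convention: RPN = severity × occurrence × detection, each on a 1–10 scale.

```python
# A minimal sketch of a Pareto ranking for a FMEA.
# Failure modes and scores are invented; RPN = severity x occurrence x detection
# (each on an assumed 1-10 scale) is a common convention.

failure_modes = {
    # name: (severity, occurrence, detection)
    "short sample not detected": (8, 6, 7),
    "wrong tube type used":      (7, 3, 4),
    "clot in sample probe":      (9, 2, 5),
    "mislabeled specimen":       (10, 2, 6),
}

rpn = {name: s * o * d for name, (s, o, d) in failure_modes.items()}
total = sum(rpn.values())

print(f"{'failure mode':<28}{'RPN':>6}{'cum %':>8}")
cum = 0
for name, score in sorted(rpn.items(), key=lambda kv: kv[1], reverse=True):
    cum += score
    print(f"{name:<28}{score:>6}{100 * cum / total:>7.1f}%")
```

The point of the ranking is to force the group to work on the biggest risks first, rather than documenting existing practice for every failure mode equally.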

Here are the challenges with FRACAS:

  1. FRACAS requires a process where errors are counted in a structured way (severity and frequency) and reports are issued on a periodic basis (see the sketch after this list). This requires knowledge and commitment.
  2. FRACAS also requires periodic meetings to review errors whereby problems are assigned to corrective action teams. Again, this requires knowledge and commitment.
  3. Absence of a Pareto chart is a flag that something is missing (no severity classification, for example).
  4. People don’t like to see their error rates.
  5. FRACAS requires a realistic (error rate) goal.
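
To make the counting requirement concrete, here is a minimal sketch of a structured error log and the periodic Pareto report it feeds. The errors, severity classes, weights, and workload are all invented for illustration.

```python
# A minimal sketch of a FRACAS-style error log (all data invented).
# Each observed error gets a severity class; the periodic report is a
# Pareto table of counts weighted by severity, plus a rate vs. a goal.

from collections import Counter

SEVERITY_WEIGHT = {"minor": 1, "major": 5, "critical": 25}  # assumed weights

# (error description, severity) for one reporting period
observed = [
    ("QC rule violated, run released anyway", "major"),
    ("sample hemolyzed, not flagged", "major"),
    ("transcription error in result", "critical"),
    ("reagent lot change not verified", "minor"),
    ("sample hemolyzed, not flagged", "major"),
]

counts = Counter(desc for desc, _ in observed)
severity = dict(observed)  # description -> severity class
weighted = {d: n * SEVERITY_WEIGHT[severity[d]] for d, n in counts.items()}

print(f"{'error':<42}{'n':>3}{'weighted':>10}")
for desc, w in sorted(weighted.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{desc:<42}{counts[desc]:>3}{w:>10}")

samples_this_period = 12_000  # assumed workload
rate = 1_000 * len(observed) / samples_this_period
print(f"error rate: {rate:.2f} per 1,000 samples")  # compare to the stated goal
```

Without the severity classification, the weighted Pareto column cannot be built – which is why its absence is a flag that something is missing.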

There are FRACAS success stories:

Dr. Peter Pronovost applied a FRACAS-type approach to placing central lines and dropped the infection rate from 10% to 0 through the use of checklists.

In the 70s, a FRACAS-type approach reduced the error rate associated with anesthesiology instruments.

And an FMEA failure:

A Mexican teenager came to the US for a heart-lung transplant. The donated organs were not checked to see whether they were the right blood type. The patient died.


MU vs TE vs EG

July 29, 2016

[Photo: aerial view, taken from a Cirrus, of Foxwoods casino in CT]

MU = measurement uncertainty; TE = total error; EG = error grid

I recently looked at a blog entry by the Westgards, which is always interesting; here are my thoughts.

To recall, MU is a “bottom-up” way to model error in a clinical chemistry assay, TE uses a “top-down” model, and EG has no model at all.

MU is a bad idea for clinical chemistry. Here are the problems with MU:

  1. Unless things have changed, MU doesn’t allow for bias in its modeling process. If a bias is found, it must be eliminated. Yet in the real world, there are many uncorrected biases in assays (calibration bias, interferences).
  2. The modeling required by MU is not practical for a typical clinical chemistry lab. One can view the modeling as having two major components: the biological equations that govern the assay (e.g., Michaelis-Menten kinetics) and the instrumentation (e.g., the properties of the syringe that picks up the sample). Whereas clinical chemists may know the biological equations, they won’t have access to the manufacturer’s instrumentation data. (A sketch of the bottom-up propagation appears after this list.)
  3. The math required to perform the analysis is extremely complicated.
  4. Some of the errors that occur cannot be modeled (e.g., user errors, manufacturing mistakes, software errors).
  5. The MU result is typically reported as the location of 95% of the results. But one needs to account for 100% of the results.
  6. So some people get the SD for a bunch of controls and call this MU – a joke.
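
For what a “bottom-up” MU calculation actually looks like, here is a minimal GUM-style sketch for a toy single-point calibration model. Every number and uncertainty component is invented, and a real assay model is vastly more complicated – which is exactly points #2 and #3 above.

```python
# A minimal GUM-style "bottom-up" sketch for a toy model (all numbers invented).
# Suppose result = c_cal * (A_sample / A_cal): a calibrator value and two
# absorbance readings. Real assay models are far more complicated.

import math

# component standard uncertainties, as relative (fractional) values -- assumed
u_rel = {
    "calibrator assigned value": 0.010,
    "sample absorbance":         0.008,
    "calibrator absorbance":     0.008,
}

# for a pure product/quotient model, relative uncertainties add in quadrature
u_c_rel = math.sqrt(sum(u ** 2 for u in u_rel.values()))

result = 5.2            # mmol/L, assumed result
u_c = u_c_rel * result  # combined standard uncertainty
U_95 = 2 * u_c          # expanded uncertainty, coverage factor k = 2 (~95%)

print(f"combined standard uncertainty: {u_c:.3f} mmol/L")
print(f"expanded uncertainty (k = 2):  {U_95:.3f} mmol/L")
# By construction, ~5% of results fall OUTSIDE result +/- U_95, and unmodeled
# errors (user error, software bugs) are not in u_c at all.
```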

TE has been much more useful than MU, but still has problems:

  1. The Westgard model for TE doesn’t account for some important errors, such as patient interferences.
  2. Other errors that occur (e.g., user errors, manufacturing mistakes, software errors) may be captured by TE, but the potential for these errors is often excluded from experiments (e.g., users in these experiments are often more highly trained than typical users).
  3. Although both MU and TE rely on experimental data, TE relies solely on an experiment (method comparison or quality control). There are likely to be biases in the experiment that will cause TE to be underestimated. (See #2.)
  4. The TE result is typically reported as the location of 95% of the results. But one needs to account for 100% of the results.
  5. TE is often overstated, e.g., the sigma value is said to provide a specific (numeric) quality level for patient results. But this is untrue, since TE underestimates the true total error. (A sketch of the TE and sigma calculations appears after this list.)
  6. TE fails to account for the importance of bias. That is, one can have results that are within TE goals but still cause harm due to bias. Klee has shown this, as have I. For example, bias in a glucose meter can cause diabetic complications while still being within TE goals.
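
Here is a minimal sketch of the usual top-down TE and sigma-metric calculations. The bias, CV, and TEa goal are invented, and conventions vary (some use a 1.65 multiplier rather than 2).

```python
# A minimal sketch of the usual top-down TE and sigma calculations
# (bias, CV, and the TEa goal are all invented).

bias_pct = 2.0   # average bias from a method comparison, %
cv_pct = 3.0     # imprecision (CV) from QC data, %
TEa_pct = 10.0   # allowable total error goal, %

# a common total error convention: TE = |bias| + 2 * CV (some use 1.65)
TE_pct = abs(bias_pct) + 2 * cv_pct

# the sigma metric
sigma = (TEa_pct - abs(bias_pct)) / cv_pct

print(f"TE    = {TE_pct:.1f}% (goal {TEa_pct:.1f}%)")
print(f"sigma = {sigma:.2f}")
# Per the list above: interferences and rare errors are in neither bias_pct
# nor cv_pct, so TE_pct understates the true total error and sigma
# overstates the quality of patient results.
```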

I favor error grids.

Here are my thoughts on EG:

  1. Error grids still have the problem that they rely on experimental data and hence there may be bias in the studies.
  2. But 100% of the results are accounted for.
  3. There is the notion of increasing patient harm in EG. With either MU or TE, there is only the concept of harm vs. no harm. This is not the real world. A glucose meter result of 95 mg/dL (truth = 160 mg/dL) causes much less harm than a glucose meter result of 350 mg/dL (truth = 45 mg/dL). (The sketch after this list illustrates harm-graded zones.)
  4. EG simply plots test vs. reference. There is no model (but there is also no way to tell the origin of an error).
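
Here is a minimal sketch of the error grid idea: every (reference, test) pair lands in some harm-graded zone, so 100% of results are accounted for. The zone rule below is deliberately simplified for illustration – it is not the published Clarke or Parkes boundaries.

```python
# A minimal sketch of the error grid idea: each (reference, test) pair is
# assigned a harm-graded zone. The zone rule below is deliberately
# simplified and is NOT the published Clarke or Parkes boundaries.

def zone(reference: float, test: float) -> str:
    """Toy harm grading for a glucose meter pair, in mg/dL."""
    err = abs(test - reference) / reference
    if err <= 0.15:
        return "A (no harm)"
    if err <= 0.40:
        return "B (little or no harm)"
    # direction matters: missing hypo- or hyperglycemia is worse
    if reference <= 70 and test > 180:
        return "E (severe harm)"
    if reference >= 250 and test < 70:
        return "E (severe harm)"
    return "C/D (increasing harm)"

pairs = [(160, 95), (45, 350), (100, 104)]  # (reference, test), invented
for ref, test in pairs:
    print(f"ref={ref:>3} test={test:>3} -> {zone(ref, test)}")
```

Note that the two discordant pairs from point #3 land in different zones – the grading captures what harm-vs-no-harm cannot.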

What needs to be measured to ensure the clinical usefulness of an assay

July 19, 2016


I was happy to see an editorial that, IMHO, states the required error components that need to be understood to ensure the clinical usefulness of an assay. Of course bias and imprecision are mentioned. But in addition, the author mentions freedom from interferences and pre- and post-analytical errors.

One can ask: don’t interferences and pre- and post-analytical errors cause bias? Since the answer is yes, why do these terms need to be mentioned if it was already stated that bias is to be measured? The reason is that the way bias is measured will, in many cases, fail to detect the biases from interferences and pre- and post-analytical errors.

For example, if regression is used, average bias will be estimated, not the individual biases that can occur from interferences, as the sketch below illustrates.
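
A minimal sketch of this masking effect, with invented data: eighteen ordinary samples read 1% high, two samples contain an interferent and read roughly 30% high, yet the regression fit and the average bias both look benign.

```python
# A minimal sketch of how average bias hides interference (data invented).
# Eighteen ordinary samples read 1% high; the last two contain an
# interferent and read ~30% high.

reference = [80, 95, 110, 125, 140, 155, 170, 185, 200, 215,
             230, 245, 260, 275, 290, 305, 320, 335, 150, 210]
test = [r * 1.01 for r in reference[:18]] + [150 * 1.30, 210 * 1.28]

n = len(reference)
mean_x = sum(reference) / n
mean_y = sum(test) / n
num = sum((x - mean_x) * (y - mean_y) for x, y in zip(reference, test))
den = sum((x - mean_x) ** 2 for x in reference)
slope = num / den
intercept = mean_y - slope * mean_x

print(f"OLS fit: slope = {slope:.3f}, intercept = {intercept:.1f}")
print(f"average bias: {100 * (mean_y - mean_x) / mean_x:.1f}%")
for x, y in zip(reference[-2:], test[-2:]):
    print(f"interfered sample: ref = {x}, test = {y:.0f}, "
          f"bias = {100 * (y - x) / x:.0f}%")
```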

If σ is estimated, this usually involves bias measured either from regression or from quality control samples, so again interference biases don’t get counted.

Finally, most of these studies are done in ways in which pre- and post-analytical errors have been minimized – the studies are performed outside of the routine way of processing patient samples. Hence, to ensure the clinical usefulness of an assay, one must construct protocols that measure all of the error components mentioned in the first paragraph.


No surprise that Instructions For Use (package inserts) are weak

July 16, 2016


A recent letter in Clinical Chemistry (subscription required) talks about package inserts from manufacturers, also called instructions for use (IFUs). The letter says that manufacturers’ IFUs often do not follow CLSI guidelines with respect to hemoglobin interference.

This should come as no surprise – here’s why.

The authors cite FDA regulations which state: “Limitation of the procedure: Include a statement of limitations of the procedure. State known extrinsic factors or interfering substances affecting results.”

This regulation leaves a lot of leeway as to what should appear in the IFU.

So the authors say that CLSI guidelines (C56 and EP7) are not followed. One should understand that CLSI guidelines are not regulations: no manufacturer has to follow them. Moreover, these guidelines are often manufacturer-friendly, as manufacturers dominate the committees that prepare the documents. For example, the authors cite C56, which has an example of how to report when there is no hemoglobin interference for glucose. The table contains the concentration of hemoglobin tested, two glucose levels, and bias < 10%.

This is messed up! If the bias found were 9%, this CLSI guideline is suggesting that it is ok to say there was no bias!
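
To see how this plays out numerically, here is a minimal sketch of a paired, EP7-style interference comparison. The glucose values are invented, and the 10% acceptance criterion mirrors the C56 example.

```python
# A minimal sketch of a paired interference experiment (numbers invented).
# A sample pool is split; hemoglobin is spiked into the test aliquot, and
# bias is the test-vs-control difference.

control_glucose = 100.0  # mg/dL, aliquot without added hemoglobin
spiked_glucose = 109.0   # mg/dL, aliquot with hemoglobin added (assumed)
acceptance_pct = 10.0    # the C56-example-style criterion

bias_pct = 100 * (spiked_glucose - control_glucose) / control_glucose
verdict = "no interference" if abs(bias_pct) < acceptance_pct else "interference"
print(f"bias = {bias_pct:.1f}% -> reported as: {verdict}")
# A 9% bias passes the 10% criterion and gets labeled "no interference" --
# exactly the problem described above.
```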

So even if manufacturers followed CLSI guidelines, maybe this wouldn’t be so good.

To understand why a CLSI document would permit the claim of “no bias” when a 9% bias was found, consider how CLSI operates.

CLSI prides itself on equal influence of “professions” (e.g., clinical chemists in hospitals), “government” (e.g., FDA), and “manufacturers” (people in industry). But the industry people are largely from regulatory affairs, and their role on committees has often been obstructionist. Basically, the industry – like industries in other fields – does not want to be regulated at all, so if there has to be a standard, the regulatory people try to make it as industry-friendly as possible.

As an example of the obstructionist role, consider EP7. It was initially published as a “P” (proposed) version in 1986. Only “A” (approved) versions are accepted by the FDA. So how long did it take for this standard to go from P to A? 16 years! (The A version was published in 2002.) It wasn’t until I was the chair of the Evaluation Protocol Committee that this project got moving faster than a snail’s pace and was finished.

And then there was the CLSI standard EP11 – Uniformity of Claims. It was intended to be a guideline for IFUs. It’s hard to say whether this standard would have helped, since it too could be ignored. It was published as a “P” document in 1996. CLSI management (pressured by industry) pressured me to cancel it. I didn’t, but they did: it was never advanced and is no longer available.

Finally, I can’t speak for other companies, but in the company I worked for, IFUs were prepared by the marketing department.