The problem with patient-based QC

August 18, 2016


In an editorial (actually more of a mini review) in Clinical Chemistry, the various patient-based QC methods are reviewed. The editorial accompanies a companion article that has yet to appear.

One problem with patient-based QC is that it is always compared to traditional QC, perhaps with the goal that it could replace traditional QC. But why not do both?

And perhaps a bigger problem is that people study patient-based QC by performing simulations and/or by providing examples showing that a (known) problem (perhaps surfaced by a clinician complaint) would have been detected by patient-based QC in a retrospective analysis.

But I am not aware of anyone routinely using patient-based QC (with or without traditional QC) for all assays.


Theranos Board now populated with past AACC presidents

August 10, 2016


Theranos has been criticized for its board, which has two former secretaries of state (Henry Kissinger and George Shultz), two former senators, and several former high-ranking military officers, but not much in the way of scientific expertise. Now, their scientific and medical advisory board includes four former AACC presidents: Susan Evans, Ann Gronowski, Larry Kricka, and Jack Ladenson. Note that although clinical chemists have been added, the choice of past presidents conforms to Theranos's strategy of favoring "official" types.

So here’s a question – if you were a well-known clinical chemist, would you accept a position to serve on Theranos’s board?


More AACC 2016 Philadelphia Notes

August 4, 2016


As for the AACC 2016 app, it deconstructs the three program books into something pretty useless. With the physical books, one can page through them rather quickly. But with the app, most of the titles are cut off, so it takes forever to find things.

The posters were so far away, they seemed to be in a different zip code.

Several posters were interesting and I was impressed by a poster presented by Linda AC De Grande about the use of patient medians. Maybe one day QC using patients will be mainstream.
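
As a sketch of what QC based on patient medians could look like, here is a minimal Python example that tracks a moving median of patient results. The window size, the simulated values, and the size of the shift are all arbitrary choices for illustration, not recommendations:

```python
import statistics
from collections import deque

def moving_patient_median(results, window=50):
    """Median of the most recent `window` patient results.

    A persistent shift in the running median can flag assay drift
    that control materials may miss. The window size here is an
    arbitrary choice for illustration.
    """
    buf = deque(maxlen=window)
    medians = []
    for r in results:
        buf.append(r)
        if len(buf) == window:
            medians.append(statistics.median(buf))
    return medians

# 100 stable simulated results, then 100 with a +10 shift:
stable = [100 + (i % 7) - 3 for i in range(100)]
shifted = [r + 10 for r in stable]
meds = moving_patient_median(stable + shifted)
# The running median moves from about 100 to about 110 after the shift.
```

In a real laboratory one would also need truncation limits and rules for when a median shift triggers an alarm, which is where the published methods differ.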

Although I’m no longer part of CLSI, I have attended CLSI meetings at the AACC national meeting since the 80s. But these meetings are not held any more at the national AACC meeting. This makes CLSI less inclusive since people can no longer simply drop in on the meetings.

Anyone who stayed at the Marriott (like me) was very happy since the convention center was a block away.

At a Siemens presentation about IQCP, it was implied that conducting an IQCP might allow one to run QC less often than once a month, as long as there was no conflict with the manufacturer's IFU. I'm not sure the presenter was correct, but it's a scary thought nevertheless.


Theranos – Part 2

August 3, 2016


I was among the multitudes who attended Elizabeth Holmes's presentation about Theranos at AACC in Philadelphia. Overall, I was impressed; here are some details. First, she said she wasn't going to address past malfeasances (not the way she put it) but would focus on Theranos's new instrument.

As an aside, her accent was identical to that of Mira Sorvino in "Romy and Michele's High School Reunion." For those who haven't seen the movie, I would call it "adult valley girl."

Her presentation included a lot of data analysis. Terms like ANOVA, Passing-Bablok regression, weighted Deming regression, CLSI guidelines EP05-A3 and EP09-A3, ATE (allowable total error), and others were pronounced and used correctly. (The ATE corresponded to CLIA limits.) Having worked most of my career for manufacturers, I know a simple rule: manufacturers never show bad data. Hence, until these data are reproduced by others….
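
For readers unfamiliar with one of the terms above: Passing-Bablok regression estimates the slope as a shifted median of all pairwise slopes, which makes it robust to outliers. Here is a simplified Python sketch; it omits the tie handling and confidence intervals of the full published method, and the data are invented:

```python
import statistics
from itertools import combinations

def passing_bablok(x, y):
    """Simplified Passing-Bablok fit: slope = shifted median of all
    pairwise slopes; intercept = median residual. Tie handling and
    confidence intervals of the published method are omitted."""
    pts = list(zip(x, y))
    slopes = sorted((y2 - y1) / (x2 - x1)
                    for (x1, y1), (x2, y2) in combinations(pts, 2)
                    if x1 != x2)
    n = len(slopes)
    k = sum(1 for s in slopes if s < -1)  # Passing-Bablok offset
    m = (n + k) // 2
    slope = slopes[m] if (n + k) % 2 else (slopes[m - 1] + slopes[m]) / 2
    intercept = statistics.median(yi - slope * xi for xi, yi in pts)
    return slope, intercept

# Invented method-comparison data with a pure 10% proportional bias:
x = [10, 20, 30, 40, 50, 60]
y = [11, 22, 33, 44, 55, 66]
slope, intercept = passing_bablok(x, y)  # slope near 1.1, intercept near 0
```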

The instrumentation was impressive from the standpoint that so many different assay types fit in one relatively small box, but the technologies with which I am familiar were standard: nothing new. I don't recall her mentioning any specific reagents. When you think about assays, reagents are the ballgame; the instrument is not that special. Something that did seem new was that the software for the instrument (the minilab) resides on a central server. The advantages of this remain to be demonstrated.


IQCP – waste of time? No surprise

July 30, 2016


I've been looking at a blog entry by the Westgards, which is always interesting. Here are my thoughts.

Regarding IQCP, they say it’s mostly been a “waste of time”, an exercise of paperwork to justify current practices, with very little change occurring in QC practices.

This is no surprise to me – here’s why.

There are two ways to reduce errors.

FMEA (Failure Mode and Effects Analysis, or similar programs) reduces the likelihood of rare but severe errors.

FRACAS (Failure Reporting, Analysis, and Corrective Action System, or similar programs) reduces the rate of errors that actually occur, some of which may be severe.

Here are the challenges with FMEA:

  1. It takes time and personnel. There's no way around this. If sufficient time is not provided with all of the relevant personnel present, the results will suffer. When the Joint Commission required every hospital to perform at least one FMEA per year, people complained that performing an FMEA took too much time.
  2. Management must be committed. I was once asked to facilitate an FMEA for a company, and the meetings were scheduled during lunch. I asked why and was told they had more important things to do. Management wasn't committed; the only reason this group was doing the FMEA was to satisfy a requirement.
  3. FMEA requires a facilitator. The purpose of FMEA is to challenge the ways things are done. Often, this means challenging people in the room (e.g., who have put systems in place or manage the ways things are done). This can create an adversarial situation where subordinates will not speak up. Without a good facilitator, results will suffer.
  4. The guidance for performing an FMEA (such as EP23) is not very good. Example: the failure mode is a short sample. The mitigation is to have someone examine each tube to ensure the sample volume is adequate. The group moves on to the next failure mode. The problem is that the mitigation is not new; it's existing laboratory practice. Thus, as the Westgards say, all that has happened is that the existing process has been documented. That is not FMEA. (An FMEA would enumerate the many ways that someone examining each tube could fail to detect a short sample.)
  5. Pareto charts are absent from the guidance. But real FMEAs require Pareto charts.
  6. I have seen reports where people say their error rate was reduced after they conducted an FMEA. But there are no error rates in an FMEA (error rates belong to a FRACAS). So this means no FMEA was carried out.
  7. And I don't see how anyone could say they have conducted an FMEA and conclude that it is OK to run QC monthly.

Here are the challenges with FRACAS:

  1. FRACAS requires a process where errors are counted in a structured way (severity and frequency) and reports issued on a periodic basis. This requires knowledge and commitment.
  2. FRACAS also requires periodic meetings to review errors whereby problems are assigned to corrective action teams. Again, this requires knowledge and commitment.
  3. Absence of a Pareto chart is a flag that something is missing (no severity classification, for example).
  4. People don’t like to see their error rates.
  5. FRACAS requires a realistic (error rate) goal.
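
A Pareto chart, which both lists above treat as essential, is at its core just a tally of error categories sorted by frequency with cumulative percentages, so that corrective action targets the biggest contributors first. A minimal sketch, with a made-up month of logged lab errors:

```python
from collections import Counter

def pareto_table(errors):
    """Tally error categories, sorted by frequency, with cumulative
    percentages: the numbers behind a Pareto chart."""
    counts = Counter(errors).most_common()
    total = sum(c for _, c in counts)
    rows, cum = [], 0
    for category, count in counts:
        cum += count
        rows.append((category, count, round(100 * cum / total, 1)))
    return rows

# A made-up month of logged errors:
log = (["mislabeled"] * 12 + ["short sample"] * 7 +
       ["delayed result"] * 4 + ["wrong tube"] * 2)
table = pareto_table(log)
# Mislabeling alone is 48% of errors; the top two categories are 76%.
```

In a real FRACAS, each category would also carry a severity classification, which this sketch omits.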

There are FRACAS success stories:

Dr. Peter Pronovost applied a FRACAS-type approach to placing central lines and, through the use of checklists, dropped the infection rate from 10% to 0.

In the 70s, a FRACAS-type approach reduced the error rate of anesthesiology instruments.

And FMEA failures

A Mexican teenager came to the US for a heart-lung transplant. The donated organs were not checked to see whether they were the right blood type. The patient died.


MU vs TE vs EG

July 29, 2016


The picture is an aerial view, from a Cirrus, of Foxwoods casino in CT.

MU = measurement uncertainty; TE = total error; EG = error grid

I've been looking at a blog entry by the Westgards, which is always interesting. Here are my thoughts.

To recall, MU is a "bottom-up" way to model error in a clinical chemistry assay, TE uses a "top-down" model, and EG has no model at all.

MU is a bad idea for clinical chemistry – Here are the problems with MU:

  1. Unless things have changed, MU doesn't allow for bias in its modeling process. If a bias is found, it must be eliminated. Yet in the real world, there are many uncorrected biases in assays (calibration bias, interferences).
  2. The modeling required by MU is not practical for a typical clinical chemistry lab. One can view the modeling as having two major components: the biological equations that govern the assay (e.g., Michaelis Menten kinetics) and the instrumentation (e.g., the properties of the syringe that picks up the sample). Whereas clinical chemists may know the biological equations, they won’t have access to the manufacturer’s instrumentation data.
  3. The math required to perform the analysis is extremely complicated.
  4. Some of the errors that occur cannot be modeled (e.g., user errors, manufacturing mistakes, software errors).
  5. The MU result is typically reported as the location of 95% of the results. But one needs to account for 100% of the results.
  6. So some people get the SD for a bunch of controls and call this MU – a joke.
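
For context, the bottom-up combination at the heart of MU is a GUM-style root sum of squares of the standard uncertainties of (assumed independent) components, expanded with a coverage factor k = 2 for roughly 95% coverage, which is exactly point 5's complaint. A sketch with invented component values:

```python
import math

def combined_uncertainty(components):
    """GUM-style bottom-up combination: the root sum of squares of the
    standard uncertainties of independent error components."""
    return math.sqrt(sum(u * u for u in components))

# Invented component uncertainties (SDs, in mg/dL) for a hypothetical assay:
u_c = combined_uncertainty([0.5, 1.2, 0.8])
U = 2 * u_c  # expanded uncertainty, coverage factor k = 2 (~95%, not 100%)
```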

TE has been much more useful than MU, but still has problems:

  1. The Westgard model for TE doesn’t account for some important errors, such as patient interferences.
  2. Other errors that occur (e.g., user errors, manufacturing mistakes, software errors) may be captured by TE, but the potential for these errors is often excluded from experiments (e.g., users in these experiments are often more highly trained than typical users).
  3. Although both MU and TE rely on experimental data, TE relies solely on an experiment (method comparison or quality control). There are likely to be biases in the experiment which will cause TE to be underestimated. (See #2).
  4. The TE result is typically reported as the location of 95% of the results. But one needs to account for 100% of the results.
  5. TE is often overstated; e.g., the sigma value is said to provide a specific (numeric) quality for patient results. But this is untrue, since TE underestimates the true total error.
  6. TE fails to account for the importance of bias. That is, one can have results that are within TE goals but that can still cause harm due to bias. Both Klee and I have shown this. For example, bias for a glucose meter can cause diabetic complications yet still be within TE goals.

I favor error grids. Here is how EG fares:

  1. Error grids still have the problem that they rely on experimental data and hence there may be bias in the studies.
  2. But 100% of the results are accounted for.
  3. There is the notion of increasing patient harm in EG. With either MU or TE, there is only the concept of harm vs. no harm. This is not the real world. A glucose meter result of 95 mg/dL (truth = 160 mg/dL) causes much less harm than a glucose meter result of 350 mg/dL (truth = 45 mg/dL).
  4. EG simply plots test vs. reference. There are no models (but there is also no way to tell the origin of an error source).
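
To make point 3 concrete, here is a toy zone classifier in Python. The cutoffs are purely invented; real grids such as the Clarke and Parkes error grids use clinically derived, asymmetric boundaries, not simple relative-error bands:

```python
def toy_error_zone(reference, test):
    """Toy glucose error-grid zone from relative error alone (mg/dL).
    Cutoffs are invented for illustration; real grids (Clarke, Parkes)
    use clinically derived, asymmetric boundaries."""
    rel_err = abs(test - reference) / reference
    if rel_err <= 0.15:
        return "A"  # little or no effect on clinical action
    if rel_err <= 0.50:
        return "B"  # altered but likely low-risk action
    return "C"      # potentially dangerous action

# The post's example: both results are wrong, but not equally harmful.
z1 = toy_error_zone(160, 95)   # off by about 41%
z2 = toy_error_zone(45, 350)   # off by nearly 700%
```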

What needs to be measured to ensure the clinical usefulness of an assay

July 19, 2016


I was happy to see an editorial that IMHO states the required error components that need to be understood to ensure the clinical usefulness of an assay. Of course, bias and imprecision are mentioned. But in addition, the author mentions freedom from interferences and pre- and post-analytical errors.

One can ask: don't interferences and pre- and post-analytical errors cause bias? Since the answer is yes, why do these terms need to be mentioned if it was already stated that bias is to be measured? The reason is that the way bias is measured will, in many cases, fail to detect the biases from interferences and pre- and post-analytical errors.

For example, if regression is used, average bias will be estimated, not the individual biases that can occur from interferences.

If σ is estimated, this usually involves bias measured from either regression or quality control samples, so again interference biases don't get counted.

Finally, most of the studies are done in ways in which pre- and post-analytical errors have been minimized; the studies are performed outside of the routine way of processing patient samples. Hence, to ensure the clinical usefulness of an assay, one must construct protocols that measure all of the error components mentioned in the first paragraph.
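
The point about average versus individual bias can be shown with a tiny simulation: if 5% of samples carry an interference, the average bias (what regression or a mean-difference estimate reports) looks small while the affected samples are badly wrong. All numbers below are invented:

```python
# Method comparison where test equals reference, except that every 20th
# sample (5%) carries a simulated interference adding +30 units.
reference = [50 + 0.25 * i for i in range(1000)]
test = [r + (30 if i % 20 == 0 else 0) for i, r in enumerate(reference)]

# Average bias, as regression or a mean-difference estimate reports it:
avg_bias = sum(t - r for t, r in zip(test, reference)) / len(reference)

# Individual bias of the worst sample:
worst = max(t - r for t, r in zip(test, reference))
# avg_bias is only 1.5 units, while the interfered samples are off by 30.
```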

