Published: Acute Versus Chronic Injury in Error Grids

July 19, 2014


This is a Letter to the editor (1) based on a new revision to the glucose meter error grid (2). The gist of the Letter is as follows. The glucose meter error grid procedure involved surveying clinicians as to what glucose levels would prompt them to treat patients. But this deals with symptoms, or acute injury. If a glucose meter met these limits, one might think all is well. But diabetes also involves complications from continued elevated glucose. The Letter proposed that a different error grid is required for chronic injury.


  1. Krouwer JS, Cembrowski GS. Acute versus chronic injury in error grids. J Diabetes Sci Technol. Note: subscription may be required.
  2. Klonoff DC, Lias C, Vigersky R, et al. The surveillance error grid. J Diabetes Sci Technol. 2014;8:658-672.

Why I’m losing interest in clinical chemistry

June 1, 2014


I had occasion to review a revision to the CLSI guidance document EP19. However, when I downloaded the document, out popped a revision to EP21 instead.

What I had pushed for in the revision of EP21 – before I got kicked off the subcommittee – is gone. EP21 is about total error, which in practice depends on which error sources are allowed to occur in the experiment. If you have a bunch of regulatory affairs people on the subcommittee, you restrict the allowable error sources and things look great but don’t necessarily reflect reality.

In the glucose meter POCT world, the CLSI version of a glucose meter standard had the above and other limitations, about which I ranted. And then the FDA came out with its own draft guidance and said – fuhgetaboutit – meaning don’t use the CLSI standard, we’ve come out with our own standard. I talked about this before but note that the FDA draft guidance wants to see experiments performed in the hands of the intended users – unlike the revision of EP21 which goes out of its way to exclude this error source.

After fighting this battle for so long, I have to say I’m losing interest.


April 9, 2014


In the last 2 months, I’ve been asked to conduct 5 reviews, all for different journals, to determine if a manuscript should be accepted or not. I performed 4 of the reviews, and declined to review one manuscript because the title and abstract alerted me to the fact that I probably wouldn’t understand one word of the paper. Before that, there was a 4-month period with no review requests, so you never know when the requests will occur.

Performing these reviews is the other side of the coin – I’ve submitted many papers of my own and read reviews of my papers. I know how it feels to have my own paper severely criticized, so I try to be gentle in my reviews when I see something wrong, but on the other hand I never hesitate to point out problems.

The review request contains the title and abstract; if you agree to perform the review you get the full paper. Many papers require revision which often means another review, where I can see how the authors responded to my comments.

Why GUM will never be enough

April 7, 2014


I occasionally come across articles that describe a method evaluation using GUM (Guide to the expression of Uncertainty in Measurement). These papers can be quite impressive with respect to the modeling that occurs. However, there is often a statement that relates the results to clinical acceptability. Here’s why there is a problem.

Clinical acceptability is usually not defined, but it is often implied to mean method performance that will not cause patient harm due to assay error.

A GUM analysis usually specifies the location of 95% of the results. But if the analysis shows that the assay just meets limits, then 5% of the results will cause patient harm. Now according to GUM models, the 5% will be close to the limits, because the data are assumed to be Gaussian, so this is a minor problem.

A bigger problem is that a GUM analysis often ignores rare but large errors, such as a rare interference or something more insidious such as a user error that results in a large assay error. (Often GUM analyses don’t assess user error at all.) These large errors, while rare, are associated with major harm or death.
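A minimal simulation makes both points concrete (all numbers are hypothetical, chosen only for illustration). A pure Gaussian error model that just meets a ±15% limit puts its 5% of failures just outside the limit, while adding a rare large-error component – standing in for an interference or user error – produces results far beyond the limits that the 95% Gaussian summary never reveals.

```python
import random
random.seed(1)

N = 100_000
LIMIT = 15.0        # hypothetical total-error limit, in % of true value
SD = LIMIT / 1.96   # SD chosen so the assay "just meets" the 95% limit

# Pure Gaussian model: 5% of results fall outside the limit, but only barely.
gaussian = [random.gauss(0, SD) for _ in range(N)]

# Contaminated model: 0.1% of results carry a rare gross error of +/-60%
# (a stand-in for an interference or a user error).
contaminated = [e + (random.choice([-1, 1]) * 60.0 if random.random() < 0.001 else 0.0)
                for e in gaussian]

def summarize(errors, label):
    n_out = sum(1 for e in errors if abs(e) > LIMIT)
    worst = max(abs(e) for e in errors)
    print(f"{label}: {100 * n_out / len(errors):.1f}% outside limits, "
          f"worst error {worst:.0f}%")

summarize(gaussian, "Gaussian    ")
summarize(contaminated, "contaminated")
```

The Gaussian run shows about 5% outside the limit with a worst error only modestly beyond it; the contaminated run shows nearly the same 95% summary, yet its worst errors are several times the limit – exactly the errors a GUM-style analysis tends to miss.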

The remedy is to conduct an FMEA or fault tree analysis in addition to GUM, to brainstorm how large errors could occur and whether mitigations are in place to reduce their likelihood. Unless risk analysis is added to GUM, talking about clinical acceptability is misleading.

Six comments about risk management for labs

March 24, 2014


Inspired by a post by Sten Westgard, here is my list on risk management for labs.

  1. One can apply simple risk management to before and after EQC (equivalent quality control). Before EQC, many patient results were protected from many process faults because twice-daily QC would pick up a fault in time for the results to be repeated. After EQC, the risk of reporting wrong patient results was higher because a month could pass before a fault was detected. Thus, EQC never made sense.
  2. The comments in Sten’s posting that “it’s up to the lab director” are similar to CLSI statements about requirements in many of their evaluation protocol documents.
  3. The CLSI EP23 document about risk management for the lab was written by a group that was largely untrained in risk management. (This group had high expertise in other areas.) Hence, the document is non-standard with respect to FMEA and fault trees. Moreover, it focuses on analytical faults that have largely been validated by the manufacturer while neglecting lab user error.
  4. Hospitals are required (at least they used to be) to perform at least one FMEA per year. In my experience trying to provide software for this, hospitals had little interest in actually performing an FMEA. Without guidance, training, and some prescriptive methods, risk management in labs is suspect.
  5. The situation wasn’t much different for in vitro diagnostic manufacturers. I’ve never met an engineer who willingly participated in risk management activities.
  6. The IHI (Institute for Healthcare Improvement) has a method for implementing FMEAs that is almost guaranteed to cause problems, since it looks for a numerical reduction in “risk”. Take surgery as an example (I simplify things for illustration). You score the severity and probability of occurrence of each event, multiply severity × probability, and add up the products for all events. For example, wrong site surgery would get severity = 5 (the highest) and probability = 1 (the lowest), for a score of 5. Waiting more than an hour for an appointment would get severity = 1 (the lowest) and probability = 5 (the highest), also a score of 5. BUT in general you can’t change severity, only probability, so in this case you would try to change the appointment process and ignore the wrong site surgery, whose probability is already at the lowest value of 1. Your overall number would improve (the initial 10 would be reduced) and you would declare victory. But in spite of the universal protocol (to prevent wrong site surgery), there is still room for improvement, so this IHI program focuses on less severe items and ignores the important ones.
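The scoring arithmetic in item 6 can be sketched in a few lines (the event names and scores are the hypothetical illustration from above, not the IHI’s actual worksheet):

```python
# Hypothetical failure modes with IHI-style scores: severity 1-5 (5 = worst),
# probability 1-5 (5 = most frequent). Total "risk" = sum of severity x probability.
events = {
    "wrong site surgery":        {"severity": 5, "probability": 1},
    "appointment wait > 1 hour": {"severity": 1, "probability": 5},
}

def total_risk(evts):
    return sum(e["severity"] * e["probability"] for e in evts.values())

print("before:", total_risk(events))  # 5*1 + 1*5 = 10

# Severity can't be changed, and the wrong-site-surgery probability is already
# at its minimum of 1 -- so the only score that CAN move is the wait time.
events["appointment wait > 1 hour"]["probability"] = 2
print("after: ", total_risk(events))  # 5*1 + 1*2 = 7
# The total "improved" from 10 to 7, yet nothing was done about the most
# severe hazard -- the perverse incentive described above.
```

The design flaw is visible in the arithmetic: because the two items start with equal scores, the method treats a shorter waiting room queue as equivalent progress to preventing wrong site surgery.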

What’s needed is training on standard methods in risk management for labs.

The ISO process and glucose meter standards

March 12, 2014


As readers probably know, I objected to the ISO standard 15197 for glucose meters issued in 2003 because 5% of the results were unspecified. My objections, sent in emails to the ISO chairholder before the standard was issued, were in vain. But as it turns out, I wasn’t the only one who questioned the logic of having 5% of the results unspecified. I came across an email from 2002 as I was cleaning up my PC. The response from the chairholder to the other person objecting was interesting. He said the comment had been brought up “too late in the process – in fact, outside the process. If the comment had come in with the US vote, we would have been obliged to address it.”

It seems to me if the comment is valid it needs to be addressed – period.

Glucose modeling battles

March 11, 2014


There is an upcoming article in Clinical Chemistry, accompanied by an editorial, which features another Boyd and Bruns modeling of glucose errors. This time the paper’s focus is on measurement frequency for CGM (continuous glucose monitoring). BUT … the modeling is the same – namely the use of the Westgard model, in which average bias and imprecision are equated to total error.
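For readers unfamiliar with the model at issue: in the Westgard formulation, total analytical error is built from average bias and imprecision alone; a common version is TE = |bias| + 1.96 × SD. A minimal sketch, with hypothetical glucose meter numbers, shows why this is the whole model – and therefore why error sources beyond bias and imprecision are invisible to it:

```python
def westgard_total_error(bias, sd, z=1.96):
    """Westgard-style total error: |bias| + z * SD.
    z = 1.96 gives two-sided 95% coverage under a Gaussian assumption."""
    return abs(bias) + z * sd

# Hypothetical meter performance: +3 mg/dL average bias, 5 mg/dL SD.
te = westgard_total_error(bias=3.0, sd=5.0)
print(f"modeled total error: {te:.1f} mg/dL")  # 3 + 1.96*5 = 12.8

# Note what is NOT in the formula: a single gross error (say, a user error
# of 40 mg/dL on one result) leaves this estimate completely unchanged,
# because the model sees only average bias and imprecision.
```

This is the limitation the upcoming paper tries to demonstrate: the formula summarizes typical performance but says nothing about rare large errors.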

Now I have an upcoming paper that tries to show the limitations of their approach – I realize that their paper was in the works before my paper appeared – so we’ll see if I have an impact. Previously, my efforts to show the limitations of this modeling were not successful, insofar as Boyd and Bruns continued to publish papers using their model as is.

Note: Subscription(s) may be required to view these papers.

