Calculating measurement uncertainty and GUM

October 16, 2017

A recent article (subscription required) suggests how to estimate measurement uncertainty for an assay to satisfy the requirements of ISO 15189.

As readers may know, I am a fan of neither ISO nor measurement uncertainty. The formal document, GUM – the Guide to the Expression of Uncertainty in Measurement – will make most clinical chemists' heads spin. Let's review how to estimate uncertainty according to GUM.

  1. Identify each item in an assay that can cause uncertainty and estimate its imprecision. For example, a probe picks up some patient sample; the amount of sample taken varies due to imprecision of the sampling mechanism.
  2. Any bias found must be eliminated. There is imprecision in the elimination of the bias, so the bias has been transformed into imprecision.
  3. Combine all sources of imprecision into a BHE (big hairy equation – my term, not GUM's).
  4. The final estimate of uncertainty is governed by a coverage factor. Thus, an uncertainty interval for 99% coverage is wider than one for 95%. Remember that an uncertainty interval for 100% coverage is minus infinity to plus infinity.
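The steps above can be sketched numerically. This is a minimal illustration, not the GUM procedure for any real assay: the uncertainty components and their values are invented, and the "big hairy equation" is reduced to its simplest case (independent components combined in quadrature).

```python
import math

# Hypothetical standard uncertainties (as CVs, %) for individual steps
# of an assay. The component names and values are invented for illustration.
u_components = {
    "sample_volume": 0.8,   # imprecision of the sampling probe
    "reagent_volume": 0.5,  # imprecision of reagent dispensing
    "calibration": 1.2,     # residual imprecision after bias elimination
}

# For independent components that enter linearly, the GUM combination
# reduces to root-sum-of-squares.
u_combined = math.sqrt(sum(u**2 for u in u_components.values()))

# Expanded uncertainty: multiply by a coverage factor k.
# k = 2 gives roughly 95% coverage; k = 2.58 roughly 99%,
# so the 99% interval is wider than the 95% interval.
U_95 = 2.0 * u_combined
U_99 = 2.58 * u_combined

print(round(u_combined, 2), round(U_95, 2), round(U_99, 2))
```

Note that as the coverage approaches 100%, k (and the interval) grows without bound.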

The above Clin Chem Lab Med article calculates uncertainty by mathematically combining the imprecision of controls with bias from external surveys. This is, of course, light years away from GUM. The fact that the authors call this measurement uncertainty could lead some to think it is the same as GUM.
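For contrast, my reading of the article's approach can be sketched in a few lines. The numbers are invented, and treating bias as just another quadrature term is one common simplification, not necessarily the authors' exact formula:

```python
import math

# Hypothetical inputs, invented for illustration:
cv_qc = 2.0      # long-term imprecision from quality control (%)
bias_eqa = 1.5   # bias estimated from an external quality survey (%)

# Simplified "measurement uncertainty": fold the bias in as if it were
# just another uncertainty component, then expand with k = 2.
u = math.sqrt(cv_qc**2 + bias_eqa**2)
U = 2.0 * u  # expanded uncertainty, coverage factor k = 2

print(round(u, 2), round(U, 2))
```

Compare this two-term shortcut with the component-by-component accounting GUM actually demands.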

Remember that in the authors' approach there are no patient samples, so errors due to interferences never enter the estimate. Moreover, patient samples can have errors that controls do not. Measurement uncertainty must include errors from the entire measurement process, not just the analytical error.

Perhaps the biggest problem is that a clinician may look at such an uncertainty interval as truth, when the likely true interval will be wider and sometimes much wider.


Two examples of why interferences are important and a comment about a “novel approach” to interferences

September 29, 2017

I had occasion to read an open access paper, "Full method validation in clinical chemistry." With a title like that, one expects the big picture, and that is what this paper offers. But when it discusses analytical method validation, the concept of testing for interfering substances is missing; precision, bias, and commutability are the topics covered. Now, one can say that an interference will cause a bias, and this is true, but nowhere do these authors mention testing for interfering substances.

The problem is that eventually papers like this are turned into guidelines, such as ISO 15197, the guideline for glucose meters. That guideline allows 1% of the results to be unspecified (it used to be 5%). This means that an interfering substance could cause a large error, resulting in serious harm, in 1% of the results. Given the frequency of glucose meter testing, this translates to one potentially dangerous result per month for an acceptable (according to ISO 15197) glucose meter. Had more attention been paid to interfering substances, and to the fact that they can be large and cause severe patient harm, the guideline might not have allowed 1% of the results to remain unspecified.
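The arithmetic behind "one potentially dangerous result per month" is simple. The testing frequency below is my assumption (a typical insulin-dependent regimen), not a figure from the guideline:

```python
tests_per_day = 4            # assumed testing frequency (my assumption)
days_per_month = 30
unspecified_fraction = 0.01  # ISO 15197 leaves 1% of results unspecified

tests_per_month = tests_per_day * days_per_month   # 120 tests
unspecified_per_month = tests_per_month * unspecified_fraction

print(round(unspecified_per_month, 1))  # about 1 result/month with no error bound
```

At the older 5% allowance, the same patient would have faced roughly six such results per month.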

I attended a local AACC talk given by Dr. Inker about GFR. The talk, which was very good, had a slide about a paper on creatinine interferences. After the talk, I asked Dr. Inker how she dealt with creatinine interferences on a practical level. She said there was no way to deal with the issue, which was echoed by the lab people there.

Finally, there is a paper by Dr. Plebani, who cites the paper: Vogeser M, Seger C. Irregular analytical errors in diagnostic testing – a novel concept (Clin Chem Lab Med 2017, ahead of print). Since this is not an open access paper, I didn't read it, but from what I can tell from Dr. Plebani's comments, the cited authors have discovered the concept of interfering substances and think that people should devote attention to it. Duh! Particularly irksome is the suggestion by Vogeser and Seger that "we suggest the introduction of a new term called the irregular (individual) analytical error." What's wrong with interference?


Proposed improvements to the Diabetes Technology Society surveillance protocol

March 27, 2017

I previously blogged about flaws in the Diabetes Technology Society surveillance protocol. I turned this entry into a commentary which has been accepted and should appear shortly in the J Diabetes Sci Technol.


Help with sigma metric analysis

January 27, 2017


I’ve been interested in glucose meter specifications and evaluations. There are three glucose meter specifications sources:

FDA glucose meter guidance
ISO 15197:2013
glucose meter error grids

There are various ways to evaluate glucose meter performance. What I wished to look at was the combination of sigma metric analysis and the error grid. I found this article about sigma metric analysis and glucose meters.

After looking at this, I understand how to construct these so-called method decision charts (MEDX). But here's my problem: in these charts, the total allowable error (TEa) is a constant, which is not the case for error grids, where TEa changes with the glucose concentration. Moreover, TEa is not even a single number at a specific glucose concentration, because the "A" zone limits of an error grid (I'm using the Parkes error grid) are not symmetrical.

I have simulated data with a fixed bias and constant CV throughout the glucose meter range. But with a changing TEa, the estimated sigma also changes with glucose concentration.
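To make the problem concrete, here is a sketch of the sigma metric computed with a concentration-dependent TEa. As a stand-in for the asymmetric Parkes "A" zone, I substitute simpler ISO 15197:2013-style limits (±15 mg/dL below 100 mg/dL, ±15% at or above); the fixed bias and constant CV mirror the simulated data:

```python
def tea_percent(glucose_mg_dl):
    # Stand-in tolerance: ISO 15197:2013-style limits expressed as a percent.
    # (The real Parkes "A" zone is asymmetric and shaped differently;
    # this symmetric version is only for illustration.)
    if glucose_mg_dl < 100:
        return 15.0 / glucose_mg_dl * 100.0  # +/-15 mg/dL as a percent
    return 15.0                              # +/-15%

def sigma_metric(glucose_mg_dl, bias_pct, cv_pct):
    # Standard sigma metric: (TEa - |bias|) / CV, all in percent.
    return (tea_percent(glucose_mg_dl) - abs(bias_pct)) / cv_pct

# Simulated meter: constant 2% bias and 3% CV across the range.
for g in (50, 75, 100, 200, 400):
    print(g, round(sigma_metric(g, 2.0, 3.0), 2))
```

With identical analytical performance, the meter is roughly nine sigma at 50 mg/dL but only about four sigma at 100 mg/dL and above, which is exactly the difficulty: there is no single sigma for the method.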

So I’m not sure how to proceed.


The Diabetes Technology Society (DTS) surveillance protocol doesn’t seem right

January 16, 2017


The Diabetes Technology Society (DTS) has published a protocol that allows a glucose meter to be tested to see whether it merits the DTS seal of approval. This was instituted because, for some FDA-approved glucose meters, the performance of post-release (for sale) meters did not meet ISO standards.

Before the DTS published their protocol, they published a new glucose meter error grid – the surveillance error grid.

But what I don't understand is that the error grid is not part of the DTS acceptance criteria for the seal of approval (the error grid is plotted as supplemental material). Basically, to get DTS approval one has to show that enough samples have differences from reference that fall within the ISO 15197:2013 standard. To be fair, the ISO standard and the "A" zone of the error grid have similar limits, but why not use the error grid? It was developed by clinicians, whereas the ISO standard is weighted toward industry members, and the error grid deals with results in the higher zones.

Moreover, the DTS does not deal with outliers other than to categorize them – their presence does not disqualify a meter from getting DTS acceptance as long as the percentage of results within ISO limits is high enough.

So if a meter has a 1% rate of values that could kill a patient, it could still gain DTS seal of approval. This doesn’t seem right.

 


The updated FDA POCT glucose meter performance standard has a big problem

October 21, 2016


As readers may be aware, I have ranted against glucose meter standards for some time. Although the standards have many flaws, the most egregious is the failure to specify 100% of the results. For POCT glucose meters, the CLSI standard C30-A2 (2003) adopted the ISO glucose meter standard 15197, which covers only 95% of the results.

In 2013, CLSI updated its standard, now called POCT 12-A3, to include 98% of the results.

In 2014, FDA issued a draft POCT glucose meter guidance which covers 100% of the results.

But, now FDA has updated its POCT glucose meter guidance to cover only 98% of the results.

There’s no reason to allow 2% of the results to be unspecified – I don’t know why the FDA did this.


The Lone Dissenter

April 12, 2016


The picture is a photo of Linda Thienpont receiving the Westgard quality award, presented by Jim Westgard. This was a highlight of the Antwerp meeting in which Linda’s contributions to laboratory medicine were recognized.

 

 

I was amused to see a photo on the Westgard blog about the Antwerp conference – Quality in the Spotlight. The photo is incidental to the blog content: it shows people holding up green cards, with the exception of one person holding up a red card. It's hard to see the person holding the red card, but it's me! The cards were votes by the attendees on questions posed by the convener – Henk Goldschmidt – at the end of the day's session.

The question to which everyone agreed except me went something like: "Should analytical variation always be less than biological variation?"

So here’s my reason for dissenting.

The Ricos database for glucose, available on the Westgard website, lists the TAE for glucose at either 5.5% or 6.96%. Yet the 2013 ISO 15197 performance standard for glucose meters is: TAE (95% of results) within ±15 mg/dL below 100 mg/dL and within ±15% at or above 100 mg/dL. Hence, the answer to the question "should analytical variation always be less than biological variation" is no!
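Putting numbers on the comparison: at every glucose concentration, the ISO 15197 allowance exceeds even the larger of the two Ricos figures.

```python
def iso_15197_tae_pct(glucose_mg_dl):
    # ISO 15197:2013, 95% of results: +/-15 mg/dL below 100 mg/dL,
    # +/-15% at or above 100 mg/dL, expressed here as a percent.
    if glucose_mg_dl < 100:
        return 15.0 / glucose_mg_dl * 100.0
    return 15.0

ricos_tae_pct = 6.96  # the larger of the two Ricos database figures

# At a sampling of concentrations, the ISO allowance is always bigger.
for g in (60, 80, 100, 150, 300):
    print(g, round(iso_15197_tae_pct(g), 1), ">", ricos_tae_pct)
```

At 100 mg/dL the ISO standard allows more than twice the biologically derived TAE, and below 100 mg/dL the gap only widens.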

In my one-man Milan response paper (subscription required) to the Milan conference, I had a section discussing the merits of biological variation vs. clinician opinion, but I dropped it in the final version. This material was in my Antwerp talk, though. Basically I said that I understand the rationale behind biological variation, and it makes sense to me, but I don't see how biological variation can trump clinician opinion, and glucose meters were the example I used.

I note in passing that Callum Fraser, the guru of biological variation, was in the audience; earlier in the day he presented a fabulous historical overview of biological variation. During his presentation I was nevertheless struck by some of the equations used for biological variation. For example, one of the equations was

CV (analytical) < ½ × CV (within-subject biological variation)

So why is it exactly 0.5? Why not 0.496 or 0.503? And how can it be 0.5 for all assays? Is there something about the 0.5 that is like pi?
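For what it's worth, the usual justification in the biological variation literature (which I am summarizing here, not endorsing) is that 0.5 caps the inflation of total variation: if the analytical CV is half the within-subject CV, the combined CV exceeds the biological CV alone by only about 11.8%, a level someone once judged acceptable. So the 0.5 is a convention, not a constant of nature like pi:

```python
import math

def total_cv_inflation(ratio):
    # Let analytical CV = ratio * within-subject biological CV.
    # The total CV is sqrt(CV_I^2 + CV_A^2), so the factor by which
    # it exceeds the biological CV alone is sqrt(1 + ratio^2).
    return math.sqrt(1.0 + ratio**2)

print(round(total_cv_inflation(0.5), 3))   # ~1.118: about 11.8% inflation
print(round(total_cv_inflation(0.25), 3))  # ~1.031: a stricter goal
print(round(total_cv_inflation(0.75), 3))  # 1.25:  a looser goal
```

One can see why 0.496 or 0.503 would work nearly as well, which is rather the point: the threshold is a round-number choice about tolerable inflation, applied uniformly to all assays.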