Two examples of why interferences are important and a comment about a “novel approach” to interferences

September 29, 2017

I had occasion to read an open access paper, “Full method validation in clinical chemistry.” With a title like that, one expects the big picture, and that is what the paper provides. But when it discusses analytical method validation, the concept of testing for interfering substances is missing: precision, bias, and commutability are the topics covered. Now one can say that an interference will cause a bias, and this is true, but nowhere do these authors mention testing for interfering substances.

The problem is that eventually these papers are turned into guidelines, such as ISO 15197, which is the guideline for glucose meters. That guideline allows 1% of the results to be unspecified (it used to be 5%). This means that an interfering substance could cause a large error resulting in serious harm in 1% of the results. Given the frequency of glucose meter testing (an insulin-dependent patient testing several times a day generates on the order of 100 results per month), this translates to about one potentially dangerous result per month for an acceptable (according to ISO 15197) glucose meter. If more attention had been paid to interfering substances, and to the fact that they can be large and cause severe patient harm, the guideline might not have allowed 1% of the results to remain unspecified.

I attended a local AACC talk given by Dr. Inker about GFR. The talk, which was very good, included a slide about a paper on creatinine interferences. After the talk, I asked Dr. Inker how she dealt with creatinine interferences on a practical level. She said there was no way to deal with this issue, which was echoed by the lab people there.

Finally, there is a paper by Dr. Plebani, which cites: Vogeser M, Seger C. Irregular analytical errors in diagnostic testing – a novel concept (Clin Chem Lab Med 2017, ahead of print). Since this is not an open access paper, I didn’t read it, but from what I can tell from Dr. Plebani’s comments, the cited authors have discovered the concept of interfering substances and think that people should devote attention to it. Duh! Particularly irksome is Vogeser and Seger’s suggestion: “we suggest the introduction of a new term called the irregular (individual) analytical error.” What’s wrong with “interference”?


HbA1c – use the right model, please

August 31, 2017

I had occasion to read a paper (the CCLM paper) about HbA1c goals and evaluation results. This paper refers to an earlier paper (the CC paper) which says that Sigma Metrics should be used for HbA1c.

So here are some problems with all of this.

The CC paper says that TAE (which they use) is derived from bias and imprecision. I have many blog entries as well as peer-reviewed publications going back to 1991 saying that this approach is flawed. That the authors chose to ignore this prior work doesn’t mean the prior work doesn’t exist (it does) or that it is somehow not relevant (it is).

In the CC paper, controls were used to arrive at conclusions. But real data involve patient samples, so the conclusions are not necessarily transferable. And in the CCLM paper, patient samples are used without any mention of whether the CC paper’s conclusions still apply.

In the CCLM paper, precision studies, a method comparison, linearity, and interference studies were carried out. This is hard to understand, since the TAE model of (absolute) average bias + 2× imprecision accounts for neither linearity nor interference studies.
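For concreteness, here is my reading of the model being used (the standard formulation of TAE and the sigma metric; the notation is mine, not the papers’):

```latex
\mathrm{TAE} = \lvert \overline{\mathrm{bias}} \rvert + 2\,\mathrm{CV},
\qquad
\sigma = \frac{\mathrm{TEa} - \lvert \overline{\mathrm{bias}} \rvert}{\mathrm{CV}}
```

Neither expression has a slot for a sample-dependent interference bias, which is the point of what follows.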

The linearity study says it followed CLSI EP6, but no results are reported to show this (e.g., no higher-order polynomial regressions). The graphs shown do look linear.

But the interference studies are more troubling. From what I can make of it, the target values are given ±10% bands, and any candidate interfering substance whose data do not fall outside these bands is said not to interfere clinically (i.e., the bias is less than 10% in absolute value). But that does not mean there is no bias! To see how silly this is, one could just as well say that if the average bias from regression were less than 10% in absolute value, it should be set to zero since there was no clinical interference.
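Here is a minimal sketch of that logic (my own illustration with made-up numbers, not the paper’s data): a substance producing an 8% bias passes the ±10% screen, yet the bias obviously remains.

```python
# Hypothetical example: an interferent shifts results by 8%.
target = 7.0                 # assumed HbA1c target value (%)
interference_bias = 0.08     # assumed 8% proportional bias

measured = target * (1 + interference_bias)

# The paper's apparent rule: within +/-10% of target = "no clinical interference"
passes_screen = abs(measured - target) / target <= 0.10
print(f"measured = {measured:.2f}, passes screen: {passes_screen}")
# measured = 7.56, passes screen: True -- but the 8% bias is still there
```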

The real problem is that the authors’ chosen TAE model cannot account for interferences: such biases are not in their model, yet interference biases still contribute to TAE. And what do the reported six sigma values mean? They are valid only for samples containing no interfering substances. That’s neither practical nor meaningful.

Now one could model things better by adding an interference term to TAE and simulating various patient populations as a function of interfering substances (including the occurrence of multiple interfering substances). But Sigma Metrics, to my knowledge, cannot do this.
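A minimal sketch of what such a simulation might look like (all prevalences, biases, and limits below are my assumptions for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000               # simulated patient samples
cv = 0.02                 # assumed analytical CV (2%)
avg_bias = 0.01           # assumed average proportional bias (1%)

# Hypothetical interferents: (prevalence in the population, proportional bias)
interferents = [(0.05, 0.08), (0.02, -0.12)]

error = avg_bias + rng.normal(0.0, cv, n)    # average bias + imprecision
for prevalence, bias in interferents:
    has_substance = rng.random(n) < prevalence
    error += has_substance * bias            # interferents can co-occur

tea = 0.06                # assumed allowable total error (6%)
print(f"fraction outside TEa: {np.mean(np.abs(error) > tea):.2%}")
# With interferents present, far more results exceed TEa than the
# bias + 2*CV model predicts.
```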

Another comment: although HbA1c is not glucose, the subject matter is diabetes, and in the glucose meter world error grids are a well-known way to evaluate required clinical performance. Yet the term “error grid” does not appear in either paper.

Error grids account for the entire range of the assay, whereas Sigma Metrics, as applied here, seem to be evaluated at only one point in the assay.


Proposed improvements to the Diabetes Technology Society surveillance protocol

March 27, 2017

I previously blogged about flaws in the Diabetes Technology Society surveillance protocol. I turned this entry into a commentary which has been accepted and should appear shortly in the J Diabetes Sci Technol.


Antwerp talk about total error

March 12, 2017

Looking at my blog stats, I see that a lot of people are reading the total analytical error vs. total error post. So below are the slides from a talk I gave at a conference in Antwerp in 2016, called The “total” in total error. The slides have been updated. Because they were written to accompany a talk, the slides are not as effective as the talk itself.

TotalError


Do manufacturers always publish that their glucose meter is best?

February 27, 2017


After reading an evaluation article whose conclusion was that the manufacturer’s glucose meter was best, I went through some journals to see how often this happens. I searched two journals covering 2012-2016.

To be included in the list below, the article had to meet the following criteria:

  • The study was sponsored by a manufacturer
  • There were 2 or more meters, not all made by the sponsor

The results are shown below. Eight articles met the criteria. In some articles, the conclusion stated outright that the sponsor’s glucose meter was best; in others, I had to look through the data. If the sponsor’s glucose meter was best, the score was 1; if some other manufacturer’s glucose meter was best, the score was 0; and in one case there was a tie, so the score was 0.5. N refers to the number of meters in the article.

Reference | Company | Meter | Winner | N | Score

J Diabetes Sci Technol

2016 1316-1323 | Sanofi | BGStar / iBGStar | BGStar / iBGStar | 5 | 1
2015 1041-1050 | Bayer | Contour | Contour / Accu-Chek (tie) | 4 | 0.5
2013 1294-1304 | Bayer | Contour | Contour | 5 | 1
2012 1060-1075 | Roche | Accu-Chek | None declared best, but Freestyle was best | 43 | 0
2012 547-554 | Abbott | Optimum Xceed | Optimum Xceed | 6 | 1

Diabetes Technology and Therapeutics

2014 8-15 | Bayer | Contour | Contour | 5 | 1
2014 113-122 | Ypsomed | Mylife Para / Mylife Unio | None declared best | 12 | 0
2012 330-337 | Abbott | Freestyle | Freestyle | 5 | 1

So 69% of the time (a total score of 5.5 out of 8 articles), the manufacturer’s glucose meter was best.


Help with sigma metric analysis

January 27, 2017


I’ve been interested in glucose meter specifications and evaluations. There are three sources of glucose meter specifications:

  • FDA glucose meter guidance
  • ISO 15197:2013
  • glucose meter error grids

There are various ways to evaluate glucose meter performance. What I wanted to look at was the combination of sigma metric analysis and the error grid. I found this article about sigma metric analysis and glucose meters.

After looking at this, I understand how to construct these so-called method decision charts (MEDX). But here’s my problem: in these charts, the total allowable error (TEa) is a constant, which is not the case for error grids, where TEa changes with the glucose concentration. Moreover, TEa is not even symmetrical at a specific glucose concentration, because the “A” zone limits of an error grid (I’m using the Parkes error grid) are not symmetrical.

I have simulated data with a fixed bias and constant CV throughout the glucose meter range. But with a changing TEa, the estimated sigma also changes with glucose concentration.
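Here is a sketch of the difficulty. The A-zone limits below are made-up placeholders, not the actual Parkes limits, and because the limits are asymmetrical, sigma has to be computed separately against the upper and lower allowable errors:

```python
def a_zone_limits(glucose):
    """Hypothetical (lower, upper) allowable error in mg/dL -- placeholders,
    NOT the actual Parkes error grid boundaries."""
    if glucose < 100:
        return (-15.0, 15.0)                     # flat band at low glucose
    return (-0.12 * glucose, 0.18 * glucose)     # asymmetric percent band

bias = 3.0      # assumed constant bias, mg/dL
cv = 0.04       # assumed constant CV (4%)

for glucose in (50, 100, 200, 400):
    lo, hi = a_zone_limits(glucose)
    sd = cv * glucose                            # SD in mg/dL
    sigma_upper = (hi - bias) / sd               # margin to the upper limit
    sigma_lower = (bias - lo) / sd               # margin to the lower limit
    print(f"glucose {glucose:3d}: sigma = {sigma_lower:.1f} (low) / {sigma_upper:.1f} (high)")
# The same meter yields a different sigma at every glucose concentration,
# and a different sigma in each direction.
```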

So I’m not sure how to proceed.


The Diabetes Technology Society (DTS) surveillance protocol doesn’t seem right

January 16, 2017


The Diabetes Technology Society (DTS) has published a protocol that allows a glucose meter to be tested to see if it meets the DTS seal of approval. This was instituted because, for some FDA-approved glucose meters, the performance of post-release (for sale) meters did not meet ISO standards.

Before the DTS published their protocol, they published a new glucose meter error grid – the surveillance error grid.

But what I don’t understand is that the error grid is not part of the DTS acceptance criteria for gaining the DTS seal of approval (the error grid is plotted as supplemental material). Basically, to get DTS approval, one has to show that enough samples have differences from reference that fall within the ISO 15197:2013 standard. To be fair, the ISO standard and the “A” zone of the error grid have similar limits, but why not use the error grid? It was developed by clinicians, whereas the ISO standard is weighted by industry members, and the error grid deals with results in the higher (more dangerous) zones.

Moreover, the DTS does not deal with outliers other than to categorize them – their presence does not disqualify a meter from getting DTS acceptance as long as the percentage of results within ISO limits is high enough.

So if a meter has a 1% rate of values that could kill a patient, it could still gain DTS seal of approval. This doesn’t seem right.
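To make the point concrete, here is a minimal sketch of the acceptance logic as I read it. I am using the commonly cited ISO 15197:2013 accuracy limits (±15 mg/dL below 100 mg/dL, ±15% at or above); treat these limits as my assumptions and check the standard itself before relying on them:

```python
def within_iso_15197(meter, reference):
    """Commonly cited ISO 15197:2013 limits (my assumption; verify against the
    standard): +/-15 mg/dL when reference < 100 mg/dL, else +/-15%."""
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def passes(pairs, required_fraction=0.95):
    """Accept if enough (meter, reference) pairs fall within the limits."""
    ok = sum(within_iso_15197(m, r) for m, r in pairs)
    return ok / len(pairs) >= required_fraction

# 99 perfect results plus one potentially lethal error: the meter reads
# 350 mg/dL when the true value is 40 mg/dL (hypoglycemia reported as
# severe hyperglycemia), yet the acceptance criterion is still met.
pairs = [(100, 100)] * 99 + [(350, 40)]
print(passes(pairs))   # True -- a 1% rate of dangerous results passes
```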