Proposed improvements to the Diabetes Technology Society surveillance protocol

March 27, 2017

I previously blogged about flaws in the Diabetes Technology Society surveillance protocol. I turned this entry into a commentary, which has been accepted and should appear shortly in J Diabetes Sci Technol.


Help with sigma metric analysis

January 27, 2017


I’ve been interested in glucose meter specifications and evaluations. There are three sources of glucose meter specifications:

FDA glucose meter guidance
ISO 15197:2013
glucose meter error grids

There are various ways to evaluate glucose meter performance. What I wanted to look at was the combination of sigma metric analysis and the error grid. I found this article about sigma metric analysis and glucose meters.

After looking at this, I understand how to construct these so-called method decision charts (MEDx). But here’s my problem: in these charts, the total allowable error (TEa) is a constant – this is not the case for TEa in error grids, where TEa changes with the glucose concentration. Moreover, TEa is not even single-valued at a specific glucose concentration, because the “A” zone limits of an error grid (I’m using the Parkes error grid) are not symmetrical.

I have simulated data with a fixed bias and constant CV throughout the glucose meter range. But with a changing TEa, the estimated sigma also changes with glucose concentration.
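The effect can be sketched in a few lines of Python. This is my own illustration, not code from the article: the piecewise TEa below mimics ISO 15197:2013-style limits (15 mg/dL below 100 mg/dL, 15% at or above), because the actual Parkes “A” zone limits are asymmetrical and would need separate upper and lower TEa values at each concentration.

```python
# Illustrative only: a glucose-dependent TEa modeled on ISO-style limits,
# not the (asymmetrical) Parkes "A" zone boundaries.

def tea_percent(glucose_mg_dl: float) -> float:
    """Allowable total error as a percent of the glucose concentration."""
    if glucose_mg_dl < 100:
        return 15.0 / glucose_mg_dl * 100.0  # fixed 15 mg/dL, expressed as %
    return 15.0  # 15% at or above 100 mg/dL

def sigma_metric(glucose_mg_dl: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma = (TEa - |bias|) / CV, all in percent units."""
    return (tea_percent(glucose_mg_dl) - abs(bias_pct)) / cv_pct

# Fixed 2% bias and constant 3% CV across the range, as in the simulation:
for glucose in (50, 75, 100, 150, 300):
    print(glucose, round(sigma_metric(glucose, bias_pct=2.0, cv_pct=3.0), 2))
```

Even with bias and CV held constant, sigma falls from about 9.3 at 50 mg/dL to about 4.3 at and above 100 mg/dL, which is exactly why a single MEDx chart with one TEa line doesn’t fit the error grid case.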

So I’m not sure how to proceed.


The Diabetes Technology Society (DTS) surveillance protocol doesn’t seem right

January 16, 2017


The Diabetes Technology Society (DTS) has published a protocol under which a glucose meter can be tested to see whether it merits the DTS seal of approval. This program was instituted because, for some FDA-approved glucose meters, the performance of post-release (for sale) meters from some companies did not meet ISO standards.

Before the DTS published their protocol, they published a new glucose meter error grid – the surveillance error grid.

But what I don’t understand is that the error grid is not part of the DTS acceptance criteria for the seal of approval (the error grid is plotted only as supplemental material). Basically, to get DTS approval, one has to show that enough samples have differences from reference that fall within the ISO 15197:2013 standard. To be fair, the ISO standard and the “A” zone of the error grid have similar limits. But why not use the error grid? The error grid was developed by clinicians, whereas the ISO standard committee is weighted toward industry members, and the error grid also deals with results in the higher zones.

Moreover, the DTS does not deal with outliers other than to categorize them – their presence does not disqualify a meter from getting the DTS seal of approval as long as the percentage of results within ISO limits is high enough.

So if a meter has a 1% rate of values that could kill a patient, it could still gain DTS seal of approval. This doesn’t seem right.
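A toy calculation makes the loophole concrete. This is my own sketch (not the DTS protocol code), using the ISO 15197:2013 accuracy limits and a hypothetical set of 100 paired results:

```python
# Sketch: a meter with 96% of results within ISO 15197:2013 limits passes a
# 95%-within-limits criterion even though 1% of its results are dangerous.

def within_iso_15197(reference_mg_dl: float, meter_mg_dl: float) -> bool:
    """ISO 15197:2013 limits: +/-15 mg/dL below 100 mg/dL, +/-15% at/above."""
    if reference_mg_dl < 100:
        return abs(meter_mg_dl - reference_mg_dl) <= 15
    return abs(meter_mg_dl - reference_mg_dl) <= 0.15 * reference_mg_dl

# 100 hypothetical paired results: 96 accurate, 3 modestly off, and 1
# life-threatening (true hypoglycemia at 40 mg/dL read as a normal 120).
pairs = [(120, 125)] * 96 + [(120, 150)] * 3 + [(40, 120)]
pct_within = 100 * sum(within_iso_15197(r, m) for r, m in pairs) / len(pairs)
print(pct_within)  # 96.0 -> passes a 95% criterion despite the dangerous result
```

The percentage-within-limits criterion simply cannot see how wrong the failing 4% are, which is the job an error grid’s higher zones were designed to do.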



The updated FDA POCT glucose meter performance standard has a big problem

October 21, 2016


As readers may be aware, I have ranted against glucose meter standards for some time. Although the standards have many flaws, the most egregious one is the failure to specify 100% of the results. For POCT glucose meters, the CLSI standard C30-A2 (2003) adopted the ISO glucose meter standard 15197, which covers only 95% of the results.

In 2013, CLSI updated its standard, now called POCT 12-A3 to include 98% of the results.

In 2014, FDA issued a draft POCT glucose meter guidance which covers 100% of the results.

But, now FDA has updated its POCT glucose meter guidance to cover only 98% of the results.

There’s no reason to allow 2% of the results to be unspecified – I don’t know why the FDA did this.


The Lone Dissenter

April 12, 2016


The picture is a photo of Linda Thienpont receiving the Westgard quality award, presented by Jim Westgard. This was a highlight of the Antwerp meeting in which Linda’s contributions to laboratory medicine were recognized.


I was amused to see a photo on the Westgard blog about the Antwerp conference – Quality in the Spotlight. The photo is incidental to the blog content – it shows people holding up green cards, with the exception of one person holding up a red card. It’s hard to see the person holding up the red card, but it’s me! The cards were how attendees voted on questions asked by the convener – Henk Goldschmidt – at the end of the day’s session.

The question to which everyone agreed except me went something like: “Should analytical variation always be less than biological variation?”

So here’s my reason for dissenting.

The Ricos database for glucose, available on the Westgard website, lists the TAE for glucose at either 5.5% or 6.96%. Yet the 2013 ISO 15197 performance standard for glucose meters allows a TAE (for 95% of results) of ±15 mg/dL below 100 mg/dL and ±15% at or above 100 mg/dL. Hence, the answer to the question “should analytical variation always be less than biological variation?” is no!
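The gap is easy to quantify. A short sketch (my own arithmetic, using the figures above; below 100 mg/dL the percentage limit comes from the fixed 15 mg/dL allowance):

```python
# Compare the biological-variation TAE for glucose (Ricos database, 6.96%)
# with the ISO 15197:2013 meter limit at a few concentrations.

ricos_tae_pct = 6.96

def iso_limit_pct(glucose_mg_dl: float) -> float:
    """ISO 15197:2013 allowance expressed as a percent of concentration."""
    if glucose_mg_dl < 100:
        return 15.0 / glucose_mg_dl * 100.0  # fixed 15 mg/dL, as a percent
    return 15.0

for glucose in (60, 80, 100, 200):
    print(glucose, round(iso_limit_pct(glucose), 1), "vs", ricos_tae_pct)
```

At every concentration the accepted meter standard allows two to four times the biological-variation TAE, so an analytical goal derived from biological variation clearly does not always govern.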

In my one-man Milan response paper (subscription required) to the Milan conference, I had a section discussing the merits of biological variation vs. clinician opinion, but I dropped it from the final version. This material was in my Antwerp talk. Basically I said that I understand the rationale behind biological variation and it makes sense to me, but I don’t see how biological variation can trump clinician opinion – and glucose meters were the example I used.

I note in passing that Callum Fraser, the guru of biological variation, was in the audience – he presented earlier in the day a fabulous historical overview of biological variation. During his presentation I was nevertheless struck by some of the equations used for biological variation. For example, one of the equations was

CV(analytical) < ½ CV(within-subject biological variation)

So why is it exactly 0.5? Why not 0.496 or 0.503? And how can it be 0.5 for all assays? Is there something about the 0.5 that is like pi?


How loss of market share affects standards

January 19, 2016

When the 2003 ISO standard for glucose meter performance was prepared, the regulatory affairs people of industry controlled the standard. The standard called for 95% of results above 75 mg/dL to be within a total error of ± 20%. The standard was said to be based on medical requirements – clearly it was not based on state of the art, since glucose meters perform better.

A problem probably unforeseen by these regulatory people was that a bunch of new players entered the glucose meter market and of course had no trouble getting FDA approval – the FDA used the ISO standard in its approval process. The number of meter brands on the market grew from 32 in 2005 to 87 in 2014. And some of the new meters sold their strips at a much lower price than the major manufacturers. This caused the four major companies to lose some market share.

Industry still plays a dominant role in glucose meter standards, but it seems that the original regulatory affairs people are out. Now, industry is working with the Diabetes Technology Society to certify glucose meters under new performance standards. Thus, meters that have FDA approval will be tested according to the tighter 2013 ISO standard and only meters that pass will receive a seal of approval from the Diabetes Technology Society.

Klonoff DC, Lias C, Beck S. Development of the Diabetes Technology Society Blood Glucose Monitor System Surveillance Protocol. J Diabetes Sci Technol, in press. Available at http://dst.sagepub.com/content/early/2015/12/10/1932296815614587.full.pdf+html


Word doesn’t recognize measurand

October 1, 2015


I was working on a paper and decided to comply with the nomenclature expected by the journal, so I used the word “measurand.” The word was underlined as unknown by the dictionary used by Word. I went to an online version of the Merriam-Webster dictionary, and no match was found for measurand. So much for ISO nomenclature.