Two different meanings of an acceptable assay

January 29, 2020

For glucose meters or continuous glucose monitors, an error grid defines potential patient outcomes for results compared to a reference method. Results in zone A cause no harm, even though results at the outer edge of zone A have noticeable error. Results in zone B have the potential to cause slight harm, whereas results in zone E are potentially life threatening.
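To make the zone idea concrete, here is a minimal sketch in Python that tallies results into zones. The zone boundaries below are simplified placeholders for illustration, not the published Clarke or Parkes grid limits.

    # Hypothetical sketch: tally glucose results into error grid zones.
    # The boundaries in zone_of() are simplified placeholders, not the
    # published Clarke or Parkes grid limits.
    def zone_of(reference, measured):
        pct_error = abs(measured - reference) / reference * 100
        if pct_error <= 15:      # small error: assume no harm (zone A)
            return "A"
        if pct_error <= 40:      # moderate error: assume slight harm (zone B)
            return "B"
        return "E"               # gross error: assume potentially life threatening

    def zone_counts(pairs):
        counts = {}
        for ref, meas in pairs:
            z = zone_of(ref, meas)
            counts[z] = counts.get(z, 0) + 1
        return counts

    # One result with a huge error lands in the E zone
    print(zone_counts([(100, 102), (100, 118), (100, 350)]))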

Results in the E zone are thus unacceptable – we would never want them to occur. But in the real world, they do occur.

So is such an assay unacceptable to the FDA, meaning that it should be unavailable to users? The answer is no: the assay is acceptable because more harm would occur if it were removed.

Hence, the challenge is to focus on reducing the number of “unacceptable” results for an “acceptable” assay.


The problems with simulations

December 11, 2019

A recent article (subscription required) describes a simulation to determine how to meet guidelines for glucose meters. But this article is basically a clone of the Boyd and Bruns article from Clin Chem in 2001 that I debunked almost 20 years ago (1).

The problems with this simulation model are:

  1. It doesn’t account for interferences.
  2. It doesn’t model user error, software errors, or manufacturing problems. These problems can’t be easily modeled.

There is a move within the FDA to use “real world data” and “real world evidence.” There is a source for real world data, namely the adverse event database (often called MAUDE). In 2018, there were almost 20,000 adverse events for glucose meters. The authors’ simulation, using favorable bias and precision numbers, would predict “acceptable” glucose meters, yet it would predict none of the 20,000 adverse events.
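To see why, here is a rough sketch (with assumed favorable parameters) of a Boyd and Bruns style model that draws results from bias and imprecision alone; nothing in it can produce the gross errors behind real adverse events.

    # Sketch of a bias-plus-imprecision simulation (assumed model and
    # parameters, for illustration only). Interferences, user error,
    # software errors, and manufacturing problems are not in the model,
    # so it predicts essentially no gross errors.
    import random

    random.seed(1)
    bias_pct = 2.0          # assumed favorable bias
    cv_pct = 5.0            # assumed favorable imprecision (CV)
    n = 1_000_000

    gross_errors = 0
    for _ in range(n):
        truth = random.uniform(40, 400)        # mg/dL
        measured = truth * (1 + bias_pct / 100) + random.gauss(0, truth * cv_pct / 100)
        if abs(measured - truth) / truth > 0.40:   # gross error threshold (assumed)
            gross_errors += 1

    print(f"Predicted gross errors: {gross_errors} out of {n:,}")
    # Essentially zero, yet MAUDE logged almost 20,000 glucose meter
    # adverse events in 2018.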

Maybe we also need “real world specs.”

  1. Krouwer JS. How to Improve Total Error Modeling by Accounting for Error Sources Beyond Imprecision and Bias. Clin Chem 2001;47:1329-1330.



FDA’s secret database

November 7, 2019

Ok, the title’s a little dramatic. I’ve been exploring the adverse event database (MAUDE) for glucose meters, which is a publicly available database. But some records were summaries of multiple events. This shouldn’t be the case, since individual events are supposed to be entered.

It turns out there was an alternative summary reporting (ASR) program in place, which was not available to the public.

This was exposed in March 2019 by some excellent reporting by Christina Jewett.

In June of 2019, the FDA described ASR and said it had ended, with all data now in MAUDE (hence the summary records I found). The FDA starts out by saying “In the spirit of promoting public transparency...” Well, to be transparent, don’t have secret databases in the first place! The problem is that there was never any reasonable explanation for why this data was not available to the public. For example, one reason stated was “ASR reports were not made publicly available because they were not submitted in a format compatible with the public database.” Ok, so change the format!

The real basis for the ASR database is still a mystery.


Manufacturers classify events differently

October 20, 2019

In continuing to analyze the adverse event database, this time for continuous glucose monitors (CGMs), I noticed something very strange. There are two major manufacturers, each with their latest product. Here is the result. To recall, events are classified as M = malfunction, IN = injury, or D = death.

Manufacturer    Percent of events classified as injury
A               94%
B               0.2%
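For what it’s worth, here is a sketch of how such a comparison might be tabulated from a MAUDE export; the file name and column names are assumptions for illustration, not the actual MAUDE fields.

    # Hypothetical sketch: percent of events classified as injury, by
    # manufacturer, from an assumed CSV export of MAUDE records. The
    # file and column names are placeholders, not actual MAUDE fields.
    import csv
    from collections import defaultdict

    totals = defaultdict(int)
    injuries = defaultdict(int)

    with open("cgm_events.csv", newline="") as f:    # assumed export file
        for row in csv.DictReader(f):
            mfr = row["manufacturer"]
            totals[mfr] += 1
            if row["event_type"] == "IN":            # IN = injury
                injuries[mfr] += 1

    for mfr in sorted(totals):
        pct = 100 * injuries[mfr] / totals[mfr]
        print(f"{mfr}: {pct:.1f}% of {totals[mfr]} events classified as injury")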

But the text for these events is similar. How can this be?


Analyzing the adverse event database

October 7, 2019

Looking at glucose meter adverse events in the FDA database, I found with an SQL query that 73% of the time a form of the verb “allege” was used by manufacturers to describe the event, as in “the user alleged that …” This is, I guess, one way of acknowledging that these events are unverified.
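As a sketch of the kind of query involved, here it is expressed with Python’s sqlite3 against an assumed local copy of the event text; the database file, table, and column names are placeholders, not the actual MAUDE schema or my actual query.

    # Sketch of the kind of query used: fraction of event narratives
    # containing a form of "allege". File, table, and column names are
    # placeholders, not the actual MAUDE schema.
    import sqlite3

    conn = sqlite3.connect("maude_glucose.db")       # assumed local copy
    total, alleged = conn.execute(
        """
        SELECT COUNT(*),
               SUM(CASE WHEN LOWER(event_text) LIKE '%alleg%' THEN 1 ELSE 0 END)
        FROM glucose_events
        """
    ).fetchone()

    print(f"{100 * alleged / total:.0f}% of event descriptions use a form of 'allege'")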


Flaws in the ISO 15197 standard (for glucose meters)

August 13, 2019

Having had occasion to read the ISO 15197 standard (for glucose meters), I noticed these statements:

One of the allowed reasons to discard data is: “the blood-glucose monitoring system user recognizes that an error was made and documents the details”

This makes ISO 15197 a biased standard, because in the real world there will be user error, which generates outlier data.

And compounding things is this statement:

“Outlier data may not be eliminated from the data used in determining acceptable system accuracy, but may be excluded from the calculation of parametric statistics to avoid distorting estimates of central tendency and dispersion.”

The problem is that outliers representative of what happens in the real world should not be thrown out to keep statistics such as regression and precision estimates from being distorted. Rather, those statistics should not be used. An error grid is a perfectly adequate statistic to handle 100% of the data.
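A small numeric illustration with made-up data: one real-world outlier inflates the mean bias and SD (and would similarly distort a regression), whereas an error grid style tally simply counts it in the appropriate zone while keeping 100% of the data.

    # Illustration with made-up numbers: one outlier distorts parametric
    # statistics, but an error grid tally keeps 100% of the data and
    # just places the outlier in the appropriate zone.
    import statistics

    reference = 100.0
    results = [98, 101, 103, 99, 102, 350]     # 350 is a real-world outlier

    errors = [r - reference for r in results]
    print("mean bias:", round(statistics.mean(errors), 1))   # dragged off by one point
    print("SD:", round(statistics.stdev(errors), 1))         # inflated by one point

    # Error grid style summary (zone threshold is illustrative only)
    gross = sum(1 for r in results if abs(r - reference) / reference > 0.40)
    print(f"{gross} of {len(results)} results ({100 * gross / len(results):.0f}%) in a danger zone")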


Stakeholders that participate in performance standards

May 11, 2019

Performance standards are used in several ways: to gain FDA approval, to make marketing claims, and to test assays in routine use after release for sale.

Using glucose meters as an example…

Endocrinologists, who care for people with diabetes, would be highly suited to writing standards. They are in a position to know the magnitude of error that will cause an incorrect treatment decision.

The FDA, with its statisticians, biochemists, and physicians, would also be well suited.

Companies, through their regulatory affairs people, know their systems better than anyone, although one can argue that their main goal is to create the least burdensome standard possible.

So in the case of glucose meters, at least for the 2003 ISO 15197 standard, regulatory affairs people ran the show.