Stakeholders who participate in writing performance standards

May 11, 2019

Performance standards are used in several ways: to gain FDA approval, to support marketing claims, and to test assays that are in routine use after release for sale.

Using glucose meters as an example…

Endocrinologists, who care for people with diabetes, would be highly suited to writing standards. They are in a position to know the magnitude of error that will cause an incorrect treatment decision.

FDA, with its statisticians, biochemists, and physicians, would also be well suited.

Companies, through their regulatory affairs people, know their systems better than anyone, although one can argue that their main goal is to create the least burdensome standard possible.

So in the case of glucose meters, at least for the 2003 ISO 15197 standard, regulatory affairs people ran the show.


Just published

May 8, 2019

The article, “Getting More Information From Glucose Meter Evaluations” has just been published in the Journal of Diabetes Science and Technology.

Our article makes several points. In the ISO 15197 glucose meter standard (2013 edition), one is supposed to prepare a table showing the percentage of system accuracy results within ±5, ±10, and ±15 mg/dL. Our recommendation is to graph these results in a mountain plot; it is a perfect example of when a mountain plot should be used.
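For readers who haven't seen one, here is a minimal sketch in Python of how a mountain plot can be constructed, using simulated meter-minus-reference differences rather than real evaluation data: sort the differences, compute their empirical percentiles, and fold the percentiles above 50 so the curve peaks at the median.

```python
# Minimal mountain (folded empirical CDF) plot sketch.
# The differences below are simulated placeholders, not real evaluation results.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
differences = rng.normal(loc=1.0, scale=6.0, size=500)  # simulated meter - reference, mg/dL

# Empirical percentile of each sorted difference, then "fold" at the 50th
# percentile so both tails drop to zero and the median sits at the peak.
sorted_diff = np.sort(differences)
pct = 100.0 * (np.arange(1, sorted_diff.size + 1) - 0.5) / sorted_diff.size
folded = np.where(pct <= 50.0, pct, 100.0 - pct)

plt.plot(sorted_diff, folded)
plt.xlabel("Meter minus reference (mg/dL)")
plt.ylabel("Folded percentile (%)")
plt.title("Mountain plot of glucose meter differences")
plt.show()
```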

Now I must confess that until we prepared this paper, I had not read ISO 15197 (2013). But based on some reviewer comments, it was clear that I had to bite the bullet, send money to ISO, and get the standard. Reading it was an eye-opener. The accuracy requirement is:

95% of results within ±15 mg/dL (at reference concentrations < 100 mg/dL) or within ±15% (at ≥ 100 mg/dL), and
99% of results within the A and B zones of an error grid
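As an illustration only, here is a sketch in Python of how the first criterion could be tallied from paired meter and reference values (the numbers are made up); the error grid zoning of the second criterion depends on the published zone boundaries and is not reproduced here.

```python
# Sketch of tallying the ISO 15197:2013 system accuracy criterion against
# paired meter and reference values (made-up numbers, in mg/dL).
import numpy as np

reference = np.array([65.0, 90.0, 110.0, 150.0, 250.0])
meter     = np.array([70.0, 101.0, 118.0, 160.0, 240.0])

# Below 100 mg/dL the limit is an absolute +/-15 mg/dL; at or above
# 100 mg/dL it is a relative +/-15% of the reference value.
low = reference < 100.0
within = np.where(low,
                  np.abs(meter - reference) <= 15.0,
                  np.abs(meter - reference) <= 0.15 * reference)

pct_within = 100.0 * within.mean()
print(f"{pct_within:.1f}% of results within the limits (criterion: at least 95%)")
```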

I knew these requirements already. But what I didn't know until I read the standard is that user error from the intended population is excluded from this accuracy protocol. Moreover, even the healthcare professionals performing the study could exclude any result if they thought they had made an error. I can imagine how this might work: that result can't be right…

In any case, as previously mentioned in this blog, the requirement for 99% of results to be within the A and B zones of an error grid is dropped in the section where users are tested.

In the section where results may be excluded, failure to obtain a result is listed, since if there's no result, you can't compute a difference from reference. But there's no requirement for the percentage of attempts that must yield a result. This is ironic, since section 5 is devoted to reliability. How can you have a section on reliability without a failure rate metric?
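The missing metric could be as simple as the fraction of attempts that fail to return a result; a sketch with invented counts:

```python
# Sketch of a simple failure rate metric: the fraction of attempts that
# give no result at all (counts are invented for illustration).
attempts_without_result = 7
total_attempts = 500
failure_rate = attempts_without_result / total_attempts
print(f"Failure rate: {100 * failure_rate:.2f}% "
      f"({attempts_without_result} of {total_attempts} attempts gave no result)")
```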