Minimum system accuracy performance criteria – part 2

February 13, 2019

I had occasion to read the ISO 15197:2013 standard for blood glucose meters, specifically Section 6.3.3, “minimum system accuracy performance criteria.”

Note that this requirement is what is typically cited as the accuracy requirement for glucose meters.

But the two Notes in this section say that testing meters with actual users is covered elsewhere in the document (Section 8). Thus, because of the protocol used, the system accuracy estimate does not account for all errors, since user errors are excluded. Hence, the system accuracy requirement is not the total error of the meter but rather a subset of total error.

Moreover, in the user test section, the acceptance goals are different from those in the system accuracy section!

Ok, I get it. The authors of the standard want to separate two major error sources: error from the instrument and reagents (the system error) and errors caused by users.

But there is no attempt to reconcile the two estimates. And if one considers the user test as a total error test, which is reasonable (i.e., it includes both system accuracy and user error), then the percentage of results that must meet goals is 95%. The 99% requirement went poof.
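
To see why the two estimates do not reconcile, here is a minimal sketch in Python. The numbers are purely illustrative assumptions: suppose system error and user error are independent and that each source on its own leaves 99% of results acceptable.

# Illustrative only: assumed rates, with system error and user error treated as independent.
p_system_ok = 0.99      # assumed: the system alone just meets the 99% criterion
p_no_user_error = 0.99  # assumed: user technique is acceptable 99% of the time
p_total_ok = p_system_ok * p_no_user_error
print(f"Fraction of results acceptable overall: {p_total_ok:.4f}")  # 0.9801

Under these assumed rates, a meter that just meets the 99% system accuracy criterion delivers only about 98% acceptable results once user error is included, which is why system accuracy understates total error and why a 95% user test criterion is not a substitute for the 99% requirement.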

 


Minimum system accuracy performance criteria

February 13, 2019

I had occasion to read the ISO 15197:2013 standard about blood glucose meters and was struck by the words “minimum system accuracy performance criteria” (6.3.3).

This reminds me of the movie “Office Space”, where Jennifer Aniston, who plays a waitress, is chastised for wearing just the minimum number of pieces of flair (buttons on her uniform). Sorry if you haven’t seen the movie.

Or the time I participated in developing an earlier version of the CLSI method comparison standard EP9. The discussion at the time was about arriving at a minimum sample size. The A3 version says at least 40 samples should be run. I pointed out that 40 would become the default sample size.

Back to glucose meters. No one will report that they have met the minimum accuracy requirements. They will always report they have exceeded the accuracy requirements.

 


Published in the Journal of Diabetes Science and Technology

December 24, 2018

The article Reducing Glucose Meter Adverse Events by Using Reliability Growth With the FDA MAUDE Database is now online.

Previous blog entries have mentioned the MAUDE database. The proposal is for manufacturers to use reliability growth to reduce glucose meter adverse events.


Glucose Meter User Error – it’s not a rare event

December 6, 2018

Glucose meter evaluations, which are tightly controlled studies, are numerous and always show the same results – virtually all of the values are within the A and B zones of a glucose meter error grid. But a recent paper (1) shows that in the real world things are different.

In this paper, nurses and aides in a hospital performed glucose meter tests and repeated a test when they felt that, for some reason, the result they obtained was suspect. Note that in a hospital, with the patient right next to the nurse, a glucose value of less than 40 mg/dL in an asymptomatic patient would be suspect.

The good news is that in this hospital, retesting suspect results is part of the quality plan. Often the retested result was normal (e.g., 100 mg/dL or so higher than the first result). But of interest is the frequency of repeats, which was 0.8% (over 600,000 tests in the study). This leads to several comments…

The number of glucose meter tests performed each year is in the billions. The result from this hospital study implies that the number of incorrect results is in the millions.
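
As a back-of-the-envelope check (in Python), take the 0.8% repeat rate from the study and an assumed annual volume of 2 billion tests, a figure chosen only to illustrate the scale:

# Back-of-the-envelope estimate; the annual test volume is an assumption.
annual_tests = 2e9    # assumed: roughly 2 billion glucose meter tests per year
suspect_rate = 0.008  # repeat ("suspect result") rate observed in the study
suspect_per_year = annual_tests * suspect_rate
print(f"Implied suspect results per year: {suspect_per_year:,.0f}")  # 16,000,000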

User error is not rare!

Typical glucose meter evaluations, which never have this problem, are biased. Those evaluations are not representative of the real world.

The challenge is greater for lay users performing self-testing (no nurse is present), especially since asymptomatic hypoglycemia can occur.

References

  1. Corl D, Yin T, Ulibarri M, Lien H, Tylee T, Chao J, Wisse BE. What Can We Learn From Point-of-Care Blood Glucose Values Deleted and Repeated by Nurses? Journal of Diabetes Science and Technology 2018;12(5):985–991.

Published in Journal of Diabetes Science and Technology

October 17, 2018

 

Why the Details of Glucose Meter Evaluations Matters: https://doi.org/10.1177/1932296818803113


Parallel universes: Glucose meter evaluations and the FDA MAUDE database

September 11, 2018

I’ve been looking at the FDA adverse event database, MAUDE. For the first 7 months of 2018, there were over 500,000 adverse events across all medical devices, of which just over 10,000 were for glucose meters. Other diagnostic tests appear as well, but it’s not surprising that glucose meters dominate, since billions of glucose meter tests are performed each year. Note that one person testing themselves three times daily yields over 1,000 tests per year, and there are millions of people like this.
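
The per-person arithmetic behind that test volume is simple; here is a quick Python sketch in which the count of frequent self-testers is an assumed figure used only for illustration:

# Simple arithmetic behind the test-volume claim; the number of frequent
# self-testers is an assumption used only for illustration.
tests_per_day = 3
tests_per_person_per_year = tests_per_day * 365  # 1,095 tests per person per year
frequent_testers = 2e6                           # assumed: a couple of million such people
total_tests = tests_per_person_per_year * frequent_testers
print(f"Tests per person per year: {tests_per_person_per_year}")
print(f"Implied annual tests from frequent testers alone: {total_tests:,.0f}")  # 2,190,000,000

Frequent self-testers alone put the annual test count in the billions, before adding any hospital or clinic testing.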

About 10% of the reported glucose meter adverse events are designated as injury with the rest designated as malfunction.

Published glucose meter evaluations and the MAUDE database are like parallel universes. Published glucose meter evaluations are controlled studies, usually conducted by healthcare professionals, in which results that could be called adverse events occur rarely if at all. The MAUDE database, on the other hand, contains only adverse events, which are unverified and often unverifiable, and the testing behind them is usually performed by lay people with a range of proficiencies.

Another huge difference is that glucose meter evaluations are a tiny sample of the population of glucose meter results. The MAUDE database, on the other hand, is drawn from the entire population of glucose meter results (although values are missing because not all adverse events are reported).

Glucose meter evaluations appear often in the literature. There’s very little discussion about the MAUDE database.


Performance specifications, lawsuits, and irrelevant statistics

March 11, 2018

Readers of this blog know that I’m in favor of specifications that account for 100% of the results. The danger of specifications that cover only 95% or 99% of the results is that errors can occur that cause serious patient harm for assays that meet specifications! Large and harmful errors are rare, certainly occurring in less than 1% of results. But hospitals might not want specifications that account for 100% of results (and remember that hospital clinical chemists populate standards committees). A potential reason is that if a large error occurs, a 95% or 99% specification can be an advantage for the hospital if there is a lawsuit.

I’m thinking of an example where I was an expert witness. Of course, I can’t go into the details, but this was a case where there was a large error, the patient was harmed, and the hospital lab was clearly at fault (in this case, it was a user error). The hospital lab’s defense was that they followed all procedures and met all standards; i.e., sorry, but stuff happens.

As for irrelevant statistics, I’ve heard two well-known people in the area of diabetes (Dr. David B. Sacks and Dr. Andreas Pfützner) say in public meetings that one should not specify glucose meter performance for 100% of the results because one can never prove that the number of large errors is zero.

It is true that one can never prove that the number of large errors is zero, but this does not mean one should abandon a specification for 100% of the results.

Here, I’m reminded of blood gas. For blood gas, obtaining a result is critical. Hospital labs realize that blood gas instruments can break down and fail to produce a result. Since this is unacceptable, one can calculate the failure rate and reduce the risk of no result with redundancy (meaning using multiple instruments). No matter how many instruments are used, the possibility that all instruments will fail at the same time is not zero!
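
Here is a minimal Python sketch of the redundancy arithmetic, assuming independent instrument failures and a made-up per-instrument failure probability:

# Sketch of the redundancy argument: with independent failures, the chance that
# every instrument is down at the same time shrinks rapidly but never reaches zero.
p_fail = 0.01  # assumed probability that a single instrument is down at any given moment
for n_instruments in (1, 2, 3):
    p_all_down = p_fail ** n_instruments
    print(f"{n_instruments} instrument(s): P(no result available) = {p_all_down:.6f}")
# 1 instrument(s): P(no result available) = 0.010000
# 2 instrument(s): P(no result available) = 0.000100
# 3 instrument(s): P(no result available) = 0.000001

The independence assumption is an idealization; correlated failures (for example, a shared power or supply problem) would make the combined probability larger. Either way, redundancy makes the risk small but never zero.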

A final problem with not specifying 100% of the results is that it may lead labs to put less thought into procedures that minimize the risk of large errors.

And in industry (at least at Ciba-Corning) we always had specifications for 100% of the results, as did the original version of the CLSI total error document, EP21-A (this requirement was dropped in the A2 version).