Glucose Meter User Error – it’s not a rare event

December 6, 2018

Glucose meter evaluations, which are tightly controlled studies, are numerous and always show the same results – virtually all of the values are within the A and B zones of a glucose meter error grid. But a recent paper (1) shows that in the real world things are different.

In this paper, nurses and aides in a hospital performed glucose meter tests and repeated a test when they felt that, for some reason, the result was suspect. For example, with the patient right next to the nurse, a glucose value below 40 mg/dL in an asymptomatic patient would be suspect.

The good news is that in this hospital, retesting suspect results is part of the quality plan. Often the retested result was normal (e.g., 100 mg/dL or so higher than the first result). But of interest is the frequency of repeats, which was 0.8% across the more than 600,000 tests in the study. This leads to several comments…

The number of glucose meter tests performed each year is in the billions. Applying the 0.8% suspect rate from this hospital study to that volume implies millions of incorrect results per year.
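
As a back-of-the-envelope sketch of that arithmetic (the annual test volume below is an assumption for illustration, not a figure from the paper):

```python
# Back-of-the-envelope arithmetic behind the claim above.
# The annual worldwide test volume is an assumed, illustrative figure.
annual_tests = 2_000_000_000   # assumed: ~2 billion glucose meter tests/year
suspect_rate = 0.008           # 0.8% repeat rate reported in the hospital study

print(f"Implied suspect results per year: {annual_tests * suspect_rate:,.0f}")
# ~16 million suspect results per year under these assumptions.
```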

User error is not rare!

Typical glucose meter evaluations, in which this problem never appears, are biased: they are not representative of the real world.

The challenge is even greater for lay users performing self-testing, where no nurse is present to flag a suspect result, especially since asymptomatic hypoglycemia can occur.

References

  1. Corl D, Yin T, Ulibarri M, Lien H, Tylee T, Chao J, Wisse BE. What Can We Learn From Point-of-Care Blood Glucose Values Deleted and Repeated by Nurses? Journal of Diabetes Science and Technology 2018;12(5):985–991.

Published in Journal of Diabetes Science and Technology

October 17, 2018


Why the Details of Glucose Meter Evaluations Matters https://doi.org/10.1177/1932296818803113


Parallel universes: Glucose meter evaluations and the FDA MAUDE database

September 11, 2018

I’ve been looking at the FDA adverse event database, MAUDE. For the first 7 months of 2018, there were over 500,000 adverse events across all medical devices, and just over 10,000 adverse events for glucose meters. Other diagnostic tests appear as well, but it’s not surprising that glucose meters dominate, since billions of glucose meter tests are performed each year. Note that one person testing three times daily yields over 1,000 tests per year, and there are millions of people like this.

About 10% of the reported glucose meter adverse events are designated as injury with the rest designated as malfunction.
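
A crude annualization of these figures (the annual test volume is again an assumed, illustrative number) shows how sparse reported events are relative to testing volume:

```python
# Crude annualization of the MAUDE figures quoted above; the test
# volume is an assumed, illustrative number.
ae_7_months = 10_000                      # glucose meter adverse events, Jan-Jul 2018
annual_ae = ae_7_months * 12 / 7          # naive scaling to a full year
annual_tests = 2_000_000_000              # assumed annual glucose meter tests

print(f"Annualized adverse events:   {annual_ae:,.0f}")
print(f"Designated as injury (~10%): {annual_ae * 0.10:,.0f}")
print(f"Reported events per test:    {annual_ae / annual_tests:.1e}")
```

Under these assumptions, roughly one adverse event is reported per 100,000 tests, orders of magnitude below the 0.8% repeat rate in the hospital study above, which is consistent with most problems never reaching MAUDE.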

Published glucose meter evaluations and the MAUDE database are like parallel universes. Published evaluations are controlled studies, usually conducted by healthcare professionals, in which results that could be called adverse events occur rarely if at all. The MAUDE database, on the other hand, contains only adverse events, which are unverified and often unverifiable, and the testing behind them is usually performed by lay people with a range of proficiencies.

Another huge difference is that glucose meter evaluations are a tiny sample of the population of glucose meter results, whereas the MAUDE database draws on the entire population of glucose meter results (although values are missing because not all adverse events are reported).

Glucose meter evaluations appear often in the literature. There’s very little discussion about the MAUDE database.


Performance specifications, lawsuits, and irrelevant statistics

March 11, 2018

Readers of this blog know that I’m in favor of specifications that account for 100% of the results. The danger of specifications that cover only 95% or 99% of the results is that errors causing serious patient harm can occur in assays that meet specifications! Large and harmful errors are rare, certainly less than 1% of results. But hospitals might not want specifications that account for 100% of results (and remember that hospital clinical chemists populate standards committees). A potential reason: if a large error occurs, a 95% or 99% specification can be an advantage for a hospital if there is a lawsuit.

I’m thinking of an example where I was an expert witness. Of course, I can’t go into the details, but this was a case where there was a large error, the patient was harmed, and the hospital lab was clearly at fault (in this case it was a user error). The hospital lab’s defense was that it followed all procedures and met all standards; in effect, sorry, but stuff happens.

As for irrelevant statistics, I’ve heard two well-known people in the area of diabetes (Dr. David B. Sacks and Dr. Andreas Pfützner) say in public meetings that one should not specify glucose meter performance for 100% of the results because one can never prove that the number of large errors is zero.

It is true that one can never prove the number of large errors is zero, but that does not mean one should abandon a specification for 100% of the results.

Here, I’m reminded of blood gas. For blood gas, obtaining a result is critical. Hospital labs realize that blood gas instruments can break down and fail to produce a result. Since this is unacceptable, one can estimate the failure rate and reduce the risk of getting no result through redundancy (using multiple instruments). Yet no matter how many instruments are used, the possibility that all of them fail at the same time is never zero!
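
A minimal sketch of the redundancy arithmetic, assuming independent failures and an illustrative per-instrument failure probability:

```python
# Probability that every blood gas instrument is down at the same time,
# assuming independent failures; p_fail is an illustrative number.
p_fail = 0.01   # assumed chance a single instrument cannot produce a result

for n in range(1, 5):
    print(f"{n} instrument(s): P(no result at all) = {p_fail ** n:.0e}")
# The risk drops rapidly with redundancy but never reaches zero -
# exactly the situation with large errors and a 100% specification.
```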

A final problem with not specifying 100% of the results is that it may lead labs to put less thought into procedures that minimize the risk of large errors.

And in industry (at least at Ciba-Corning) we always had specifications for 100% of the results, as did the original version of the CLSI total error document, EP21-A (this was dropped in the A2 version).


Flash glucose monitoring

February 16, 2018

Here’s an article about flash glucose monitoring, a way for diabetic patients to avoid finger sticks and conventional glucose meters. Now I can understand why other glucose meter companies are trying to get out of the business. This product sounds like a game changer.



An observation from the ATTD glucose Conference

February 14, 2018

The 11th International Conference on Advanced Technologies and Treatments for Diabetes (ATTD) is underway in Vienna, Austria. The abstracts from the conference are available here. Here’s an interesting observation: I searched for the term MARD and found it 48 times, whereas the term error grid was found only 10 times. I published a paper describing problems with the MARD statistic and offering alternatives.
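
For readers unfamiliar with the statistic, MARD is the mean absolute relative difference between meter and reference results; here is a minimal sketch with invented data:

```python
# Minimal MARD (mean absolute relative difference) computation.
# The paired readings below are invented, for illustration only.
meter     = [102.0, 55.0, 210.0, 148.0]   # meter results, mg/dL
reference = [100.0, 60.0, 200.0, 150.0]   # reference method results, mg/dL

mard = sum(abs(m - r) / r for m, r in zip(meter, reference)) / len(meter)
print(f"MARD = {mard:.1%}")
# A single averaged number hides where the errors occur, which is one
# reason an error grid can be more informative than MARD.
```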


Comments about clinical chemistry goals based on biological variation – Revised Feb. 7, 2018

February 5, 2018

There is a recent article which says that measurement uncertainty should contain a term for biological variation, the rationale being that diagnostic uncertainty is caused in part by biological variation. My concern is with how biological variation is turned into goals.

On the Westgard web site there are formulas for converting biological variation into goals, and on another page there is a list of analytes with biological variation entries and total error goals.
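
For context, the formulas usually cited for this conversion (attributed to Fraser; stated here as I understand them, with CV_I the within-subject and CV_G the between-subject biological variation) are:

```latex
I < 0.5\,CV_I, \qquad B < 0.25\,\sqrt{CV_I^2 + CV_G^2}, \qquad TE_a < 1.65\,I + B
```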

Here are my concerns:

  1. There are three basic uses of diagnostic tests: screening, diagnosis, and monitoring. It is not clear to me which of these uses the goals refer to.
  2. Monitoring is an important use of diagnostic tests, and it makes no sense to construct a total error goal for monitoring that takes between-patient biological variation into account. The PSA total error goal is listed at 33.7%. For a patient tested every 3 months after undergoing radiation therapy, a total error goal of 33.7% is too big: for serial values of 1.03, 0.94, 1.02, and 1.33 ng/mL, the last value is within the goal but in reality would be cause for alarm (see the sketch after this list).
  3. The web site listing goals has only one goal per assay. Yet goals often depend on the analyte value, especially for monitoring. For example, the glucose goal is listed at 6.96%. But if one examines a Parkes glucose meter error grid, at 200 mg/dL the error goal that separates harm from no harm is 25%. Hence, the biological variation goal is too small at that level.
  4. The formulas on the web site are hard to believe. For example, I < 0.5 × within-person biological variation. Why 0.5, and why is it the same for all analytes?
  5. Biological variation can be thought of as having two sources of variation, explained and unexplained, much as in a previous entry where the measured imprecision could be not just random error but inflated with biases. Thus, PSA could rise due to asymptomatic prostatitis (a condition that by definition has no symptoms and could be present in a “healthy” cohort). Have explained sources of variation been excluded from the databases? And there can be causes of explained variation other than disease. For example, exercise can cause PSA to rise in an otherwise healthy person.
  6. Biological variation makes no sense for a bunch of analytes. For example, blood lead measures exposure to lead. Without lead in the environment, the blood lead would be zero. Similar arguments apply to drugs of abuse and infectious diseases.
  7. The goals are based on 95% limits from a normal distribution. This leaves up to 5% of results as unspecified. Putting things another way, up to 5% of results could cause serious problems for an assay that meets goals.
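
Here is the sketch promised in point 2: a serial PSA rise that stays inside a 33.7% total error goal yet should alarm a clinician (the baseline definition below is my own simple choice, for illustration):

```python
# Point 2 above: a PSA jump that passes a 33.7% total error goal
# but would be worrying in post-radiation monitoring.
psa  = [1.03, 0.94, 1.02, 1.33]   # ng/mL, serial quarterly results
goal = 0.337                       # total error goal from biological variation

baseline = sum(psa[:-1]) / len(psa[:-1])   # mean of the stable earlier values
rise = (psa[-1] - baseline) / baseline
print(f"Rise of the last value over baseline: {rise:.1%}")
print("Within the 33.7% goal" if rise <= goal else "Exceeds the goal")
# A ~33% rise sits inside the goal, yet clinically it is cause for alarm.
```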