Glucose Meter User Error – it’s not a rare event

December 6, 2018

Glucose meter evaluations, which are tightly controlled studies, are numerous and always show the same results – virtually all of the values are within the A and B zones of a glucose meter error grid. But a recent paper (1) shows that in the real world things are different.

In this paper, nurses and aides in a hospital performed glucose meter tests and repeated a test when they felt that for some reason the result was suspect. Note that in a hospital, with the patient right next to the nurse, a glucose value of less than 40 mg/dL in an asymptomatic patient would be suspect.

The good news is that in this hospital, retesting suspect results is part of the quality plan. Often the retested result was normal (e.g., 100 mg/dL or so higher than the first result). But of interest is the frequency of repeats, which was 0.8% of the more than 600,000 tests in the study. This leads to several comments…

The number of glucose meter tests performed each year is in the billions. The result from this hospital study implies that the number of incorrect results is in the millions.
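To make the arithmetic concrete, here is a minimal sketch. The annual test volume is an assumed round figure (the post says only "billions"), and the study's repeat rate is used as a proxy for the rate of suspect results.

```python
# Back-of-the-envelope arithmetic for the claim above.
repeat_rate = 0.008            # 0.8% repeat (suspect) rate from the study
annual_tests = 2_000_000_000   # assumed: an illustrative "billions" figure

suspect_per_year = repeat_rate * annual_tests
print(f"Implied suspect results per year: {suspect_per_year:,.0f}")
# -> 16,000,000 -- in the millions even at a fraction of this volume
```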

User error is not rare!

Typical glucose meter evaluations, in which this problem never occurs, are biased: they are not representative of the real world.

The challenge is even greater for lay users performing self-testing, where no nurse is present – especially since asymptomatic hypoglycemia can occur.

References

  1. Corl D, Yin T, Ulibarri M, Lien H, Tylee T, Chao J, Wisse BE. What Can We Learn From Point-of-Care Blood Glucose Values Deleted and Repeated by Nurses? Journal of Diabetes Science and Technology 2018;12(5):985–991.

New statistics will not help bad science

November 27, 2018

An article in Clinical Chemistry (1) refers to another article by Ioannidis (2) with a recommendation to change the traditional level of statistical significance for P values from 0.05 to 0.005.

The reasons presented for the proposed change make no sense. Here’s why:

The first limitation is that P values are often misinterpreted …

If people misinterpret P values, then training needs to be improved, not the P value threshold changed!

The second limitation is that P values are overtrusted, even though the P value can be highly influenced by factors such as sample size or selective reporting of data.

Any introductory statistics textbook provides guidance on how to calculate the proper sample size for an experiment. Once again, this is a training issue. The second part of this reason is more insidious. If selective reporting of data occurs, the experiment is biased and no P value is valid!
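As a minimal illustration of the sample-size point (simulated data, not from either article): the same clinically trivial effect can land on either side of any significance threshold, depending only on n.

```python
# Simulated two-group comparison: a fixed, trivial shift of 0.1 SD
# gives very different P values as sample size grows.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
for n in (20, 200, 20_000):
    a = rng.normal(0.0, 1.0, n)   # control group
    b = rng.normal(0.1, 1.0, n)   # group shifted by 0.1 SD
    _, p = stats.ttest_ind(a, b)
    print(f"n={n:>6}  P={p:.4f}")
# Large n pushes P below 0.05 and 0.005 alike; the effect is unchanged.
```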

The third limitation discussed by Ioannidis is that P values are often misused to draw conclusions about the research.

Another plea for training. And how will changing the level of statistical significance prevent wrong conclusions?

Actually, I prefer using confidence limits instead of P values, but they provide no guarantees either. A famous example by Youden showed that for 15 estimates of the astronomical unit made from 1895 to 1961, each confidence interval failed to overlap its predecessor.

References

  1. Hackenmueller SA. What’s the Value of the P Value? Clin Chem 2018;64:1675.
  2. Ioannidis JPA. The proposal to lower P value thresholds to .005. JAMA 2018;319:1429–30.

Published in Journal of Diabetes Science and Technology

October 17, 2018

Why the Details of Glucose Meter Evaluations Matters https://doi.org/10.1177/1932296818803113


Questionable advice about when to report or suppress results from hemolyzed samples

September 27, 2018

I had occasion to read an article with advice about reporting or suppressing results when samples are hemolyzed. The article is here (available without a subscription).

Since the article was published in Clin Chem Lab Med, I sent some thoughts to that journal as a letter. It was rejected the very next day, with the reviewer not very happy with my letter. So in this blog entry, I will summarize the advice of this committee (EFLM) and my comments.

These days, most analyzers automatically produce a measurement of hemolysis – the H index. If one has performed an experiment to determine the effect of hemolysis on an analyte, one can subsequently approximate that effect by knowing the H index of a sample.
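A hypothetical sketch of that approximation follows. The H-index levels and biases below are made up for illustration; a real mapping would come from one's own interference experiment.

```python
# Hypothetical mapping from H index to expected hemolysis bias,
# built from an (illustrative) interference experiment.
import numpy as np

h_levels = np.array([0, 50, 100, 200, 400])      # H index levels tested
bias_pct = np.array([0.0, 1.5, 3.2, 7.0, 16.0])  # observed % bias at each level

def expected_bias(h_index: float) -> float:
    """Linearly interpolate the expected % bias for a sample's H index."""
    return float(np.interp(h_index, h_levels, bias_pct))

print(expected_bias(150))  # ~5.1% bias for a sample between tested levels
```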

EFLM suggests that if the bias caused by hemolysis interference is greater than the RCV (reference change value), then the result should be suppressed. One would know this based on the reported H index. (The RCV represents a clinically significant error and if exceeded may cause the clinician to make an incorrect medical decision).

Here’s the problem – illustrated for the case where the assay CV=3% and the RCV=15%.

EFLM is allowing the hemolysis interference to take up 100% of the allowable error. But any result has at least three error sources: bias, interference(s), and imprecision. If one assumes a result has no bias and its only interference, due to hemolysis, is just below 15%, imprecision will still be present and will cause the result (on average) to exceed the RCV 50% of the time. To guarantee that hemolysis interference did not cause the result to exceed the RCV 95% of the time, the allowable limit for hemolysis interference would need to be 10.1% (15 − (1.64 × 3)). And that 10.1% is still optimistic because it assumes zero bias and zero interference from other sources.

Manufacturers would never allocate 100% of the allowable error to an interfering substance. Rather, manufacturers allocate the allowable error among the various error sources so that the total error stays within goals. A rule of thumb that we used was that any interference must have an effect less than 50% of the total allowable error.
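Here is the allocation arithmetic above as a small sketch, assuming normally distributed imprecision (the post rounds the one-sided 95% z-value to 1.64):

```python
# Allowable hemolysis interference once imprecision is accounted for.
assay_cv = 3.0   # % CV of the assay
rcv = 15.0       # % reference change value (total allowable error here)
z95 = 1.645      # one-sided 95% normal quantile (post uses 1.64)

max_interference = rcv - z95 * assay_cv
print(f"95% limit for hemolysis interference: {max_interference:.1f}%")  # ~10.1%

# Rule of thumb from the text: cap any single interference at
# 50% of the total allowable error.
print(f"50% rule-of-thumb limit: {0.5 * rcv:.1f}%")  # 7.5%
```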

So that’s a simple comment and I don’t see why the reviewer got so upset.


More wanderings in the MAUDE (FDA adverse event) database

September 17, 2018

I examined the first 7 months of 2018. Scaling to a predicted annual number of events, there are:

993,183 adverse events across all medical devices

9,713 deaths (0.98% of the above adverse events) as the event type.
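The annualization behind these figures, as a small sketch (the 7-month observed count is back-calculated for illustration):

```python
# Scale 7 observed months to a 12-month prediction.
predicted_annual = 993_183          # predicted annual adverse events
observed_7_months = predicted_annual * 7 / 12
deaths = 9_713

print(f"Implied events in first 7 months: {observed_7_months:,.0f}")  # ~579,357
print(f"Deaths as fraction of events: {deaths / predicted_annual:.2%}")  # 0.98%
```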

Most of the events were cardiac related. There were a handful of events related to diagnostic assays but short of reading through almost 10,000 records, I may have missed a bunch. There did not seem to be any deaths related to glucose meters.


Comment about the interferences AACC webinar

September 13, 2018

I listened to the AACC webinar on interferences presented by David Grenache. He did a great job. But one thing that was presented struck me – the CLSI EP07-A2 definition of interferences –“a cause of clinically significant bias in the measured analyte concentration due to the effect of another component or property of the sample.” (Note – I have corrected a typo in Grenache’s presentation – no biggie).

This definition is bogus and conflicts with the VIM (although the VIM definition is in tortured English – for example, “interference” never appears; what’s defined is “influence quantity”).

Clearly, if a candidate interference substance can be detected, meaning that its presence affects the result, then the substance interferes.

Whether the interfering substance causes a clinically significant bias is a different question and shouldn’t be used as the definition.


Parallel universes: Glucose meter evaluations and the FDA MAUDE database

September 11, 2018

I’ve been looking at the FDA adverse event database – MAUDE. For the first 7 months of 2018, there are over 500,000 adverse events, across all medical devices. There are just over 10,000 adverse events for glucose meters. Other diagnostic tests appear as well but it’s not surprising that glucose meters dominate, since billions of glucose meter tests are performed each year. Note that one person testing themselves three times daily yields over 1,000 tests per year and there are millions of people like this.

About 10% of the reported glucose meter adverse events are designated as injury with the rest designated as malfunction.

Published glucose meter evaluations and the MAUDE database are like parallel universes. Published glucose meter evaluations are controlled studies, usually conducted by healthcare professionals, in which results that could be called adverse events occur rarely if at all. The MAUDE database, on the other hand, contains only adverse events – unverified and often unverifiable – and the testing behind them is usually performed by lay people with a range of proficiencies.

Another huge difference is that glucose meter evaluations are a tiny sample of the population of glucose meter results, whereas the MAUDE database draws on the entire population of glucose meter results (although values are missing because not all adverse events are reported).

Glucose meter evaluations appear often in the literature. There’s very little discussion about the MAUDE database.