March 12, 2017
Looking at my blog stats, I see that a lot of people are reading the total analytical error vs. total error post. So, below are the slides from a talk, The “total” in total error, that I gave at a conference in Antwerp in 2016. The slides have been updated. Because they were designed to accompany a talk, the slides are not as effective on their own.
January 27, 2017
I’ve been interested in glucose meter specifications and evaluations. Sources of glucose meter performance specifications include:
FDA glucose meter guidance
glucose meter error grids
There are various ways to evaluate glucose meter performance. What I wished to look at was the combination of sigma metric analysis and the error grid. I found this article about the sigma metric analysis and glucose meters.
After looking at this, I understand how to construct these so-called method decision charts (MEDx). But here’s my problem: in these charts, the total allowable error (TEa) is a constant, which is not the case for error grids, where TEa changes with the glucose concentration. Moreover, TEa is not even symmetric at a given glucose concentration, because the “A” zone limits of an error grid (I’m using the Parkes error grid) are not symmetrical.
I have simulated data with a fixed bias and constant CV throughout the glucose meter range. But with a changing TEa, the estimated sigma also changes with glucose concentration.
So I’m not sure how to proceed.
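To make the problem concrete, here is a minimal sketch of the sigma metric calculation when TEa varies with glucose concentration and is asymmetric. The A-zone limits below are hypothetical placeholder values, not the published Parkes error grid coordinates, and the bias and CV are arbitrary.

```python
# Sketch: how the sigma metric varies when TEa depends on glucose level.
# The allowable-error limits below are hypothetical placeholders,
# NOT the published Parkes error grid A-zone coordinates.

def sigma_metric(tea_pct, bias_pct, cv_pct):
    """Sigma = (TEa - |bias|) / CV, all expressed in percent."""
    return (tea_pct - abs(bias_pct)) / cv_pct

# Hypothetical asymmetric allowable error (percent) at several glucose levels:
# (lower-side TEa, upper-side TEa), asymmetric as in an error grid A zone.
tea_by_glucose = {50: (20.0, 25.0), 100: (15.0, 20.0), 300: (12.0, 15.0)}

bias_pct, cv_pct = 2.0, 5.0  # fixed bias and constant CV across the range

for glucose, (tea_lo, tea_hi) in tea_by_glucose.items():
    s_lo = sigma_metric(tea_lo, bias_pct, cv_pct)
    s_hi = sigma_metric(tea_hi, bias_pct, cv_pct)
    print(f"{glucose} mg/dL: sigma = {s_lo:.1f} (low side), {s_hi:.1f} (high side)")
```

Even with fixed bias and constant CV, the estimated sigma differs by concentration and by direction, which is exactly why a single MEDx chart doesn’t fit the error grid case.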
November 15, 2016
Recently, I alerted readers to the fact that the updated FDA POCT glucose meter standard no longer specifies limits for 100% of the results.
So I submitted a letter to the editor of the Journal of Diabetes Science and Technology.
The letter has been accepted, although it seemed to take a long time for the editors to decide. I can think of several possible reasons:
1. I was just impatient; the time to reach a decision was average.
2. The editors were exceptionally busy due to their annual conference, which just took place.
3. By waiting until the conference, the editors could ask the FDA if they wanted to respond to my letter.
I’m hoping that #3 is the reason so I can understand why the FDA changed things.
September 24, 2016
The picture shows a possible stranded sea creature at low tide, taken from 3,500 feet.
I was talking to a colleague about a project I’m working on and in order to explain, I asked him if he was familiar with glucose error grids. He said no, which surprised me. My colleague has been developing immunoassay reagents for a long time and while development and not evaluation is his specialty, as part of development, one must prove precision and accuracy.
I took this to mean that the concept of error grids is not that well known outside of diabetes. This is unfortunate, since error grids make more sense to me than total error, measurement uncertainty, or separate requirements for precision and accuracy.
July 29, 2016
The picture is an aerial view, taken from a Cirrus, of Foxwoods casino in CT.
MU = measurement uncertainty, TE = total error, EG = error grid
I recently read a blog entry by the Westgards (always interesting); here are my thoughts.
To recall, MU is a “bottom-up” way to model error in a clinical chemistry assay, TE uses a “top-down” model, and EG has no model at all.
MU is a bad idea for clinical chemistry. Here are the problems with MU:
- Unless things have changed, MU doesn’t allow for bias in its modeling process. If a bias is found, it must be eliminated. Yet in the real world, there are many uncorrected biases in assays (calibration bias, interferences).
- The modeling required by MU is not practical for a typical clinical chemistry lab. One can view the modeling as having two major components: the biological equations that govern the assay (e.g., Michaelis Menten kinetics) and the instrumentation (e.g., the properties of the syringe that picks up the sample). Whereas clinical chemists may know the biological equations, they won’t have access to the manufacturer’s instrumentation data.
- The math required to perform the analysis is extremely complicated.
- Some of the errors that occur cannot be modeled (e.g., user errors, manufacturing mistakes, software errors).
- The MU result is typically reported as the location of 95% of the results. But one needs to account for 100% of the results.
- So some people compute the SD for a batch of controls and call this MU, which is a joke.
TE has been much more useful than MU, but still has problems:
- The Westgard model for TE doesn’t account for some important errors, such as patient interferences.
- Other errors that occur (e.g., user errors, manufacturing mistakes, software errors) may be captured by TE but the potential for these errors are often excluded from experiments (e.g., users in these experiments are often more highly trained than typical users).
- Although both MU and TE rely on experimental data, TE relies solely on an experiment (method comparison or quality control). There are likely to be biases in the experiment, which will cause TE to be underestimated (see the previous point).
- The TE result is typically reported as the location of 95% of the results. But one needs to account for 100% of the results.
- TE is often oversold; e.g., the sigma value is said to guarantee a specific (numeric) quality for patient results. This is untrue, since TE underestimates the true total error.
- TE fails to account for the importance of bias. That is, results can be within TE goals but still cause harm due to bias. Both Klee and I have shown this. For example, bias in a glucose meter can be within TE goals yet still cause diabetic complications.
I favor error grids.
- Error grids still have the problem that they rely on experimental data and hence there may be bias in the studies.
- But 100% of the results are accounted for.
- There is the notion of increasing patient harm in EG. With either MU or TE, there is only the concept of harm vs. no harm. This is not the real world. A glucose meter result of 95 mg/dL (truth = 160 mg/dL) causes much less harm than a glucose meter result of 350 mg/dL (truth = 45 mg/dL).
- EG simply plots test vs. reference. There are no models (though there is no way to tell the source of an error).
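The model-free character of an error grid can be sketched in a few lines: each (reference, test) pair is simply classified by where it falls, with zones of increasing harm. The zone boundaries below are simplified hypothetical percent limits for illustration, not the published Parkes grid, which uses piecewise-linear boundaries that vary with concentration.

```python
# Sketch: an error grid needs no error model -- just classify each
# (reference, test) pair by where it falls. The zone limits here are
# simplified hypothetical percentages, NOT the published Parkes grid.

def zone(reference, test):
    """Assign an increasing-harm zone from the relative error (hypothetical limits)."""
    err_pct = abs(test - reference) / reference * 100
    if err_pct <= 15:
        return "A"   # no effect on clinical action
    elif err_pct <= 50:
        return "B"   # little or no effect on outcome
    else:
        return "C+"  # increasing likelihood of harm

pairs = [(160, 95), (100, 108), (45, 350)]  # (truth, meter result) in mg/dL
for ref, test in pairs:
    print(f"truth {ref}, result {test} -> zone {zone(ref, test)}")
```

Note that 100% of the results land in some zone; nothing is set aside as an outlier, and the graded zones capture the idea that not all errors are equally harmful.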
April 13, 2016
I need to speak up about a summary made by Jim Westgard of my talk at the Quality in the Spotlight conference in Antwerp.
- Jim referred to my presentation, where I said the “total” in total analytical error leaves out too many errors. He suggested I was referring to pre-pre-analytical errors, among others, but I definitely stated that analytical errors are also left out of the Westgard total error model. (I’m not sure what a pre-pre-analytical error is anyway.) It is true that some rare errors, such as software errors or manufacturing mistakes, will be very difficult for a lab to detect.
- Jim suggested that total analytical error (e.g., the Westgard model) is broader than separate estimates of precision and bias. I don’t see how.
- He said that labs don’t want more complex equations and models. I’m sure this is true, but what our company did was even simpler than the Westgard model: we simply looked at the differences (candidate minus comparison method) for all of the data. There were no models. The data were ranked to show the error limits achieved by 95% and 100% of the data. Not being constrained by models keeps things simple.
- Jim said that ISO 15189 does not require measurement uncertainty to include pre- and post-analytical error. That may be, but it doesn’t make it right.
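The ranked-difference analysis described above can be sketched in a few lines. The data here are simulated for illustration (the distribution, seed, and planted outliers are my assumptions, not real method comparison data):

```python
# Sketch of the model-free analysis described above: rank the absolute
# differences (candidate minus comparison) and read off the error limits
# achieved by 95% and 100% of the data. Simulated data for illustration.
import random

random.seed(1)
diffs = [random.gauss(0.5, 2.0) for _ in range(1000)]  # typical differences
diffs += [15.0, -12.0]                                 # a couple of gross outliers

ranked = sorted(abs(d) for d in diffs)
limit_95 = ranked[int(0.95 * len(ranked)) - 1]  # limit containing 95% of results
limit_100 = ranked[-1]                          # limit containing 100% of results

print(f"95% of results within +/-{limit_95:.1f}; 100% within +/-{limit_100:.1f}")
```

The 100% limit is driven entirely by the largest errors, which is the point: a 95%-only summary hides exactly the results most likely to harm a patient.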
April 5, 2016
Some more thoughts …
Anyone who’s ever looked at CAP summary statistics knows that CAP deletes outlier data as part of their process. One can view this in several ways…
From a statistics standpoint, it makes sense because the main parameter of interest is imprecision, which would be inflated by outlier data.
But the original goal of six sigma (which also requires a precise estimate of imprecision) was to predict defects, so why in the world would you delete the very defects (outliers) that you wish to predict? From that standpoint, the analysis is biased.
Moreover, the outliers could in fact reflect real analytical problems; whatever their cause, they are still problems, and because outliers are by definition large errors, these values could be associated with serious patient harm.
So this is another reason to favor error grids – which always include all data.