A comment about terms used in EP5-A3 and bias

December 11, 2014

 


I have the new version of EP5-A3, CLSI’s document about precision. Having been kicked out of CLSI, I was loath to buy it, but if one is consulting in evaluating assays, it’s required.

As I read through the document, one note on terminology (this change dates to the A2 version): the term “total precision” has been dropped and replaced with either “within-laboratory precision” or “within-device precision.”

All three terms have the same issue, and the replacements do not solve it. The problem is that whichever term one uses implies that all sources of error have been accounted for, and they have not. In an experiment such as EP5, the goal is to randomly sample sources of imprecision from the population of interest. Take reagents, for example. The study may use one reagent lot or, as is common in industry, three or more lots. But these lots are not a random sample from the population of reagent lots. That is of course impossible: for a new assay, often only a few lots have been made, and future lots don’t exist. Are future lots the same? That’s hard to say, as raw materials change, vendors and manufacturing procedures change, QC procedures for approving lots change, personnel change, and so on.

The same can be said for the 20 days. Say the assay’s projected life is 10 years. One cannot randomly select 20 days from all future 20-day sequences in those 10 years; one is stuck with the 20 days at hand.

Formally, these are forms of bias and thus the EP5 protocol is biased. This is not some bad, deliberate bias – it is unavoidable bias, but bias nevertheless.

So in reality, the EP5 experiment estimates precision based only on the error sources that are allowed into the experiment. Whatever term is used – “total precision,” “within-laboratory precision,” or “within-device precision” – it is likely that imprecision has been underestimated.
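To make the underestimation concrete, here is a minimal simulation sketch of my own (not part of EP5), assuming a simple additive variance-components model in which the long-term imprecision includes a reagent lot-to-lot component that a single-lot, 20-day experiment never samples. All variance components below are hypothetical numbers chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical variance components (all SDs in concentration units)
sd_repeat = 1.0   # within-run (repeatability)
sd_day = 0.5      # between-day
sd_lot = 0.8      # reagent lot-to-lot -- NOT sampled in a single-lot study

# The "true" long-term SD includes every component
true_sd = (sd_repeat**2 + sd_day**2 + sd_lot**2) ** 0.5

# Simulate a 20-day x 2-replicate EP5-style experiment on ONE reagent lot
n_days, n_reps = 20, 2
lot_effect = rng.normal(0, sd_lot)          # constant for the entire study
day_effects = rng.normal(0, sd_day, n_days)
results = (lot_effect
           + day_effects[:, None]
           + rng.normal(0, sd_repeat, (n_days, n_reps)))

# Pooled SD of all results: the constant lot effect adds no scatter,
# so it is invisible to the estimate
estimated_sd = results.std(ddof=1)

print(f"true long-term SD:              {true_sd:.2f}")
print(f"estimated within-laboratory SD: {estimated_sd:.2f}")
```

With these made-up numbers, the experiment can at best see √(1.0² + 0.5²) ≈ 1.12, while the true long-term figure is √1.89 ≈ 1.37; the shortfall is exactly the unsampled lot component.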


More glucose fiction

December 1, 2014


In the latest issue of Clinical Chemistry, there are two articles (1-2) about how much glucose meter error is ok and an editorial (3) that discusses these papers. Once again, my work on this topic has been ignored (4-12). Ok, to be fair, not all of my articles are directly relevant, but the gist of my articles – particularly reference 10 – is that if you use the wrong model, the outcome of a simulation is not relevant to the real world.

How are the authors’ models wrong?

In paper #1, the authors state: “The measurement error was assumed to be uncorrelated and normally distributed with zero mean…”

In paper #2, the authors state: “We ignored other analytical errors (such as nonlinear bias and drift) and user errors in this model.”

In both papers, the objective is to state a maximum glucose error that will be medically acceptable. But since the modeling omits errors that occur in the real world, the results and conclusions are unwarranted.
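To see why the omission matters, here is a hypothetical sketch of my own (not from either paper): the same nominal meter precision produces a very different rate of large errors once a rare gross error – a stand-in for user error – is mixed in. The error rates, CVs, and the ±20 mg/dL limit are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
glucose = 100.0   # mg/dL, a single hypothetical level

# Model A: zero-mean, uncorrelated normal error only (the papers' assumption)
err_normal = rng.normal(0, 0.05 * glucose, n)   # 5% CV

# Model B: the same normal error plus a rare gross (e.g., user) error
gross = (rng.random(n) < 0.002) * rng.normal(0, 0.5 * glucose, n)
err_mixed = err_normal + gross

limit = 20.0   # hypothetical allowable error, mg/dL
for name, err in (("normal only", err_normal), ("with gross errors", err_mixed)):
    pct = np.mean(np.abs(err) > limit)
    print(f"{name:18s}: {pct:.4%} of results exceed +/-{limit:.0f} mg/dL")
```

Under the normal-only model, a result beyond ±20 mg/dL is a 4 SD event and essentially never happens, so the model pronounces the meter safe; the small admixture of gross errors raises that rate by more than an order of magnitude, and it is exactly those results that harm patients.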

Ok, here’s a thought, people – instead of simulations based on the wrong model, why not construct simulations based on actual glucose evaluations? An example of such a study is: Brazg RL, Klaff LJ, Parkin CG. Performance variability of seven commonly used self-monitoring of blood glucose systems: clinical considerations for patients and providers. J Diabetes Sci Technol. 2013;7:144-152. Given sufficient method comparison data, one could construct an empirical distribution of differences and randomly sample from it, as in the sketch below.
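Here is a minimal sketch of that resampling idea, assuming paired meter and reference values from a method comparison are already in hand. The arrays below are placeholders I generated so the code runs; they are not data from the Brazg study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder paired results (mg/dL) standing in for a real method comparison;
# in practice, load meter and reference values from an actual evaluation.
reference = rng.uniform(40, 400, 500)
meter = (reference
         + rng.normal(0, 8, 500)                               # routine scatter
         + (rng.random(500) < 0.01) * rng.normal(0, 60, 500))  # rare outliers

# The empirical error distribution: the observed differences, kept as-is --
# skew, outliers, and all, with no distributional assumption imposed
differences = meter - reference

def simulated_reading(true_value, size=1):
    """Add errors drawn (with replacement) from the observed differences."""
    return true_value + rng.choice(differences, size=size, replace=True)

print(simulated_reading(120.0, size=5))
```

Feeding such resampled errors into an insulin-dosing simulation would test the dosing rule against the errors meters actually produce, rather than the errors a normal distribution says they should produce.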

And finally, I’m sick of seeing the Box quote (reference 3): “Essentially, all models are wrong, but some are useful.” Give it a rest – it doesn’t apply here.

 

  1. Wilinska ME, Hovorka R. Glucose Control in the Intensive Care Unit by Use of Continuous Glucose Monitoring: What Level of Measurement Error Is Acceptable? Clin Chem 2014;60:1500-1509.
  2. Van Herpe T, De Moor B, Van den Berghe G, Mesotten D. Modeling of Effect of Glucose Sensor Errors on Insulin Dosage and Glucose Bolus Computed by LOGIC-Insulin. Clin Chem 2014;60:1510-1518.
  3. Boyd JC, Bruns DE. Performance Requirements for Glucose Assays in Intensive Care Units. Clin Chem 2014;60:1463-1465.
  4. Krouwer JS. Wrong thinking about glucose standards. Clin Chem 2010;56:874-875.
  5. Krouwer JS, Cembrowski GS. A review of standards and statistics used to describe blood glucose monitor performance. J Diabetes Sci Technol 2010;4:75-83.
  6. Krouwer JS. Analysis of the Performance of the OneTouch SelectSimple Blood Glucose Monitoring System: Why Ease of Use Studies Need to Be Part of Accuracy Studies. J Diabetes Sci Technol 2011;5:610-611.
  7. Krouwer JS. Evaluation of the Analytical Performance of the Coulometry-Based Optium Omega Blood Glucose Meter: What Do Such Evaluations Show? J Diabetes Sci Technol 2011;5:618-620.
  8. Krouwer JS. Why specifications for allowable glucose meter errors should include 100% of the data. Clin Chem Lab Med 2013;51:1543-1544.
  9. Krouwer JS. The new glucose standard, POCT12-A3, misses the mark. J Diabetes Sci Technol 2013;7:1400-1402.
  10. Krouwer JS. The danger of using total error models to compare glucose meter performance. J Diabetes Sci Technol 2014;8:419-421.
  11. Krouwer JS, Cembrowski GS. Acute Versus Chronic Injury in Error Grids. J Diabetes Sci Technol 2014;8:1057.
  12. Krouwer JS, Cembrowski GS. The chronic injury glucose error grid: a tool to reduce diabetes complications. J Diabetes Sci Technol, in press (available online).